
5G: what is it? How does it work?

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

5G is the fifth generation of standards for mobile networks. Although this technology has fueled many societal debates on its environmental impact, possible health effects, and usefulness, here we will focus on the technological aspects.

How does 5G work? Is it a true technological disruption or simply an improvement on past generations?

Back to the Past

Before examining 5G in detail, let’s take a moment to consider the previous generations. The first (1G) was introduced in the 1980s and, unlike the following generations, was an analogue system. The primary application was car telephones.

2G was introduced in 1992, with the transition to a digital system and telephones that could make calls and send short messages (SMS). This generation also enabled the first very low-speed data transmissions, at speeds comparable to those of the first modems used for internet access.

2000 to 2010 was the 3G era. The main improvement was faster data transmission, reaching a rate of a few megabits per second with 3G⁺, allowing for smoother internet browsing. This era also brought the arrival of touch screens, causing data traffic and use of the networks to skyrocket.

Then, from 2010 until today, we transitioned to 4G, with much faster speeds of 10 megabits per second, enabling access to streaming videos.

Faster and Faster

Now we have 5G. With the same primary goal of accelerating data transmission, we should be able to reach an average speed of 100 megabits per second, with peaks at a few gigabits per second under ideal circumstances (10 to 100 times faster than 4G).

This is not a major technological disruption, but an improvement on the former generation. The technology is based on the same principle as 4G: the same waveforms and the same transmission principle will be used. That principle is called OFDM (orthogonal frequency-division multiplexing) and enables parallel transmission: through mathematical processing, it carries a large number of transmissions on closely spaced frequencies, so more information can be transmitted at once. With 4G, we were limited to 1,200 parallel transmissions; with 5G, we will reach 3,300, with a greater speed for each transmission.
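To get an order of magnitude from those figures, the sketch below multiplies the number of parallel transmissions by an assumed modulation efficiency and symbol rate. The subcarrier counts come from the article; the modulation orders and symbol rates are simplified assumptions for illustration, not official 3GPP parameters, and coding overhead, carrier aggregation and MIMO are ignored.

```python
# Rough, illustrative comparison of OFDM parallelism in 4G vs 5G.
# Assumed values: 64-QAM (6 bits/symbol) and 14,000 OFDM symbols/s per
# subcarrier for 4G; 256-QAM (8 bits/symbol) and 28,000 symbols/s for 5G.

def peak_rate(subcarriers, bits_per_symbol, symbols_per_second):
    """Very rough peak rate of one carrier, ignoring coding and overhead."""
    return subcarriers * bits_per_symbol * symbols_per_second

lte_rate = peak_rate(1_200, 6, 14_000)  # 4G: 1,200 parallel transmissions
nr_rate = peak_rate(3_300, 8, 28_000)   # 5G: 3,300 parallel transmissions

print(f"4G order of magnitude: {lte_rate / 1e6:.0f} Mbit/s")
print(f"5G order of magnitude: {nr_rate / 1e6:.0f} Mbit/s")
```

Real peak rates then climb toward the multi-gigabit figures quoted above through wider carriers, carrier aggregation and multiple MIMO layers.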

Initially, 5G will complement 4G: a smartphone will be connected to 4G and transmission with 5G will only occur if a high speed is necessary and, of course, if 5G coverage is available in the area.

A more flexible network

The future network will be configurable, and therefore more flexible. Until now, dedicated hardware has been used to operate the networks. For example, the location databases needed to reach a mobile subscriber were dedicated equipment built by telecom manufacturers.

In the long term, the 5G network will make much greater use of computer virtualization technologies: the location database will be a little like an extremely secure web server that can run on one or several PCs. The same will be true for the various controllers that guarantee proper data routing when subscribers move to different areas of the network. The advantage is that the operator will be able to start up additional virtual machines, for example to adapt to increased demand from users in certain areas or at certain times, and, conversely, reduce capacity when demand is lower.

It will therefore be possible to reconfigure a network when there is a light load (at night for example) by combining controllers and databases in a limited number of control units, thus saving energy.

New antennas

As we have seen, 5G technology is not very different from the previous one. It could even have been deployed on the same frequencies as those used for 3G networks.

The operators and government agencies that allocate frequencies chose to use other frequencies. This choice serves several purposes: it satisfies an ever-growing demand for speed and does not penalize users who would like to continue using older generations. Accommodating the increase in traffic requires the Hertzian spectrum (i.e., frequencies) dedicated to mobile networks to be increased. This is only possible with higher frequency ranges: 3.3 GHz, coming very soon, and likely 26 GHz in the future.

Finally, bringing new technology into operation requires a test and fine-tuning phase before the commercial launch. Transitioning to 5G on a band currently used for other technology would significantly reduce the quality perceived by users (temporarily for owners of 5G telephones, definitively for others) and create many dissatisfied customers.

There is no need to increase the number of antenna sites in order to transmit on the new frequencies, but new antennas must be added to existing masts. These new antennas are made up of a large number of small antenna elements and, thanks to signal-processing algorithms, provide more directive coverage that can be finely controlled. The benefit is more efficient transmission in terms of speed and energy.

For a better understanding, we can use the analogy of flashlights and laser pointers. The flashlight, representing the old antennas, sends out light in all directions in a diffuse manner, consuming a large amount of electricity and lighting only a relatively short distance. The laser, on the contrary, uses less energy to send light much farther, but in a very narrow beam. Regardless of the antenna technology, the maximum power of the electromagnetic field produced in any direction will not be allowed to exceed the limits set for health reasons.
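To put a rough figure on this “laser pointer” effect, here is a minimal sketch of how the beam of a simple antenna array narrows as the number of elements grows. The uniform linear array, half-wavelength spacing, broadside steering and element counts are illustrative assumptions, not the design of an actual 5G panel.

```python
import numpy as np

# Array factor of a uniform linear array (spacing = half a wavelength),
# beam steered to broadside. More elements -> a narrower, more directive beam.

def array_factor_db(n_elements, angles_rad):
    psi = np.pi * np.sin(angles_rad)  # phase shift between adjacent elements
    steering = np.exp(1j * np.outer(np.arange(n_elements), psi))
    af = np.abs(steering.sum(axis=0)) / n_elements  # normalize peak to 0 dB
    return 20 * np.log10(np.maximum(af, 1e-6))

angles = np.linspace(-np.pi / 2, np.pi / 2, 181)  # one sample per degree
for n in (1, 8, 64):
    gain = array_factor_db(n, angles)
    in_beam = np.degrees(angles[gain > -3])  # angles within 3 dB of the peak
    print(f"{n:3d} element(s): 3 dB beamwidth ~ {in_beam.max() - in_beam.min():.0f} degrees")
```

The single element spreads its power in all directions like the flashlight, while the 64-element array concentrates the same transmit power into a beam a few degrees wide.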

So, if these new antennas consume less energy, is 5G more energy efficient? We might think so, since each transmission of information will consume less energy. Unfortunately, with the growing number of exchanges, the network will consume even more energy overall. Furthermore, the use of new frequencies will necessarily lead to an increase in the operators’ electricity consumption.

New applications

When new technology is launched on the market, it is hard to predict all its applications. They often appear later and are driven by other stakeholders. That said, we can already imagine several possibilities.

5G will allow for a much lower latency between the sending and receiving of data. Take the example of a surgeon operating remotely with a mechanical arm. When the robot touches a part of the body, the operator will almost instantly (within a few milliseconds over a distance of a few kilometers) be able to “feel” the resistance of what he is touching and react accordingly, as if he were operating with his own hands. Low latency is also useful for autonomous cars and remote-controlled vehicles.

In industry, we could imagine connected, automated factories in which numerous machines communicate with one another and with a global network.

5G is also one of the technologies that will enable the development of the internet of things. A city equipped with sensors can better manage a variety of aspects, including public lighting, the flow of vehicles, and garbage collection. Electricity can also be better controlled, with consumption adapted to production in real time across many small, interconnected units, forming what is known as a smart grid.

For the general public, the network’s increased speed will allow them to download any file faster and stream premium quality videos or watch them live.

[divider style=”dotted” top=”20″ bottom=”20″]

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

This article from The Conversation is republished here under a Creative Commons license. Read the original article (in French).

 

Photograph of a Regio2n train, the same model as the thermoplastic-resin demonstrator developed by the Destiny project

Trains made with recyclable parts

The Destiny project proposes a new process to manufacture parts for the railway and aeronautical industries. It uses a thermoplastic resin, which enables the materials to be recycled while limiting the pollution associated with manufacturing them.   

 

It is increasingly critical to be able to recycle products so as to lower the environmental cost of their production. The composite parts used in the railway sector have a service life of roughly 30 years and it is difficult and expensive to recycle them. They are mostly made from thermosetting resins — meaning they harden as the result of a chemical reaction that starts during the molding process. Once they have reached a solid state, they cannot be melted again. This means that if the parts cannot be repaired, they are destroyed.

The Destiny project brings together several industrial and academic partners[1] to respond to this need. “The goal is to be able to migrate towards recyclable materials in the railway and aeronautical industries,” says David Cnockaert, head of the project at Stratiforme Industries, a company that specializes in composite materials. Destiny won an Innovation Award at JEC World 2020 for two demonstrators made from recyclable composite materials: a regional train cabin and a railway access door.

A resin that can be melted

“An easy solution would be to use metal, which is easy to recycle,” says David Cnockaert, “but we also have to take into account the requirements of this sector in terms of mass, design, and thermal and acoustic performance.” The purpose of the Destiny project is to develop a solution that can easily be tailored to current products while improving their environmental qualities. The materials used for reference parts in the railway industry are composites, made with a resin and fiberglass or carbon fiber. During the stratification stage, these fibers are impregnated with resin to form the composite material.

“In the Destiny project, we’re developing thermoplastic resins to create these parts,” says Eric Lafranche, a researcher at IMT Lille Douai who is involved in the Destiny project. Unlike thermosetting resins, thermoplastic resins develop plasticity at very high temperatures, and change from a solid to a viscous state. This means that if a train part is too damaged to be repaired, it can be reprocessed so that the recyclates can be reused.

The resin is produced by Arkema in a liquid form, with very low viscosity. “A consistency close to that of water is required to impregnate the fiberglass or carbon fibers during polymerization,” explains Eric Lafranche. “Polymerization takes place directly in the mold and this process allows us to avoid using certain components, namely those that release volatile organic compounds (VOC),” he adds. The production of VOC is therefore greatly limited in comparison with other resins. “People who work in proximity to these VOCs have protective equipment but they are still a source of pollution, so it’s better to be able to limit them,” says Eric Lafranche.

Read more on I’MTech: What is a Volatile Organic Compound (VOC)?

Tailored innovation

This thermoplastic resin provides properties that are virtually equivalent to thermosetting resins, “or even better resilience to shocks,” adds the researcher. In theory, this resin can be recycled infinitely. “In practice, it’s a bit more complicated – it can lose certain properties after being recycled repeatedly,” admits the researcher. “But these are minimal losses and we can mix this recycled material with pure material to ensure equivalent properties,” he explains.

The aim of the project is to be able to offer manufacturers recyclable materials while limiting the pollution associated with their production, but to do so by offering parts that are interchangeable with current ones. “The entire purpose of the project is to provide manufacturers with a solution that is easily accessible, which may therefore be easily tailored to current production lines,” says David Cnockaert. This means that the recyclable parts must comply with the same specifications as their thermosetting counterparts in order to be installed. This solution could also be adapted to other industries in the future. “We could consider applications in the energy, defense or medical industries, for example, for which we also manufacture composite parts,” concludes David Cnockaert.

 

Tiphaine Claveau for I’MTech

[1] Accredited by the i-TRANS and Aerospace Valley competitiveness clusters, the Destiny FUI project brings together Stratiforme Industries, STELIA Composite, ASMA, CANOE, Crépim, ARKEMA and an ARMINES/IMT Lille Douai research team.

 

Temporary tattoos for brain exploration

A team of bioelectronics researchers at Mines Saint-Étienne has developed a new type of electroencephalogram electrode using a temporary tattoo technique. As effective as traditional electrodes, but much more comfortable, they can provide extended recordings of brain activity over several days. 

 

The famous decalcomania transfer technique – made popular in France by the Malabar chewing gum brand in the 1970s – has recently come back into fashion with temporary tattoos. But it does not serve solely to provide fun for people of all ages. A new use has been developed with the invention of temporary tattoo electrodes (TTE), designed to record electrophysiological signals.

Originally developed to pick up heart (electrocardiogram, ECG) and muscle (electromyogram, EMG) signals, the technique has been refined to reach the Holy Grail of bioelectronics: the brain. “Electroencephalographic (EEG) signals are the hardest to record since their amplitudes are lower and there is more background noise, so it was a real challenge for us to create flexible epidermal electronic devices that are as effective as standard electrodes,” explains Esma Ismailova, a bioelectronics researcher at Mines Saint-Étienne.

From Pontedera to Saint-Étienne

The process for printing tattoo electrodes was developed by an Italian team led by Francisco Greco at the Italian Institute of Technology in Pontedera. The next step, towards preclinical application, was carried out at the Saint-Étienne laboratory. Laura Ferrari, a PhD student who worked on TTEs with Francisco Greco for her thesis, chose to carry out postdoctoral research with Esma Ismailova in light of her experience in the field of wearable connected electronics. In 2015, the Mines Saint-Étienne team had developed a connected textile, derived from a technique used to print on kimonos, intended to record an electrocardiogram on a moving person with fewer artifacts than traditional ECG electrodes.

The sensors of the tattoo electrodes, like those of the textile electrodes, are made of semi-conductive polymers. These organic compounds, which were the topic of the 2000 Nobel Prize in Chemistry, can act as transistors and offer new possibilities in the field of surface electronics. The conductive polymer used is called PEDOT:PSS. It is mixed with ink and printed, using a regular inkjet printer, onto paper sold commercially for temporary tattoos. The backing layer is removed at the time of application: a simple wet sponge dissolves the soluble cellulose layer, and the tattoo is transferred to the skin. The materials and techniques used in the microfabrication of TTEs make the process suitable for large-scale, low-cost production.

Esma Ismailova and her team worked extensively on the assembly and interconnection between the electrodes and the electronic recording devices. An extension ending in a plastic clip was manufactured through 3D printing and integrated into the decalcomania. The clip makes it possible to attach a wire to the tattoo: “We had to solve the problem of transmitting the signal in order to transfer the data. Our goal is now to develop embedded electronics along with the electrodes: a microfabricated board laminated onto the patch to collect and store information, or transmit it through a mobile phone,” says the Saint-Étienne researcher.

(a) Multi-layer structure of a TTE allowing the transfer of the top film on which the electrode is printed; (b) exploded view of a TTE with integrated flat connection; (c) TTE transferred to the scalp in the occipital region; (d) close-up of a TTE 12 hours after application, with hair regrowth.

Electrodes that are more comfortable for patients…

The dry electrodes, made of a polymer film one micron thick, conform perfectly to the surface of the skin thanks to their flexibility. This interface makes it possible to dispense with the gel required by traditional electrodes, which dries out after a few hours and renders them inoperative. The transfer must be done on shaved skin, but a study has shown that hair regrowth through the film does not stop the electrodes from working. This means that they can be used for 2 to 3 days, provided they do not get wet, since temporary tattoos are designed to be removed by washing with soap and water. Research is currently underway to replace the standard transfer layer with a more resistant, water-repellent material, which would extend their lifetime.

For Esma Ismailova, this technology is a huge step forward, for both the field of clinical research and patient care: “These new flexible, stretchable, very thin electrodes are ergonomic, conformable, virtually imperceptible, and are therefore much more acceptable for patients, particularly children and elderly people, for whom certain exams can be stressful.”  Indeed, to perform an EEG, patients must normally wear a headset that attaches below the chin, composed of electrodes on which the technician applies gel.

… and more effective for doctors

Another advantage of these temporary tattoo electrodes is their compatibility with magnetoencephalography (MEG). Since they are composed entirely of organic materials and therefore contain no metal, they do not disturb the magnetic field measured by the device and do not create artifacts, so they can be used to perform EEGs coupled with MEG. These two techniques for exploring neuronal activity are complementary and help refine information about the starting point of epileptic seizures, the assessment of certain tumors before their removal, and neurodegenerative diseases.

The clinical assessment of TTEs in the field of neurophysiology was carried out in collaboration with Jean-Michel Badier from the Institut de Neurosciences des Systèmes at the University of Aix-Marseille. This study, recently published in the journal Nature, confirmed that their performance is similar to that of traditional electrodes for standard EEG, and superior for MEG, since they do not produce any shadow areas.

“We’ve done a proof of concept; now we’re trying to develop a device that can be used at home. We plan to do a study with epileptic or autistic children, for whom comfort and acceptability are very important,” explains Esma Ismailova. These tattoo electrodes – like other connected technologies – will generate a great amount of data. For the researcher, “it’s essential to collaborate with researchers who can process this data using specialized algorithms. It’s a new era for smart wearables designed for personalized, preventive medicine, in particular through the early detection of abnormalities.”

 

Sarah Balfagon


20 terms for understanding the environmental impact of digital technology

While digital technology plays an essential role in our daily lives, it is also a big consumer of resources. To explore the compatibility between the digital and environmental transitions, Institut Mines-Télécom and Fondation Mines-Télécom are publishing their 12th annual brochure, entitled Numérique : Enjeux industriels et impératifs écologiques (Digital Technology: Industrial Challenges and Environmental Imperatives). This glossary of 20 terms taken from the brochure provides an overview of some important notions for understanding the environmental impact of digital technology.

 

  1. CSR: Corporate Social Responsibility — A voluntary process whereby companies take social and environmental concerns into account in their business activities and relationships with partners.
  2. Data centers — Infrastructure bringing together the equipment required to operate an information system, such as equipment for data storage and processing.
  3. Eco-design — A way to design products or services by limiting their environmental impact as much as possible, and using as few non-renewable resources as possible.
  4. Eco-modulation — Principle of a financial bonus/penalty applied to companies based on their compliance with good environmental practices. Primarily used in the waste collection and management sector to reward companies that are concerned about the recyclability of their products.
  5. Energy mix — All energy sources used in a geographic area, combining renewable and non-renewable sources.
  6. Environmental responsibility — Behavior of a person, group or company who seeks to act in accordance with sustainable development principles.
  7. Green IT — IT practices that help reduce the environmental footprint of an organization’s operations.
  8. LCA: Lifecycle Analysis — Tool used to assess the overall environmental impact of a product or service throughout its phases of existence, taking into consideration as many of the incoming and outgoing flows of resources and energy as possible over this period.
  9. Mine tailings — The part of the rock that is left over during mining operations since it does not have enough of the target material to be used by industry.
  10. Mining code — Legal code regulating the exploration and exploitation of mineral resources in France, dating from 2011 and based on the fundamental principles of the Napoleonic law of 1810.
  11. Paris Climate Agreement — International climate agreement established in 2015 following negotiations held during the Paris Climate Conference (COP21). Among other things, it sets the objective to limit global warming to 2 degrees by 2100, in comparison to preindustrial levels.
  12. PUE: Power Usage Effectiveness — Ratio of the total energy consumed by a data center to the energy consumed by its servers alone (see the short worked example after this list).
  13. Rare earths — Group of 17 metals, many of which have unique properties that make them widely used in the digital sector.
  14. Rebound effect — Increased use following improvements in environmental performance (reduced energy consumption or use of resources).
  15. Responsible innovation — Way of thinking about innovation with the purpose of addressing environmental or social challenges, while considering the way the innovation itself is sought or created.
  16. RFID: Radio-frequency identification — Very short distance communication method based on micro-antennas in the form of tags.
  17. Salt flat — High salt desert, sometimes submerged in a thin layer of water, containing lithium which is highly sought after to make batteries for electronic equipment.
  18. Virtualization — The act of running an IT resource or service in virtual form, usually through a service provider, in order to save on IT equipment costs.
  19. WEEE: Waste Electrical and Electronic Equipment — All waste from products operated using electrical current and therefore containing electric or electronic components.
  20. 5G Networks — Fifth-generation mobile networks which, following 4G, will increase mobile data speeds and open up new possibilities for using mobile networks in new sectors.
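As a quick illustration of the PUE ratio defined in item 12, here is a minimal worked example; the consumption figures are invented for illustration only.

```python
# Hypothetical annual consumption of one data center, in MWh.
it_equipment_mwh = 8_000      # servers, storage and network gear
total_facility_mwh = 12_400   # IT equipment plus cooling, lighting, power losses

pue = total_facility_mwh / it_equipment_mwh
print(f"PUE = {pue:.2f}")     # 1.55: each kWh of IT work costs 1.55 kWh overall
```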

Crisis management: better integration of citizens’ initiatives

Caroline Rizza, Télécom Paris – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]A[/dropcap]s part of my research into the benefits of digital technologies in crisis management and in particular the digital skills of those involved in a crisis (whether institutions or grassroots citizens), I had the opportunity to shadow the Fire and Emergency Department of the Gard (SDIS) in Nîmes, from 9 to 23 April 2020, during the COVID-19 health crisis.

This immersive investigation enabled me to fine-tune my research hypotheses on the key role of grassroots initiatives in crises, regardless of whether they emerge in the common or virtual public space.

Social media to allow immediate action by the public

So called “civil security” crises are often characterized by their rapidity (a sudden rise to a “peak”, followed by a return to “normality”), uncertainties, tensions, victims, witnesses, etc.

The scientific literature in the field has demonstrated that grassroots initiatives appear at the same time as the crisis in order to respond to it: during an earthquake or flood, members of the public who are present on-site are often the first to help the victims, and after the crisis, local people are often the ones who organize the cleaning and rebuilding of the affected area. During the Nice terror attacks of July 2016, for example, taxi-drivers responded immediately by helping to evacuate the people present on the Promenade des Anglais. A few months earlier, during the Bataclan attacks in 2015, Parisians opened their doors to those who could not go home and used the hashtag #parisportesouvertes (parisopendoors). Genoa experienced two rapid and violent floods in 1976 and 2011; on both occasions, young people volunteered to clean up the streets and help shop owners and inhabitants in the days that followed the event.

These initiatives have increased since social media became part of our daily lives, helping them to emerge and get organized online as a complement to the actions that usually arise spontaneously in the field.

My research lies within the field of “crisis informatics”. I am interested in these grassroots initiatives which emerge and are organized through social media, as well as the issues surrounding their integration into crisis management. How can we describe these initiatives? What mechanisms are they driven by? How does their creation change crisis management? Why should we integrate them into crisis response?

Social media as an infrastructure for communication and organization

Since 2018, I have been coordinating the ANR MACIV project (Citizen and volunteer management: the role of social media in crisis scenarios). We have been looking at all the aspects of social media in crisis management: the technological aspect with the tools which can automatically supply the necessary information to institutional players; the institutional aspect of the status of the information coming from social media and its use in the field; the grassroots aspect, linked to the mechanisms involved in the creation and sharing of the information on social media and the integration of grassroots initiatives into the response to the crisis.

We usually think of social media as a means of communication used by institutions (ministries, prefectures, municipalities, fire and emergency services) to communicate with citizens top-down and improve the situational analysis of the event through the information conveyed bottom-up from citizens.

The academic literature in the field of  “crisis informatics” has demonstrated the changes brought by social media, and how citizens have used them to communicate in the course of an event, provide information or organize to help.

On-line and off-line volunteers

We generally distinguish between “volunteers” in the field and online volunteers. As illustrated above, volunteers who are witnesses or victims of an event are often the first to intervene spontaneously, while online volunteers use social media to organize help remotely. This distinction helps us understand how social media have become a means of expressing and organizing solidarity.

It is interesting to note that certain groups of online volunteers are connected through agreements with public institutions and their actions are coordinated during an event. In France, VISOV (international volunteers for virtual operation support) is the French version of the European VOST (Virtual Operations Support Team); but we can also mention other groups such as the WAZE community.

Inform and organize

There is therefore an informational dimension and an organizational dimension to the contribution of social media to crisis management.

Informational in that the content that is published constitutes a source of relevant information to assess what is happening on site: for example, fire officers can use online social media, photos and videos during a fire outbreak, to readjust the means they need to deploy.

And organizational in that the aim is to work together to respond to the crisis.

For example, creating a Wikipedia page about an ongoing event (and clearing up uncertainties), communicating pending an institutional response (Hurricane Irma, Cuba, in 2017), helping to evacuate a place (Gard, July 2019), taking in victims (Paris, 2015; Var, November 2019), or helping to rebuild or to clean a city (Genoa, November 2011).


Screenshot of the Facebook page of VISOV to inform citizens of available accommodation following the evacuation of certain areas in the Var in December 2019. VISOV Facebook page

An increased level of organization

During my immersion within the SDIS of the Gard as part of the management of the COVID-19 crisis, I had the chance to discuss and observe the way in which social media were used to communicate with the public (reminding them of preventative measures and giving them daily updates from regional health agencies), as well as to integrate some grassroots initiatives.

Although the crisis was a health crisis, it was also one of logistics. Many citizens (individuals, businesses, associations, etc.) organized to support the institutions: sewing masks or making them with 3D printers, turning soap production into hand sanitizer production, proposing to translate information on preventative measures into different languages and sharing it to reach as many citizens as possible; these were all initiatives which I came across and which helped institutions organize during the peak of the crisis.


Example of protective visors made using 3D printers for the SDIS 30.

 

The “tunnel effect”

However, the institutional actors I met and interviewed within the framework of the two studies mentioned above (SDIS, Prefecture, Defense and Security Zone, DGSCGC) all highlighted the difficulty of taking account of information shared on social media – and grassroots initiatives – during crises.

The large number of calls surrounding the same event, the excess information to be dealt with and the gravity of the situation mean that the focus has to be on the essentials. These are all examples of the “tunnel effect”, identified by these institutions as one of the main reasons for the difficulty of integrating these tools into their work and these actions into their emergency response.

The information and citizen initiatives which circulate on social media simultaneously to the event may therefore help the process of crisis management and response, but paradoxically, they can also make it more difficult.

Then there is also the spread of rumors and fake news through social media, especially when there is an information gap or contradictory messages about an event (see, for example, the Wikipedia page during the COVID-19 crisis).

How and why should we encourage this integration?

Citizen initiatives have impacted institutions horizontally in their professional practices.

My observation of the management of the crisis within the SDIS 30 enabled me to go one step further and put forward the hypothesis that another dimension is slowing down the integration of these initiatives which emerge in the common or virtual public space: it implies placing the public on the same level as the institution. In other words, these initiatives do not just have a horizontal “impact” on professional practices and their rules (doctrines); integrating them requires the citizen to be recognized as a participant in the management of and response to the crisis.

There is still a prevailing idea that the public needs to be protected, but the current crisis shows that the public also wants to play an active role in protecting itself and others.

The main question that then arises is that of the necessary conditions for this recognition of citizens as participants in the management and response to the crisis.

Relying on proximity

It is interesting to note that at a very local level, integrating the public has not raised problems; on the contrary, it is a good opportunity to diversify initiatives and recognize each of the participants within the region.

However, at a higher level in the operational chain of management, this poses more problems because of the commitment and responsibility of institutions in this recognition.

My second hypothesis is therefore as follows: the close relations between stakeholders within the same territorial fabric allow better familiarity with grassroots players, thereby fostering mutual trust – this trust seems to me to be the key to success and explains the successful integration of grassroots initiatives in a crisis, as illustrated by the VISOV or VOST.

[divider style=”dotted” top=”20″ bottom=”20″]

The original version of this article (in French) was published on The Conversation.
By Caroline Rizza, researcher in information sciences at Télécom Paris.


Datafarm: low-carbon energy for data centers

The start-up Datafarm proposes an energy solution for low-carbon digital technology. Within a circular economy system, it powers data centers with energy produced through methanization, by installing them directly on cattle farms.

 

When you hear about clean energy, cow dung probably isn’t the first thing that comes to mind. But think again! The start-up Datafarm, incubated at IMT Starter, has placed its bets on setting up facilities on farms to power its data centers through methanization. This process generates energy from the breakdown of animal or plant biomass by microorganisms under controlled conditions. Its main advantages are that it recovers waste and lowers greenhouse gas emissions by offering an alternative to fossil fuels. The result is green energy in the form of biogas.

Waste as a source of energy

Datafarm’s IT infrastructures are installed on cattle farms that carry out methanization. About a hundred cows can fuel a 500 kW biogas plant, which corresponds to 30 tons of waste per day (cow dung, waste from milk, plants, etc.). This technique generates a gas, methane, of which 40% is converted into electricity by turbines and 60% into heat. Going beyond the state of the art, Datafarm has developed a process to convert the energy produced through methanization… into cold! This helps respond to the problem of cooling data centers. “Our system allows us to reduce the proportion of electricity needed to cool infrastructures to 8%, whereas 20 to 50% is usually required,” explains Stéphane Petibon, the founder of the start-up.
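As a back-of-the-envelope check on those figures, the sketch below applies the quoted percentages to a 500 kW plant. Assuming that the plant runs continuously at 500 kW of primary power and that the 40/60 split and the cooling shares all apply to that same figure is our simplification, not Datafarm’s.

```python
# Illustrative split of a 500 kW methanization plant's output.
PLANT_POWER_KW = 500
ELECTRIC_SHARE, HEAT_SHARE = 0.40, 0.60

electric_kw = PLANT_POWER_KW * ELECTRIC_SHARE  # power available as electricity
heat_kw = PLANT_POWER_KW * HEAT_SHARE          # power available as heat

# Share of the electricity budget spent on cooling the data center.
cooling_datafarm_kw = electric_kw * 0.08                       # 8% with cold from methanization
cooling_typical_kw = (electric_kw * 0.20, electric_kw * 0.50)  # 20-50% usually required

print(f"Electricity: {electric_kw:.0f} kW, heat: {heat_kw:.0f} kW")
print(f"Cooling: {cooling_datafarm_kw:.0f} kW vs "
      f"{cooling_typical_kw[0]:.0f}-{cooling_typical_kw[1]:.0f} kW typically")
```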

The heat output produced by the data centers is then recovered in an on-site heating system. This allows farmers to dry hay to feed their livestock or produce cheese. Lastly, farms no longer need fertilizer from outside sources since the residue from the methanization process can be used to fertilize the fields. Datafarm therefore operates within a circular economy and self-sufficient energy system for the farm and the data center.

A service to help companies reduce carbon emissions

A mid-sized biogas plant (500 kW) fueling the start-up’s data centers reduces CO2 emissions by 12,000 tons a year – the equivalent of the annual emissions of 1,000 French people. “Our goal is to offer a service for low-carbon, or even negative-carbon, data centers and thereby offset the greenhouse gas emissions of the customers who host their data with us,” says Stéphane Petibon.

Every four years, companies with over 500 employees (approximately 2,400 in France) are required to publish their carbon footprint, which is used to assess their CO2 emissions as part of the national environmental strategy to reduce the impact of companies. The question, therefore, is no longer whether they need to reduce their carbon footprint, but how to do so. As such, the start-up provides an ecological and environmental argument for companies that need to decarbonize their operations. “Our solution makes it possible to reduce carbon dioxide emissions by 20 to 30% through an IT service for which companies’ needs grow every year,” says Stéphane Petibon.

The services offered by Datafarm range from data storage to processing. In order to respond to the majority of customers’ demand for server colocation, the start-up has designed its infrastructures as ready-to-use modules inserted into containers hosted on farms. This agile approach allows it to build its infrastructures according to customers’ needs prior to installation. The data is backed up at another center powered by green energy near Amsterdam (Netherlands).

Innovations on the horizon

The two main selection criteria for farms are the power of their methanization plant and their proximity to a fiber network. “The French regions have already installed fiber networks across a significant portion of their territories, but these networks have been neglected and are not in service. To activate them, we’re working with the telecom operators who cover France,” explains Stéphane Petibon. The first two infrastructures, in Arzal in Brittany and Saint-Omer in the Nord department, meet all the criteria and will be put into service in September and December 2020 respectively. The start-up plans to host up to 80 customers per infrastructure and aims to have seven infrastructures installed throughout France by the end of 2021.

To achieve this goal, the start-up is conducting research and development on network redundancy  issues to ensure service continuity in the event of a failure. It is also working on developing an energy storage technique that is more environmentally-friendly than the batteries used by the data centers.  The methanization reaction can also generate hydrogen, which the start-up plans to store to be used as a backup power supply for its infrastructures. In addition to the small units, Datafarm is working with a cooperative of five farmers to design an infrastructure that will have a much larger hosting and surface capacity than its current products.

Anaïs Culot.

[box type=”info” align=”” class=”” width=””]This article was published as part of Fondation Mines-Télécom’s 2020 brochure series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a brochure, conference-debates, and events to promote science in conjunction with IMT, this series explores the uncertainties and challenges of the digital and environmental transitions.[/box]


Locked-down world, silent cities

Last spring, France decided to impose a lockdown to respond to the health crisis. Our cities came to a standstill and cars disappeared from the streets, allowing residents to rediscover quieter sounds like birdsong. A team of researchers decided to take advantage of this calm that suddenly settled over our lives to better understand the impacts of sound pollution, and created the Silent Cities project.

 

When the lockdown was announced and France was getting ready to come to a halt, a team of researchers launched a collaborative, interdisciplinary project: Silent Cities. The team includes Samuel Challéat,¹ Nicolas Farrugia,² Jérémy Froidevaux³ and Amandine Gasc,4 researchers in environmental geography, artificial intelligence, biology and ecology, respectively. The aim of their project is to record the sounds heard in cities around the world to study the impacts that lockdown and social distancing measures may have on noise pollution. The project also seeks to assess the effects of the variation of our activities on other animal species as our lives gradually return to normal.

Listening to cities

“We had to develop a standard protocol to obtain high-quality recordings for the analyses, but they also had to be light and easy to implement during the lockdown,” explains Nicolas Farrugia, a researcher in machine learning and deep learning at IMT Atlantique. Due to the lockdown, it was not possible to go directly into the field to carry out these acoustic surveys. A collaborative system was set up to allow  a large number of participants around the world to take part in the project by making recordings from their homes. The four researchers provided a collaborative platform so that the participants could then upload their recordings.

Interactive map of the Silent Cities project participants around the world.

The researchers analyzed and compared recordings at different sites using what they call ecoacoustic indices. These are mathematical values: the higher they are, the greater the diversity and complexity of the sounds in an acoustic survey. “Still using an open-access approach, we used a code base to develop an algorithm that automatically calculates these ecoacoustic indices in order to catalogue our recordings,” explains Nicolas Farrugia.
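To give an idea of what such an index can look like, here is a minimal sketch that computes one simple ecoacoustic measure, spectral entropy, from a WAV file. The choice of index and the file name are illustrative assumptions; the Silent Cities pipeline relies on a broader set of published indices.

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

def spectral_entropy(wav_path):
    """Close to 0 for a pure tone, close to 1 for broadband (diverse, complex) sound."""
    rate, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    _, _, spec = signal.spectrogram(audio, fs=rate, nperseg=1024)
    psd = spec.mean(axis=1)                 # average power per frequency bin
    p = psd / psd.sum()                     # normalize to a probability distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return entropy / np.log2(len(p))        # scale to the 0-1 range

print(spectral_entropy("recording_2020-04-01_08h.wav"))  # hypothetical file name
```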

“The goal is to run audio-tagging algorithms to automatically recognize and tag the different sounds heard in a recording,” Nicolas Farrugia adds. This makes it possible to obtain a fairly accurate identification of sound sources, indicating, for example, the presence of a car, a raven’s caw or a conversation between several people in a sound survey.

This type of algorithm, based on deep neural networks, has become increasingly popular in recent years. For acoustic ecologists, they provide recognition that is relatively accurate and, more importantly, multi-target: the algorithm is able to look for many different sounds at the same time in order to tag all the acoustic surveys. “We can also use them as a filter if we want to find all the recordings where we hear a raven. That could be useful for measuring the appearance of a species, by visualizing the time, date or location,” says Nicolas Farrugia.
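The “filter by tag” use just described could look like the sketch below, where the tagger’s output is hard-coded as a stand-in; the recording names, labels and scores are invented, not project data.

```python
# Hypothetical audio-tagger output: recording id -> {label: probability}.
tags = {
    "paris_2020-04-01_07h": {"Car": 0.12, "Bird vocalization": 0.81, "Crow/raven": 0.66},
    "paris_2020-04-01_18h": {"Car": 0.74, "Speech": 0.58},
    "brest_2020-04-02_07h": {"Crow/raven": 0.91, "Wind": 0.40},
}

def recordings_with(label, threshold=0.5):
    """Return the recordings in which `label` is detected with enough confidence."""
    return sorted(rec for rec, scores in tags.items()
                  if scores.get(label, 0.0) >= threshold)

print(recordings_with("Crow/raven"))
# -> ['brest_2020-04-02_07h', 'paris_2020-04-01_07h']
```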

Artificial intelligence also helps to estimate how frequent different categories of sounds are — automobile traffic, for example — and to visualize their increase or decrease. During the lockdown, the researchers clearly observed a drop in automobile traffic and now expect to see it rise again as our lives gradually return to normal. What interests them is being able to visualize how this may disturb the behavior of other animal species.

What changes?

“Some studies have shown that in urban environments, birds can change the frequency or the time of day at which they communicate, due to ambient noise,” says Nicolas Farrugia. The sound of human activities, which saturates the urban environment, can for example make it difficult for certain species to reproduce. “That said, it’s hard to talk about causality since, in normal times, we can’t listen to urban ecosystems without the contribution of human activities.” It is therefore usually difficult for eco-acoustics researchers to fully understand the biodiversity of our cities.

In this respect, the Silent Cities project provides an opportunity to directly study the variation in human activity and how it impacts ecosystems. Some of the measures put in place to respond to the health crisis could subsequently be promoted for ecological reasons. One such example is cycling, which is now being encouraged through financial assistance for repairing old bicycles and the creation of new cycle paths. Another example is initiatives to establish staggered working hours, which would also limit the associated noise pollution. One of the possible outcomes of the project is to inform discussions about how urban environments should be organized.

“Samuel Challéat, the researcher who initiated this project, works on light pollution and what can be done to limit artificial light,” he adds. For example, like the “green and blue belts” that seek to preserve so-called “ordinary” biodiversity, including in urban environments, he is currently working on an emerging planning tool, the “black belt,” which aims to restore the nocturnal ecological continuity harmed by artificial light. Since we know that the sounds created by human activities disturb certain ecological processes, this reasoning on ecological continuity could be transferred to the field of eco-acoustics, where the challenge would be to maintain or restore spaces free from noise pollution. The data and results of the Silent Cities project could help provide insights in this area.

By Tiphaine Claveau

 

¹Samuel Challéat, Environmental Geography, University of Toulouse 2, CNRS, GEODE (guest researcher), Toulouse, France

²Nicolas Farrugia, Machine Learning & Deep Learning, IMT Atlantique, CNRS, Lab-STICC, Brest, France

³Jérémy Froidevaux, Conservation Biology, University of Bristol, School of Biological Sciences, Bristol, UK

4Amandine Gasc, Conservation Ecology, Aix Marseille University, Avignon University, CNRS, IRD, IMBE, Marseille, France

 


Data sharing, a common European challenge

Promoting data sharing between economic players is one of Europe’s major objectives via its digital governance strategy. To accomplish this, there are two specific challenges to be met. Firstly, a community must be created around data issues, bringing together various stakeholders from multiple sectors. Secondly, the technological choices implemented by these stakeholders must be harmonised.

 

‘If we want more efficient algorithms, with qualified uncertainty and reduced bias, we need not only more data, but more diverse data’, explains Sylvain Le Corff. This statistics researcher at Télécom SudParis thus raises the whole challenge around data sharing. This need applies not only to researchers. Industrial players must also strengthen their data with that from their ecosystem. For instance, an energy producer will benefit greatly from industrial data sharing with suppliers or consumer groups, and vice versa. A car manufacturer will become all the more efficient with more data sources from their sub-contractors.

The problem is that this sharing of data is far from a trivial operation. The reason lies in the numerous technical solutions that exist to produce, store and use data. For a long time, the over-riding idea among economic players was to exploit their data themselves, and each organisation therefore made its own choices in terms of architecture, format or data-related protocols. An algorithm developed to exploit data sets in a specific format cannot use data packaged in another format. This then calls for a major harmonisation phase.

‘This technical aspect is often under-estimated in data sharing considerations’, Sylvain Le Corff comments. ‘Yet we are aware that there is a real difficulty with the pre-treatment operation to harmonise data.’ The researcher quotes the example of automatic language analysis, a key issue for artificial intelligence, which relies on the automatic processing of texts from multiple sources: raw texts, texts generated from audio or video documents, texts derived from other texts, etc. This is the notion of multi-modality. ‘The plurality of sources is well-managed in the field, but the manner in which we handle this multi-modality can vary within the same sector.’ Two laboratories or two companies will therefore not harmonise their data in the same way. In order to work together, they absolutely need to go through this tedious pre-treatment, which can hamper collaboration.
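A toy example of this pre-treatment step, for two hypothetical partners publishing the same kind of measurements in different shapes (the file names, column names and units are all invented):

```python
import pandas as pd

# Map two differently shaped data sets onto one shared schema before analysis.
COMMON_SCHEMA = ["timestamp", "site_id", "energy_kwh"]

def harmonise_partner_a(path):
    df = pd.read_csv(path)                      # columns: date, site, conso_wh
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["date"], dayfirst=True),
        "site_id": df["site"].astype(str),
        "energy_kwh": df["conso_wh"] / 1000.0,  # Wh -> kWh
    })[COMMON_SCHEMA]

def harmonise_partner_b(path):
    df = pd.read_json(path)                     # columns: ts, plant, kwh
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["ts"], unit="s"),
        "site_id": df["plant"].astype(str),
        "energy_kwh": df["kwh"],
    })[COMMON_SCHEMA]

combined = pd.concat([harmonise_partner_a("partner_a.csv"),
                      harmonise_partner_b("partner_b.json")], ignore_index=True)
```

Multiply this by dozens of partners, formats and sector ontologies, and the cost of the harmonisation phase becomes clear.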

A European data standard

Olivier Boissier, a researcher in artificial intelligence and inter-operability at Mines Saint-Étienne, adds another factor to this issue: ‘The people who help to produce or process data are not necessarily data or AI specialists. In general, they are people with high expertise in the field of application, but don’t always know how to open or pool data sets.’ Given such technical limitations, a promising approach consists in standardising practices. This task is being taken on by the International Data Spaces Association (IDSA), whose role is to promote data sharing on a global scale, and more particularly in Europe.

Contrary to what one might assume, the idea of a data standard does not mean imposing a single norm on data format, architecture or protocol. Each sector has already worked on ontologies to help facilitate dialogue between data sets. ‘Our intention is not to provide yet another ontology’, explains Antoine Garnier, project head at IDSA. ‘What we are offering is more of a meta-model which enables a description of data sets based on those sector ontologies, and with an agnostic approach in terms of the sectors it targets.’

This standard could be seen as a list of conditions on which to base data use. To summarise the conditions in IDSA’s architectural model, ‘the three cornerstones are the inter-operability, certification and governance of data’, says Antoine Garnier. Thanks to this approach, the resulting standard serves as a guarantee of quality between players. It enables users to determine rapidly whether an organisation fulfils these conditions and is thus trustworthy. This system also raises the question of security, which is one of the primary concerns of organisations who agree to open their data.

Europe, the great lake region of data?

While developing a standard is a step forward in technical terms, it remains to be put into actual use. For this, its design must incorporate the technical, legal, economic and political concerns of European data stakeholders – producers and users alike. Hence the importance of creating a community consisting of as many organisations as possible. In Europe, since 2020, this community has had a name: Gaia-X, an association of players, bringing together IMT and IDSA in particular, that aims to structure efforts around the federation of data, software and infrastructure clouds. Via Gaia-X, public and private organisations aim to roll out standardisation actions, using the IDSA standard among others, but may also implement research, training or awareness activities.

‘This is such a vast issue that if we want to find a solution, we must approach it through a community of experts in security, inter-operability, governance and data analysis’ Olivier Boissier points out, emphasising the importance of dialogue between specialists around this topic. Alongside their involvement in Gaia-X, IMT and IDSA are organising a winter school from 2 to 4 December to raise awareness among young researchers of data-sharing issues (see insert below). With the support of the German-French Academy for the Industry of the Future, it will provide the keys to understanding technical and human issues, through concrete cases. ‘Within the research community, we are used to taking part in conferences to keep up to date on the state of play of our field, but it is difficult to have a deeper understanding of the problems faced by other fields’, Sylvain Le Corff admits. ‘This type of Franco-German event is essential to structuring the European community and forming a global understanding of an issue, by taking a step back from our own area of expertise.’ 

The European Commission has made no secret of its ambition to create a space for the free circulation of data within Europe. In other words, a common environment in which personal and confidential data would be secured, but also in which organisations would have easy access to a significant amount of industrial data. To achieve this idyllic scenario of cooperation between data players, the collective participation of organisations is an absolute prerequisite. For academics, the communitarian approach is a core practice and does not represent a major challenge. For businesses, however, there remains a certain number of stakeholders to win over. The majority of major industries have understood the benefits of data sharing, ‘but some companies still see data as a monetizable war treasure that they must avoid sharing’, says Antoine Garnier. ‘We must take an informative approach and shatter preconceived ideas.’

Read on I’MTech: Data sharing: an important issue for the agricultural sector

What about non-European players? When we speak about data sharing, we systematically refer to the cloud, a market cornered by three American players, Amazon, Microsoft and Google, behind which we find other American stakeholders (IBM and Oracle) and a handful of Chinese interests such as Alibaba and Tencent. How do we convince these ‘hyper-scalers’ (the term refers to their ability to scale up to meet growing demand, regardless of the sector) to adopt a standard which is not their own, when they own the technology upon which the majority of data use is based? ‘Paradoxically, we are perhaps not such bad news for them’, Antoine Garnier assures us. ‘Along with this standard, we are also offering a form of certification. For players suffering from a negative image, this allows them to demonstrate compliance with the rules.’

This standardisation strategy also impacts European digital sovereignty and the transmission of its values. In the same way as Europe succeeded in imposing a personal data protection standard in the 2010s with the formalisation of the GDPR, it is currently working to define a standard around industrial data sharing. Its approach to this task is identical, i.e. to make standardisation a guarantee of security and responsible management. ‘A standard is often perceived as a constraint, but it is above all a form of freedom’ concludes Olivier Boissier. ‘By adopting a standard, we free ourselves of the technical and legal constraints specific to each given use.’

[box type=”info” align=”” class=”” width=””]From 2 to 4 December: a winter school on data sharing

Around the core theme of Data Analytics & AI, IMT and TU Dortmund are organising a winter school on data sharing for industrial systems, from 2 to 4 December 2020, in collaboration with IDSA, the German-French Academy for the Industry of the Future and with the support of the Franco-German University. Geared towards doctoral students and young researchers, its aim is to open perspectives and establish a state of play on the question of data exchange between European stakeholders. Through the participation of various European experts, this winter school will examine the technical, economic and ethical aspects of data sharing by bringing together the field expertise of researchers and industrial players.

Information and registration

[/box]


DAGOBAH: Tables, AI will understand

Human activities produce massive amounts of raw data presented in the form of tables. In order to understand these tables quickly, EURECOM and Orange are developing DAGOBAH, a semantic annotation platform. It aims to develop a generic solution that can optimize AI applications such as personal assistants, and facilitate the management of complex data sets of any company.

 

On a day-to-day basis, online keyword searches often suffice to make up for our thousands of memory lapses, clear up any doubts we may have or satisfy our curiosity. The results even anticipate our needs by offering more information than we asked for: a singer’s biography, a few song titles, upcoming concert dates etc. But have you ever wondered how the search engine always provides an answer to your questions? In order to display the most relevant results, computer programs must understand the meaning and nuances of data (often in the form of tables) so that they can answer users’ queries. This is one of the key goals of the DAGOBAH platform, created through a partnership between EURECOM and Orange research teams in 2019.

DAGOBAH’s aim is to automatically understand the tabular data produced by humans. Since this type of data lacks explicit context – compared to a text – understanding it depends on the reader’s knowledge. “Humans know how to detect the orientation of a table, the presence of headers or merged rows, relationships between columns, etc. Our goal is to teach computers to make such natural interpretations,” says Raphaël Troncy, a data science researcher at EURECOM.

The art of leveraging encyclopedic knowledge

After identifying a table’s form, DAGOBAH tries to understand its content. Take two columns, for example. The first lists names of directors and the second, film titles. How does DAGOBAH go about interpreting this data set without knowing its nature or content? It performs a semantic annotation, which means that it effectively applies a label to each item in the table. To do so, it must determine the nature of a column’s content (directors’ names etc.) and the relationship between the two columns. In this case: director – directed – film. But an item may mean different things. For example, “Lincoln” refers to a last name, a British or American city, the title of a Steven Spielberg film etc. In short, the platform must resolve any ambiguity about the content of a cell based on the overall context.

To achieve its goal, DAGOBAH searches existing encyclopedic knowledge bases (Wikidata, DBpedia), in which knowledge is formalized and associated with attributes: “Wes Anderson” is associated with “director.” To process a new table, DAGOBAH compares each item against these bases and proposes candidate attributes: “film title”, “city” etc. At this stage they remain only candidates. For each column, the candidates are then pooled and put to a majority vote, and the column’s type is inferred with a varying degree of confidence.
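To make this lookup-and-vote step concrete, here is a minimal Python sketch. The `lookup_candidates` helper and its tiny in-memory “knowledge base” are illustrative stand-ins for real queries to Wikidata or DBpedia, and the voting rule is a simplified reading of the approach described above, not DAGOBAH’s actual code.

```python
from collections import Counter

# Illustrative stand-in for a real knowledge-base query (Wikidata, DBpedia).
TOY_KB = {
    "Wes Anderson": ["film director", "person"],
    "Steven Spielberg": ["film director", "person"],
    "Lincoln": ["film", "city", "family name"],
}

def lookup_candidates(cell_value):
    """Return candidate types for one cell (hypothetical helper)."""
    return TOY_KB.get(cell_value, [])

def annotate_column(cells):
    """Infer a column's type by majority vote over its cells' candidates."""
    votes = Counter()
    for cell in cells:
        votes.update(lookup_candidates(cell))
    if not votes:
        return None, 0.0
    best_type, count = votes.most_common(1)[0]
    # Confidence: share of cells whose candidates support the winning type.
    return best_type, count / len(cells)

print(annotate_column(["Wes Anderson", "Steven Spielberg", "Lincoln"]))
# -> ('film director', 0.66...): most cells point to a column of directors.
```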

However, there are limitations to this method when it comes to complex tables. Beyond applications for the general public, industrial data may contain statistics related to business-specific knowledge or highly specialized scientific data that is difficult to identify.

Neural networks to the rescue  

To reduce the risk of ambiguity, DAGOBAH uses neural networks and a word-embedding technique. The principle: represent the content of each cell as a vector in a multidimensional space. Within this space, the vectors of semantically similar words lie close together; visually speaking, the directors cluster in one region and the film titles in another. Applying this principle in DAGOBAH rests on the assumption that the items in a column must be similar enough to form a coherent whole. “To remove ambiguity between candidates, categories of candidates are grouped together in vector space. The problem is then to select the most relevant group in the context of the given table,” explains Thomas Labbé, a data scientist at Orange. This method outperforms a simple search with a majority vote when little information is available about a table’s context.
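The sketch below illustrates that intuition with invented embedding vectors: for an ambiguous cell, the candidate whose vector lies closest to the rest of the column is kept. The entity labels and vectors are made up for the example; in practice the embeddings would come from a pre-trained model and the candidates from the knowledge-base lookup above.

```python
import numpy as np

# Made-up embedding vectors; real ones would come from a pre-trained model.
EMB = {
    "Moonrise Kingdom (film)": np.array([0.20, 0.90, 0.10]),
    "Jaws (film)":             np.array([0.30, 0.80, 0.10]),
    "Lincoln (film)":          np.array([0.25, 0.85, 0.15]),
    "Lincoln (city)":          np.array([0.90, 0.10, 0.70]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(column_candidates):
    """For each cell, keep the candidate closest to the centroid of the other
    cells' candidates: a column is assumed to be semantically coherent."""
    chosen = []
    for i, candidates in enumerate(column_candidates):
        others = [EMB[c] for j, cs in enumerate(column_candidates)
                  if j != i for c in cs]
        centroid = np.mean(others, axis=0)
        chosen.append(max(candidates, key=lambda c: cosine(EMB[c], centroid)))
    return chosen

column = [["Moonrise Kingdom (film)"],
          ["Jaws (film)"],
          ["Lincoln (film)", "Lincoln (city)"]]
print(disambiguate(column))  # "Lincoln" is resolved to the film, not the city.
```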

However, one of the drawbacks of using deep learning is the lack of visibility into what happens inside the neural network. “We change the hyperparameters, turning them like oven dials to obtain better results. The process is highly empirical and takes a long time, since we repeat the experiment over and over again,” explains Raphaël Troncy. The approach is also computationally expensive, and the teams are working on scaling it up; Orange’s dedicated big data infrastructures are a major asset here. Ultimately, the researchers want an end-to-end approach that is generic enough to meet the needs of highly diverse applications.

Towards industrial applications

The semantic interpretation of tables is a goal but not an end. “Working with EURECOM allows us to have almost real-time knowledge about the latest academic advances as well as an informed opinion on the technical approaches we plan to use,” says Yoan Chabot, a researcher in artificial intelligence at Orange. DAGOBAH’s use of encyclopedic data makes it possible to optimize the question-answering engines behind the natural-language queries handled by voice assistants. But the holy grail will be to provide an automatic processing solution for business-specific knowledge in an industrial environment. “Our solution will be able to address the private sector market, not just the public sector, for internal use by companies that produce massive amounts of tabular data,” adds Yoan Chabot.

This will be a major challenge, since industry does not have knowledge graphs to which DAGOBAH may refer. The next step will therefore be to succeed in semantically annotating data sets using knowledge bases in their embryonic stages. To achieve their goals, for the second year in a row the academic and industry partners have committed to take part in an international semantic annotation challenge, a very popular topic in the scientific community. For four months, they will have the opportunity to test their approach in real-life conditions and will compare their results with the rest of the international community in November.

To learn more: DAGOBAH: Make Tabular Data Speak Great Again

Anaïs Culot for I’MTech


Smarter models of the ocean

The ocean is a system that is difficult to observe, whose biodiversity and physical phenomena we still know very little about. Artificial intelligence could be an asset in understanding this environment better. Ronan Fablet, a researcher at IMT Atlantique, presents the projects of the new Océanix Research Chair. What is the objective? To use AI to optimize models for observing the ocean.


More than 70% of our planet’s surface is covered by oceans and seas. They make up a colossal system that we know little about. The TARA expedition discovered hundreds of millions of previously unknown species of plankton, while our ability to explore the ocean floor remains limited. The same is true of observing physical phenomena such as the dynamics of ocean currents, at the surface or at depth.

And yet understanding ocean dynamics is essential for understanding its ecology, biodiversity and ecosystems. But unlike the atmosphere, which can be observed directly, the ocean is difficult to study. Space technologies offer some visibility of the ocean surface, including surface currents and winds, but they see nothing below it. In addition, orbiting satellites capture images as they pass over certain areas but cannot observe the entire globe at once, and cloud cover can hide the oceans from view. As for beacons and buoys, some of them collect data down to 2,000 meters, but such measurements remain sparse.

Using AI to see the unknown

“No observation system can provide a high-resolution image of the oceans all around the globe, everywhere and all the time,” says Ronan Fablet, a signal and communications researcher at IMT Atlantique. “And even decades from now I don’t think that will be possible, if we use only physical observations.” The solution is artificial intelligence: AI could make it possible to optimize observation systems and to reconstruct missing data from the data that is observed. Ronan Fablet launched the Océanix chair at IMT Atlantique to investigate this further, in collaboration with numerous institutional partners (CNES, École Navale, ENSTA Bretagne, Ifremer, IRD, ESA) and industrial partners (Argans, CLS, e-odyn, ITE-FEM, MOi, Microsoft, NavalGroup, ODL, OceanNext, Scalian).

Machine learning is a way of estimating a model’s parameters so as to best predict an unknown quantity, for example at some time in the future. This works like image recognition models: “We could feed the model a lot of pictures of dogs, for example, so that it learns to recognize them,” Ronan Fablet explains. “The difference here is that we’re working on systems with larger dimensions, and images of the ocean.”

Take the example of an oil spill. To find out how the oil will drift through the ocean after a spill, researchers use simulations based on physical models related to fluid dynamics. “These models are either difficult to solve or difficult to calibrate, and may require unknown parameters,” he says. Machine learning techniques should make it possible to develop digital models that are more compact, and therefore faster in simulation. This would make it easier to simulate the physical processes involved in the drift of an oil slick.
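As a rough illustration of this surrogate-model idea, the sketch below trains a small neural network to imitate a placeholder physics-based drift simulator; once trained, the network is much cheaper to evaluate. The `expensive_drift_simulator` function, the feature layout and the architecture are all invented for the example and are not the chair’s actual models.

```python
import torch
import torch.nn as nn

def expensive_drift_simulator(state):
    """Placeholder for a physics-based drift model (invented for the example):
    maps [lat, lon, wind_u, wind_v, current_u, current_v] to the next position."""
    lat, lon, wu, wv, cu, cv = state.unbind(-1)
    return torch.stack([lat + 0.03 * wv + 0.1 * cv,
                        lon + 0.03 * wu + 0.1 * cu], dim=-1)

# Small neural surrogate: compact and fast to evaluate once trained.
surrogate = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    states = torch.rand(256, 6)                  # synthetic training states
    targets = expensive_drift_simulator(states)  # "ground truth" from physics
    loss = loss_fn(surrogate(states), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained surrogate can now replace the simulator in fast what-if runs.
```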

Read on I’MTech: Marine oil pollution detected from space

This also applies to obtaining better representations of climate variability, which involves very long timescales. “The objective is to use the data available today, and to couple it with machine learning techniques to find the missing information, to better understand the situation tomorrow.”

A better view of sea routes

Model optimization and data reconstruction are also of great interest for vessel traffic monitoring. Possible applications include detecting abnormal behavior, such as a fishing vessel changing course or stopping, and illegal behavior, such as a vessel entering a restricted area. “It is unimaginable to equip an entire maritime route as we would a motorway to monitor traffic. Observation is therefore based on other space technologies,” says the researcher.

In the field of maritime traffic, there are two main types of information: AIS (Automatic Identification System) signals and satellite imagery. Every shipping vessel is required to transmit an AIS signal so that it can be located, but vessels smuggling cargo usually switch this signal off. Satellite imagery makes it possible, among other things, to check whether the vessels that have sailed through an area were actually transmitting, by comparing the images with the AIS signals received.
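A toy version of this cross-check could look like the following: vessels detected in a satellite image are matched against recent AIS positions, and detections with no AIS report nearby are flagged. The coordinates, the 2 km matching radius and the helper names are purely illustrative.

```python
import math

def close_enough(det, ais, max_km=2.0):
    """Rough distance test between a detection and an AIS position
    (equirectangular approximation; fine for a toy example)."""
    dlat = math.radians(det[0] - ais[0])
    dlon = math.radians(det[1] - ais[1]) * math.cos(math.radians(det[0]))
    return 6371.0 * math.hypot(dlat, dlon) <= max_km

def dark_detections(image_detections, ais_positions):
    """Return detections from satellite imagery with no matching AIS report."""
    return [det for det in image_detections
            if not any(close_enough(det, ais) for ais in ais_positions)]

# Illustrative (lat, lon) positions; real data would come from imagery and AIS feeds.
detections = [(48.38, -4.52), (48.10, -5.30)]
ais_reports = [(48.381, -4.519)]
print(dark_detections(detections, ais_reports))  # -> [(48.1, -5.3)]
```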

This type of study of abnormal behavior based on AIS signals was the subject of the ANR Astrid Sesame project. “We have applied specific neural networks to learning data, particularly in western Brittany, to learn what normal ship behavior is,” says Ronan Fablet. The aim is then to identify behaviors that deviate from the norm, even if they are infrequent or of very low probability. An abnormal event would then trigger an alert in monitoring software, so that operators can determine whether specific action is required, such as sending a patrol.
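The sketch below shows one common way of flagging such deviations with a neural network: an autoencoder trained only on normal AIS tracks reconstructs them well, so a high reconstruction error marks a track worth investigating. The architecture, feature layout and threshold are illustrative assumptions, not the model used in the project above.

```python
import torch
import torch.nn as nn

class TrackAutoencoder(nn.Module):
    """Toy autoencoder over fixed-length AIS feature vectors
    (e.g. a flattened sequence of positions, speeds and courses)."""
    def __init__(self, n_features=40, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flag_abnormal(model, tracks, threshold):
    """Flag tracks the model (trained on normal traffic only) cannot
    reconstruct well, i.e. likely deviations from normal behaviour."""
    model.eval()
    with torch.no_grad():
        errors = ((model(tracks) - tracks) ** 2).mean(dim=1)
    return errors > threshold

# Usage sketch: train the autoencoder on normal tracks (not shown), then
# alerts = flag_abnormal(model, new_tracks, threshold=0.05)
```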

Applications of artificial intelligence in oceanography are developing rapidly today, as the links between neural networks and the mathematical models used in oceanography become more explicit and easier to implement. The Océanix research chair at IMT Atlantique brings together institutions specialized in artificial intelligence and others more focused on oceanography.

Some teams have been working together for several years, such as Ifremer with IMT Atlantique. These studies will make it possible to provide answers where analytical models cannot, and to speed up calculations considerably. Ronan Fablet adds that “the Holy Grail for our teams would be to identify new laws for physical, biogeochemical or ecological processes. To be able to identify new models directly from the data – representations corresponding to a general rule”.


Tiphaine Claveau