planetary boundaries, climate emergency, planet's limits

Covid-19 Epidemic: an early warning signal that we’ve reached the planet’s limits?

Natacha Gondran, Mines Saint-Étienne – Institut Mines-Télécom and Aurélien Boutaud, Mines Saint-Étienne – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

This article was published for the Fête de la Science (Science Festival, held from 2 to 12 October 2020 in mainland France and from 6 to 16 November in Corsica, overseas departments and internationally), in which The Conversation France is a partner. The theme for this year’s festival is “Planète Nature”. Read about all the events in your region.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]W[/dropcap]hen an athlete pushes too close to the body’s limits, the body often reacts with an injury that forces a rest. What athlete who has pushed past those limits has not been reined in by a strain, tendinitis, a broken bone or some other pain forcing them to take it easy?

In ecology, there is also evidence that ecosystems send signals when they are reaching such high levels of deterioration that they cannot perform the regulatory functions that allow them to maintain their equilibrium. These are called early warning signals.

Several authors have made the connection between the Covid-19 epidemic and the decline of biodiversity, urging us to see this epidemic as an early warning signal. Evidence of a link between current emerging zoonoses and the decline of biodiversity has existed for a number of years, and evidence of a link between infectious diseases and climate change is now emerging.

These early warning signals serve as a reminder that the planet’s capacity to absorb the pollution and deterioration to which it is subjected by humanity is not unlimited. And, as is the case for an athlete, there are dangers in getting too close to these limits.

Planetary boundaries that must not be transgressed

For over ten years, scientists from a wide range of disciplines and institutions have been working together to define a global framework for a Safe Operating Space (SOS), characterized by physical limits that humanity must respect or risk seeing conditions on Earth become much less hospitable to human life. This framework has since been expanded and updated through several publications.

These authors highlight the holistic dimension of the “Earth system”. For instance, the alteration of land use and water cycles makes systems more sensitive to climate change. Changes in the three major global regulating systems are well documented: ozone layer degradation, climate change and ocean acidification.

Other cycles, which are slower and less visible, regulate the production of biomass and biodiversity, thereby contributing to the resilience of ecological systems – the biogeochemical cycles of nitrogen and phosphorus, the freshwater cycle, land use changes and the genetic and functional integrity of the biosphere. Lastly, two phenomena present boundaries that have not yet been quantified by the scientific community: air pollution from aerosols and the introduction of novel entities (chemical or biological, for example).

These biophysical sub-systems react in a nonlinear, sometimes abrupt way, and are particularly sensitive when certain thresholds are approached. The consequences of crossing these thresholds may be irreversible and, in certain cases, could lead to huge environmental changes.

Several planetary boundaries have already been transgressed, others are on the brink

According to Steffen et al. (2015), planetary boundaries have already been overstepped in the areas of climate change, biodiversity loss, the biogeochemical cycles of nitrogen and phosphorus, and land use changes. And we are getting dangerously close to the boundary for ocean acidification. As for the freshwater cycle, although W. Steffen et al. consider that the boundary has not yet been transgressed on the global level, the French Ministry for the Ecological and Inclusive Transition has reported that the threshold has already been crossed in France.

These transgressions cannot continue indefinitely without threatening the equilibrium of the Earth system – especially since these processes are closely interconnected. For example, overstepping the boundaries of ocean acidification as well as those of the nitrogen and phosphorus cycles will ultimately limit the oceans’ ability to absorb atmospheric carbon dioxide. Likewise, the loss of natural land cover and deforestation reduce forests’ ability to sequester carbon and thereby limit climate change. But they also reduce local systems’ resilience to global changes.

Representation of the nine planetary boundaries (Steffen et al., 2015):

Rockström, J. et al. (2009). “A safe operating space for humanity”. Nature 461, pp. 472–475


Taking quick action to avoid the risk of drastic changes to biophysical conditions

The biological resources we depend on are undergoing rapid and unpredictable transformations within just a few human generations. These transformations may lead to the collapse of ecosystems, food shortages and health crises that could be much worse than the one we are currently facing. The main factors underlying these planetary impacts have been clearly identified: the increase in resource consumption, the transformation and fragmentation of natural habitats, and energy consumption.

It has also been widely established that the richest countries are primarily responsible for the ecological pressures that have led us to reach the planetary boundaries, while the poorer countries of the Global South are primarily victims of the consequences of these degradations.

Considering the epidemic we are currently experiencing as an early warning signal should prompt us to take quick action to avoid transgressing planetary boundaries. The crisis we are facing has shown that strong policy decisions can be made in order to respect a limit – for example, the number of beds available to treat the sick. Will we be able to do as much when it comes to planetary boundaries?

The 150 citizens of the Citizens’ Convention for Climate have proposed that we “change our law so that the judicial system can take account of planetary boundaries. […] The definition of planetary boundaries can be used to establish a framework for quantifying the climate impact of human activities.” This is an ambitious goal, and it is more necessary than ever.

[divider style=”dotted” top=”20″ bottom=”20″]

Aurélien Boutaud and Natacha Gondran are the authors of Les limites planétaires (Planetary Boundaries) published in May of 2020 by La Découverte.

Natacha Gondran is a research professor in environmental assessment at Mines Saint-Étienne – Institut Mines-Télécom, and Aurélien Boutaud holds a PhD in environmental science and engineering from Mines Saint-Étienne – Institut Mines-Télécom.

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


How will we interact with virtual reality?

The ways we interact with technology change over time and adapt to fit different contexts, bringing new constraints and possibilities. Jan Gugenheimer is a researcher at Télécom Paris and is particularly fascinated by interactions between humans and machines and the way they develop. In this interview, he introduces the questions surrounding our future interactions with virtual reality.


What is the difference between virtual, mixed and augmented reality?

Jan Gugenheimer: The most commonly cited definition presents a spectrum. Reality as we perceive it is situated at the far left and virtual reality at the far right of the spectrum, with mixed reality in between. We can therefore create variations within this spectrum. The more perceptions of actual reality we keep, the further left we sit on the spectrum. As we replace real aspects with artificial information, we move towards virtual reality. Augmented reality is just one point on this spectrum. In general, I prefer to call this spatial computing, with the underlying paradigm that information is not limited to a flat, rectangular screen.

How do you study human-machine interactions?

JG: Our approach is to study what will happen when this technology leaves the laboratory. Computers have already made this transition: they were once large, stationary machines. A lot has changed. We now have smartphones in our pockets at all times. We can see what has changed: the input and the interface. I no longer use a keyboard to enter information into my phone (input); this had to change to fit a new context. User feedback has also changed: a vibrating computer wouldn’t make sense, but telephones need this feature. These changes are made because the context changes. Our work is to explore these interactions and developments and the technology’s design.

Does this mean imagining new uses for virtual reality?

JG: Yes, and this is where our work becomes more difficult, because we have to make predictions. When we think back to the first smartphone, IBM Simon, nobody would have been able to predict what it would become, the future form of the object, or its use cases. The same is true for virtual reality. We look at the headset and think “this is our very first smartphone!” What will it become? How will we use it in our daily lives?

Do you have a specific example of these interactions?

JG: For example, when we use a virtual reality headset, we make movements in all directions. But imagine using this headset in a bus or public transport. We can’t just hit the people around us. We therefore need to adapt the way information is inputted into the technology. We propose controlling the virtual movements with finger and eye movements. We must then study what works best, to what extent this is effective in controlling the movements, the size, and whether the user can perceive his or her movements. This is a very practical aspect but there are also psychological aspects, issues of immersion into virtual reality, and user fatigue.

Are there risks of misuse?

JG: In general, this pertains to design issues since the design of the technology promotes a certain type of use. For current media, there are research topics on what we call dark design. A designer creates an interface taking certain psychological aspects into account to encourage the application to be used in a certain way, of which you are probably not aware. If you use Twitter, for example, you can “scroll” through an infinite menu, and this compels you to keep consuming.

Some then see technology as negative, when in fact it is the way it is designed and implemented that makes it what it is. We could imagine a different Twitter, for example, that would display the warning “you have been connected for a long time, time to take a break”. We wonder what this will look like for spatial computing technology. What is the equivalent of infinite scrolling for virtual reality? We then look for ways to break this cycle and protect ourselves from these psychological effects. We believe that, because these new technologies are still developing, we have the opportunity to make them better.

Should standards be established to require ethically acceptable designs?

JG: That is a big question, and we do not yet have the answer. Should we create regulations? Establish guiding principles? Standards? Raise awareness? This is an open and very interesting topic. How can we create healthier designs? These dark designs can also be used positively, for example to limit toxic behavior online, which makes it difficult to think of standards. I think transparency is crucial. Each person who uses these techniques must make it public, for example, by adding the notice “this application uses persuasive techniques to make you consume more.” This could be a solution, but there are still a lot of open questions surrounding this subject.

Can virtual reality affect our long-term behavior?

JG: Mel Slater, a researcher at the University of Barcelona, is a pioneer in this type of research. To strengthen a sense of empathy, for example, a man could use virtual reality to experience harassment from a woman’s point of view. This offers a new perspective, a different understanding. And we know that exposure to this type of experience can change our behavior outside the virtual reality situation. There is a potential for possibly making us less sexist, less racist, but this also ushers in another set of questions. What if someone uses it for the exact opposite purposes? These are delicate issues involving psychology, design issues and questions about the role we, as scientists and designers, play in the development of these technologies. And I think we should think about the potential negative uses of the technology we are bringing into the world.


Tiphaine Claveau for I’MTech

contact tracing applications

COVID-19: contact tracing applications and a new conversational perimeter

The original version of this article (in French) was published in the quarterly newsletter of the Values and Policies of Personal Information Chair (no. 18, September 2020).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]O[/dropcap]n March 11, 2020, the World Health Organization officially declared that our planet was in the midst of a pandemic caused by the spread of Covid-19. First reported in China, then Iran and Italy, the virus spread quickly and critically wherever it was given the opportunity. In two weeks, the number of cases outside China increased 13-fold and the number of affected countries tripled [1].

Every nation, every State, every administration, every institution, every scientist and every politician, every initiative and every willing public and private actor was called on to think and work together to fight this new scourge.

From the manufacture of masks and respirators to the pooling of resources and energy to find a vaccine, all segments of society joined together as our daily lives were transformed, now governed by a large-scale deadly virus. The very structure of the way we operate in society was adapted in the context of an unprecedented lockdown period.

In this collective battle, digital tools were also mobilized.

As early as March 2020, South Korea, Singapore and China announced the creation of contact tracing mobile applications to support their health policies [2].

Also in March, in Europe, Switzerland reported that it was working on the creation of the “SwissCovid” application, in partnership with EPFL in Lausanne and ETH Zurich. This contact tracing application pilot project was eventually implemented on June 25. SwissCovid is designed to notify users who have been in extended contact with someone who tested positive for the virus, in order to control the spread of the virus. To quote the proponents of the application, it is “based on voluntary registration and subject to approval from Swiss Parliament.” Another noteworthy feature is that it is “based on a decentralized approach and relies on application programming interfaces (APIs) from Google and Apple.”
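The “decentralized approach” mentioned here means that a phone only broadcasts short-lived random identifiers over Bluetooth, while its secret keys stay on the device. The sketch below illustrates the general idea behind DP-3T-style designs; the key size, rotation period and function names are illustrative assumptions, not the actual Google/Apple protocol:

```python
import hashlib
import hmac
import os

ROTATION_PER_DAY = 96  # a fresh ephemeral ID every 15 minutes (assumed period)

def daily_key() -> bytes:
    """A random secret generated on the phone; it leaves the device only
    if the user tests positive and chooses to publish it."""
    return os.urandom(32)

def ephemeral_ids(day_key: bytes) -> list[bytes]:
    """Derive the day's broadcast identifiers from the secret day key.
    Without the key, observers cannot link the IDs to one another."""
    return [
        hmac.new(day_key, f"broadcast-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(ROTATION_PER_DAY)
    ]

# A phone that later downloads an infected user's published day key can
# recompute the identifiers locally and compare them with those it overheard.
key = daily_key()
overheard = set(ephemeral_ids(key)[10:12])   # IDs captured near another phone
recomputed = set(ephemeral_ids(key))         # rebuilt once the key is published
print(len(overheard & recomputed))           # → 2: an exposure match, computed on-device
```

The privacy argument is that matching happens on each phone; the server only ever sees the day keys of users who volunteered them after a positive test.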

France, after initially dismissing this type of technological component through the Minister of the Interior, who stated that it was “foreign to French culture,” eventually changed its position and created a working group to develop a similar app called “StopCovid”.

In total, no fewer than 33 contact tracing apps were introduced around the world [3], with questionable success.

However, many voices in France, Europe and around the world have spoken out against the implementation of this type of system, which could seriously infringe on basic rights and freedoms, especially regarding individual privacy and freedom of movement. Others have voiced concern about the possible control of this personal data by the GAFAM or States that are not committed to democratic values.

The security systems for these applications have also been widely debated and disputed, especially the risks of facing a digital virus, in addition to a biological one, due to the rushed creation of these tools.

The President of the CNIL (National Commission for Information Technology and Civil Liberties), Marie-Laure Denis, echoed the key areas for vigilance aimed at limiting the potentially intrusive nature of these tools.

  • First, through an opinion issued on April 24, 2020 on the principle of implementing such an application, the CNIL stated that, given the exceptional circumstances involved in managing the health crisis, it considered the implementation of StopCovid feasible. However, the Commission expressed two reservations: the application should serve the strategy of the end-of-lockdown plan and be designed in a way that protects users’ privacy [4].
  • Then, in its opinion of May 25, 2020, urgently issued for a draft decree related to the StopCovid mobile app [5], the CNIL stated that the application “can be legally deployed as soon as it is found to be a tool that supports manual health investigations and enables faster alerts in the event of contact cases with those infected with the virus, including unknown contacts.” Nevertheless, it considered that “the real usefulness of the device will need to be more specifically studied after its launch. The duration of its implementation must be dependent on the results of this regular assessment.”

From another point of view, there were those who emphasized the importance of digital solutions in limiting the spread of the virus.

No application can cure or stop Covid. Only medicine and a possible vaccine can do this. However, digital technology can certainly contribute to health policy in many ways, and it seems perfectly reasonable that the implementation of contact tracing applications came to the forefront.

What we wish to highlight here is not so much the arguments for or against the design choices in the various applications (centralized or decentralized, sovereign or otherwise) or even against their very existence (with, in each case, questionable and justified points), but the conversational scope that has played a part in all the debates surrounding their implementation.

While our technological progress is impressive in terms of scientific and engineering accomplishments, our capacity to collectively understand interactions between digital progress and our world has always raised questions within the Values and Policies of Personal Information research Chair.

It is, in fact, the very purpose of its existence and the reason why we share these issues with you.

In the midst of urgent action taken on all levels to contain, manage and–we hope–reverse the course of the pandemic, the issue of contact tracing apps has caused us to realize that the debates surrounding digital technology have perhaps finally moved on to a tangible stage involving collective reflection that is more intelligent, democratic and respectful of others.

In Europe, and in other countries around the world, certain issues have now become part of our shared basis for conversation: personal data protection, individual privacy, the technology to be used, the type of data collected and its anonymization, application security, transparency, the availability of source code, operating costs, whether or not to centralize data, the relationship with private or State monopolies, the need, in duly justified cases, for the digital tracking of populations, and independence from the GAFAM [6] and the United States [7] (or other third States).

In this respect, and despite the altogether recent nature of this situation, our relationship with technological progress has matured: it is no longer deified or vilified, nor a fantasy from an imaginary world obscure to many. Digital technology truly belongs to us. We have collectively made it ours, moving beyond both blissful techno-solutionism and irrational technophobia.

If you are not yet familiar with this specific subject, please reread the Chair’s posts on Twitter dating back to the start of the pandemic, in which we took time to identify all the elements in this conversational scope pertaining to contact tracing.

The goal is not to reflect on these elements as a whole, or the tone of some of the colorful and theatrical remarks, but rather something we see as new: the quality and wealth of these remarks and their integration in a truly collective, rational and constructive debate.

It was about time!

On August 26, 2020, French Prime Minister Jean Castex made the following statement: “StopCovid did not achieve the desired results, perhaps due to a lack of communication. At the same time, we knew in advance that conducting the first full-scale trial run of this type of tool in the context of this epidemic would be particularly difficult.” [8] Given the human and financial investment, it is clear that the cost-effectiveness ratio does not help the case for StopCovid (and similar applications in other countries) [9].

Further revelations followed when the CNIL released its quarterly opinion on September 14, 2020. While, for the most part, the measures implemented (SI-DEP and Contact Covid data files, the StopCovid application) protected personal data, the Commission identified certain poor practices. It contacted the relevant agencies to ensure they would become compliant in these areas as soon as possible.

In any case, the conclusive outcome that can be factually demonstrated is that remarkable progress has been made in our collective level of discussion, our scope for conversation in the area of digital technology. We are asking (ourselves) the right questions. Together, we are setting the terms for our objectives: what we can allow, and what we must certainly not allow.

This applies to ethical, legal and technical aspects.

It’s therefore political.

Claire Levallois-Barth and Ivan Meseguer
Co-founders of the Values and Policies of Personal Information research chair




5G: what is it? How does it work?

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

5G is the fifth generation of standards for mobile networks. Although this technology has fueled many societal debates on its environmental impact, possible health effects, and usefulness, here we will focus on the technological aspects.

How does 5G work? Is it a true technological disruption or simply an improvement on past generations?

Back to the Past

Before examining 5G in detail, let’s take a moment to consider the previous generations. The first (1G) was introduced in the 1980s and, unlike the following generations, was an analogue system. The primary application was car telephones.

2G was introduced in 1992, with the transition to a digital system, and telephones that could make calls and send short messages (SMS). This generation also enabled the first very low-speed data transmissions, at the speed of the first modems for internet access.

2000 to 2010 was the 3G era. The main improvement was faster data transmission, reaching a rate of a few megabits per second with 3G⁺, allowing for smoother internet browsing. This era also brought the arrival of touch screens, causing data traffic and use of the networks to skyrocket.

Then, from 2010 until today, we transitioned to 4G, with much faster speeds of 10 megabits per second, enabling access to streaming videos.

Faster and Faster

Now we have 5G. With the same primary goal of accelerating data transmission, we should be able to reach an average speed of 100 megabits per second, with peaks at a few gigabits per second under ideal circumstances (10 to 100 times faster than 4G).
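Those figures are easier to grasp as download times. A back-of-the-envelope sketch, using the speeds quoted above (the 1 GB file size and the 2 Gb/s peak are illustrative values):

```python
FILE_SIZE_BITS = 8 * 10**9  # a 1 GB file is 8 gigabits

def download_seconds(speed_mbps: float) -> float:
    """Time to fetch the file at a sustained rate given in megabits per second."""
    return FILE_SIZE_BITS / (speed_mbps * 10**6)

print(download_seconds(10))    # 4G-era order of magnitude: 800 s, over 13 minutes
print(download_seconds(100))   # 5G average: 80 s
print(download_seconds(2000))  # 5G peak (2 Gb/s): 4 s
```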

This is not a major technological disruption, but an improvement on the previous generation. The technology is based on the same principle as 4G: the same waveforms and the same transmission principle will be used. That principle, called OFDM (orthogonal frequency-division multiplexing), enables parallel transmission: through mathematical processing, it can carry a large number of transmissions on nearby frequencies at once, so more information can be transmitted simultaneously. With 4G, we were limited to 1,200 parallel transmissions. With 5G, we will reach 3,300, with a greater speed for each transmission.
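The jump from 1,200 to 3,300 parallel transmissions can be put into numbers with a toy model: aggregate throughput scales with the number of subcarriers times the rate each one carries (the per-carrier rate below is an invented placeholder, not a standardized figure):

```python
def total_rate_mbps(subcarriers: int, per_carrier_mbps: float) -> float:
    """Aggregate rate of an OFDM link: the parallel streams simply add up."""
    return subcarriers * per_carrier_mbps

# Subcarrier counts from the article; 0.1 Mb/s per carrier is illustrative.
rate_4g = total_rate_mbps(1200, 0.1)
rate_5g = total_rate_mbps(3300, 0.1)
print(rate_5g / rate_4g)  # about 2.75: extra carriers alone give ~2.75x;
# the rest of 5G's headline gain comes from a higher speed per transmission.
```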

Initially, 5G will complement 4G: a smartphone will be connected to 4G and transmission with 5G will only occur if a high speed is necessary and, of course, if 5G coverage is available in the area.

A more flexible network

The future network will be configurable, and therefore more flexible. Before, dedicated hardware was used to operate the networks. For example, location databases needed to contact a mobile subscriber were manufactured by telecom equipment manufacturers.

In the long term, the 5G network will make much greater use of computer virtualization technologies: the location database will be a little like an extremely secure web server that can run on one or several PCs. The same will be true for the various controllers that guarantee proper data routing when subscribers move to different areas of the network. The advantage is that the operator will be able to start additional virtual machines, for example to adapt to increased demand from users in certain areas or at certain times, and conversely reduce capacity when demand is lower.
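This elasticity is ordinary horizontal scaling: start or stop virtual machines so that capacity tracks demand. A minimal sketch, where the per-unit capacity is an assumed figure for illustration:

```python
import math

CAPACITY_PER_UNIT = 10_000  # subscribers one virtual control unit can serve (assumed)

def units_needed(active_subscribers: int) -> int:
    """Virtual control units to keep running, never dropping below one."""
    return max(1, math.ceil(active_subscribers / CAPACITY_PER_UNIT))

print(units_needed(95_000))  # daytime peak: 10 units running
print(units_needed(4_000))   # night-time load: 1 unit, the rest shut down to save energy
```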

It will therefore be possible to reconfigure a network when there is a light load (at night for example) by combining controllers and databases in a limited number of control units, thus saving energy.

New antennas

As we have seen, 5G technology is not very different from the previous technology. It would even have been possible to deploy it on the same frequencies used by 3G networks.

The operators and government agencies that allocate frequencies chose to use other frequencies. This choice serves several purposes: it satisfies an ever-growing demand for speed and does not penalize users who want to keep using older generations. Accommodating the increase in traffic requires expanding the Hertzian spectrum (i.e., the frequencies) dedicated to mobile networks. This is only possible with higher frequency ranges: 3.3 GHz, coming very soon, and likely 26 GHz in the future.

Finally, bringing new technology into operation requires a test and fine-tuning phase before the commercial launch. Transitioning to 5G on a band currently used for other technology would significantly reduce the quality perceived by users (temporarily for owners of 5G telephones, definitively for others) and create many dissatisfied customers.

There is no need to increase the number of antenna sites in order to transmit on the new frequencies, but new antennas must be added to existing masts. These antennas host a large number of small antenna elements and, thanks to signal processing algorithms, provide more directive coverage that can be closely controlled. The benefit is more efficient transmission in terms of speed and energy.

For a better understanding, we could use the analogy of flashlights and laser pointers. The flashlight, representing the old antennas, sends out light in all directions in a diffuse manner, consuming a large amount of electricity and lighting a relatively short distance. The laser, on the contrary, consumes less energy to focus the light farther away, but in a very narrow line. Regardless of the antenna technology, the maximum power of the electromagnetic field produced in any direction will not be allowed to exceed maximum values for health reasons.

So, if these new antennas consume less energy, is 5G more energy efficient? We might think so, since each transmission of information will consume less energy. Unfortunately, with the increasing number of exchanges, the network will consume even more energy overall. Furthermore, the use of new frequencies will necessarily increase operators’ electricity consumption.

New applications

When new technology is launched on the market, it is hard to predict all its applications. They often appear later and are driven by other stakeholders. That said, we can already imagine several possibilities.

5G will allow for a much lower latency between the sending and receiving of data. Take the example of a surgeon operating remotely with a mechanical arm. When the robot touches a part of the body, the operator will almost instantly (a few ms for a distance of a few kilometers) be able to “feel” the resistance of what he is touching and react accordingly, as if he were operating with his own hands. Low latency is also useful for autonomous cars and remote-controlled vehicles.
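The “few ms” figure can be sanity-checked against physics: over a few kilometers, the signal’s propagation time is only tens of microseconds, so almost the whole latency budget lies in radio access and processing, which 5G aims to shrink to around a millisecond. A quick calculation:

```python
SPEED_OF_LIGHT_M_PER_S = 3.0e8  # propagation speed, roughly c

def propagation_delay_ms(distance_km: float) -> float:
    """One-way signal travel time in milliseconds."""
    return distance_km * 1000 / SPEED_OF_LIGHT_M_PER_S * 1000

print(propagation_delay_ms(5))    # ~0.017 ms over 5 km: negligible
print(propagation_delay_ms(300))  # distance alone only costs a millisecond at ~300 km
```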

For industry, we could imagine connected and automated factories in which numerous machines communicate with each other and with a global network.

5G is also one of the technologies that will enable the development of the internet of things. A city equipped with sensors can better manage a variety of aspects, including public lighting, the flow of vehicles and garbage collection. Electricity can also be better controlled, with consumption adapted to production in real time through many small, interconnected units known as a smart grid.

For the general public, the network’s increased speed will allow them to download any file faster and stream premium quality videos or watch them live.

[divider style=”dotted” top=”20″ bottom=”20″]

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

This article from The Conversation is republished here under a Creative Commons license. Read the original article (in French).


Photograph of a Regio2n train, the same model as the thermoplastic resin demonstrator developed by the Destiny project

Trains made with recyclable parts

The Destiny project proposes a new process to manufacture parts for the railway and aeronautical industries. It uses a thermoplastic resin, which enables the materials to be recycled while limiting the pollution associated with manufacturing them.   


It is increasingly critical to be able to recycle products so as to lower the environmental cost of their production. The composite parts used in the railway sector have a service life of roughly 30 years and it is difficult and expensive to recycle them. They are mostly made from thermosetting resins — meaning they harden as the result of a chemical reaction that starts during the molding process. Once they have reached a solid state, they cannot be melted again. This means that if the parts cannot be repaired, they are destroyed.

The Destiny project brings together several industrial and academic partners [1] to respond to this need. “The goal is to be able to migrate towards recyclable materials in the railway and aeronautical industries,” says David Cnockaert, head of the project at Stratiforme Industries, a company that specializes in composite materials. Destiny won an Innovation Award at JEC World 2020 for two demonstrators made from recyclable composite materials: a regional train cabin and a railway access door.

A resin that can be melted

“An easy solution would be to use metal, which is easy to recycle,” says David Cnockaert, “but we also have to take into account the requirements for this sector in terms of mass, design, and thermal and acoustic aspects.” The purpose of the Destiny project is to develop a solution that can easily be tailored to current products by improving their environmental qualities. The materials used for reference parts in the railway industry are composites, made with a resin and glass or carbon fibers. During the stratification stage, these fibers are impregnated with resin to form composite materials.

“In the Destiny project, we’re developing thermoplastic resins to create these parts,” says Eric Lafranche, a researcher at IMT Lille Douai who is involved in the Destiny project. Unlike thermosetting resins, thermoplastic resins develop plasticity at very high temperatures, and change from a solid to a viscous state. This means that if a train part is too damaged to be repaired, it can be reprocessed so that the recyclates can be reused.

The resin is produced by Arkema in liquid form, with very low viscosity. “A consistency close to that of water is required to impregnate the fiberglass or carbon fibers during polymerization,” explains Eric Lafranche. “Polymerization takes place directly in the mold, and this process allows us to avoid using certain components, namely those that release volatile organic compounds (VOCs),” he adds. The production of VOCs is therefore greatly limited in comparison with other resins. “People who work in proximity to these VOCs have protective equipment, but the compounds are still a source of pollution, so it’s better to be able to limit them,” says Eric Lafranche.

Read more on I’MTech: What is a Volatile Organic Compound (VOC)?

Tailored innovation

This thermoplastic resin provides properties that are virtually equivalent to those of thermosetting resins, “or even better resilience to shocks,” adds the researcher. In theory, this resin can be recycled indefinitely. “In practice, it’s a bit more complicated – it can lose certain properties after being recycled repeatedly,” admits the researcher. “But these are minimal losses, and we can mix this recycled material with pure material to ensure equivalent properties,” he explains.

The aim of the project is to be able to offer manufacturers recyclable materials while limiting the pollution associated with their production, but to do so by offering parts that are interchangeable with current ones. “The entire purpose of the project is to provide manufacturers with a solution that is easily accessible, which may therefore be easily tailored to current production lines,” says David Cnockaert. This means that the recyclable parts must comply with the same specifications as their thermosetting counterparts in order to be installed. This solution could also be adapted to other industries in the future. “We could consider applications in the energy, defense or medical industries, for example, for which we also manufacture composite parts,” concludes David Cnockaert.


Tiphaine Claveau for I’MTech

[1] Accredited by the i-TRANS and Aerospace Valley competitiveness clusters, the Destiny FUI project brings together Stratiforme Industries, STELIA Composite, ASMA, CANOE, Crépim, ARKEMA and an ARMINES/IMT Lille Douai research team.


Temporary tattoos for brain exploration

A team of bioelectronics researchers at Mines Saint-Étienne has developed a new type of electroencephalogram electrode using a temporary tattoo technique. As effective as traditional electrodes, but much more comfortable, they can provide extended recordings of brain activity over several days. 


The famous decalcomania transfer technique – made popular in France by the Malabar chewing gum brand in the 1970s – has recently come back into fashion with temporary tattoos. But it does not serve solely to provide fun for people of all ages. A new use has emerged with the invention of temporary tattoo electrodes (TTEs) designed to record electrophysiological signals.

Originally developed to pick up heart (electrocardiogram, ECG) and muscle (electromyogram, EMG) signals, the technique has been refined to reach the Holy Grail of bioelectronics: the brain. “Electroencephalographic (EEG) signals are the hardest to record since their amplitudes are lower and there is more background noise, so it was a real challenge for us to create flexible epidermal electronic devices that are as effective as standard electrodes,” explains Esma Ismailova, a bioelectronics researcher at Mines Saint-Étienne.

From Pontedera to Saint-Étienne

The process for printing tattoo electrodes was developed by an Italian team, led by Francisco Greco, at the Italian Institute of Technology in Pontedera. The next step, toward preclinical application, was carried out at the Saint-Étienne laboratory. Laura Ferrari, a PhD student who worked on TTEs with Francisco Greco for her thesis, chose to carry out postdoctoral research with Esma Ismailova in light of the latter’s experience in the field of wearable connected electronics. In 2015, the Mines Saint-Étienne team had developed a connected textile, derived from the technique used to print on kimonos, intended to record an electrocardiogram on a moving person with fewer artifacts than traditional ECG electrodes.

The sensors of the tattoo electrodes, like the textile electrodes, are made of conducting polymers. These organic compounds, which were the subject of the 2000 Nobel Prize in Chemistry, can act as transistors and open new possibilities in the field of surface electronics. The conductive polymer used is called PEDOT:PSS. It is formulated as an ink and deposited, using a regular inkjet printer, on paper sold commercially for temporary tattoos. The backing layer is removed at the time of application: a simple wet sponge dissolves the soluble cellulose layer, and the tattoo is transferred to the skin. The materials and techniques used in the microfabrication process make TTEs suitable for large-scale, low-cost production.

Esma Ismailova and her team worked extensively on the assembly and interconnection between the electrodes and electronic signal recording devices. An extension ending in a plastic clip was manufactured through 3D printing and integrated into the decalcomania. The clip makes it possible to attach a wire to the tattoo: “We had to solve the problem of transmitting the signal to transfer the data. Our goal is now to develop embedded electronics along with the electrodes, a microfabricated, laminated board on the patch to collect and memorize information, or transmit it through a mobile phone,” says the Saint-Étienne researcher.

Figure: (a) multi-layer structure of a TTE allowing for the transfer of the top film on which the electrode is printed; (b) exploded view of a TTE with integrated flat connection; (c) TTE transferred to the scalp in the occipital region; (d) close-up of a TTE 12 hours after application, with hair regrowth.

Electrodes that are more comfortable for patients…

The dry electrodes, composed of a polymer film one micron thick, conform perfectly to the surface of the skin thanks to their flexibility. This interface makes it possible to dispense with the gel required by traditional electrodes, which dries out after a few hours and renders them inoperative. The transfer must be done on shaved skin, but a study has shown that hair regrowth through the film does not stop the electrodes from working. They can therefore be used for two to three days, provided they do not get wet, since temporary tattoos are designed to be removed by washing with soap and water. Research is currently underway to replace the standard transfer layer with a more resistant, water-repellent material, which would extend their lifetime.

For Esma Ismailova, this technology is a huge step forward for both clinical research and patient care: “These new flexible, stretchable, very thin electrodes are ergonomic, conformable and virtually imperceptible, and are therefore much more acceptable for patients, particularly children and elderly people, for whom certain exams can be stressful.” Indeed, to perform an EEG, patients must normally wear a headset that attaches below the chin, fitted with electrodes onto which a technician applies gel.

… and more effective for doctors

Another advantage of these temporary tattoo electrodes is their compatibility with magnetoencephalography (MEG). Since they are composed entirely of organic materials and contain no metal, they do not disturb the magnetic field generated by the device and do not create artifacts, so they can be used to perform EEG coupled with MEG. These two complementary techniques for exploring neuronal activity help refine information about the starting point of epileptic seizures, the assessment of certain tumors before their removal, and neurodegenerative diseases.

The clinical assessment of TTEs in the field of neurophysiology was carried out in collaboration with Jean-Michel Badier from the Institut de Neurosciences des Systèmes at Aix-Marseille University. This study, recently published in the journal Nature, confirmed that their performance was similar to that of traditional electrodes for standard EEG, and superior for MEG, since they do not produce any shadow areas.

“We’ve done a proof of concept; now we’re trying to develop a device that can be used at home. We plan to do a study with epileptic or autistic children, for whom comfort and acceptability are very important,” explains Esma Ismailova. These tattoo electrodes – like other connected technology – will generate a great amount of data. For the researcher, “it’s essential to collaborate with researchers who can process this data using specialized algorithms. It’s a new era for smart wearables designed for personalized, preventive medicine, in particular through the early detection of abnormalities.”


Sarah Balfagon


20 terms for understanding the environmental impact of digital technology

While digital technology plays an essential role in our daily lives, it is also a big consumer of resources. To explore the compatibility between the digital and environmental transitions, Institut Mines-Télécom and Fondation Mines-Télécom are publishing their 12th annual brochure entitled Numérique : Enjeux industriels et impératifs écologiques (Digital Technology: Industrial Challenges and Environmental Imperatives). This glossary of 20 terms taken from the brochure provides an overview of some important notions for understanding the environmental impact of digital technology.


  1. CSR: Corporate Social Responsibility — A voluntary process whereby companies take social and environmental concerns into account in their business activities and relationships with partners.
  2. Data centers — Infrastructure bringing together the equipment required to operate an information system, such as equipment for data storage and processing.
  3. Eco-design — A way to design products or services by limiting their environmental impact as much as possible, and using as few non-renewable resources as possible.
  4. Eco-modulation — Principle of a financial bonus/penalty applied to companies based on their compliance with good environmental practices. Primarily used in the waste collection and management sector to reward companies that are concerned about the recyclability of their products.
  5. Energy mix — All energy sources used in a geographic area, combining renewable and non-renewable sources.
  6. Environmental responsibility — Behavior of a person, group or company that seeks to act in accordance with sustainable development principles.
  7. Green IT — IT practices that help reduce the environmental footprint of an organization’s operations.
  8. LCA: Life Cycle Assessment — Tool used to assess the overall environmental impacts of a product or service throughout its phases of existence, taking into consideration as many of the incoming and outgoing flows of resources and energy over this period as possible.
  9. Mine tailings — The part of the rock that is left over during mining operations since it does not have enough of the target material to be used by industry.
  10. Mining code — Legal code regulating the exploration and exploitation of mineral resources in France; the current version dates from 2011 and is based on the fundamental principles of the Napoleonic law of 1810.
  11. Paris Climate Agreement — International climate agreement established in 2015 following negotiations held during the Paris Climate Conference (COP21). Among other things, it sets the objective of limiting global warming to well below 2°C above preindustrial levels.
  12. PUE: Power Usage Effectiveness — Ratio of the total energy consumed by a data center to the energy consumed by its servers alone; the closer the value is to 1, the more efficient the facility.
  13. Rare earths — Group of 17 metals, many of which have unique properties that make them widely used in the digital sector.
  14. Rebound effect — Increased use following improvements in environmental performance (reduced energy consumption or use of resources).
  15. Responsible innovation — Way of thinking about innovation with the purpose of addressing environmental or social challenges, while considering the way the innovation itself is sought or created.
  16. RFID: Radio-frequency identification — Very short distance communication method based on micro-antennas in the form of tags.
  17. Salt flat — High salt desert, sometimes submerged in a thin layer of water, containing lithium, which is highly sought after for making batteries for electronic equipment.
  18. Virtualization — The act of replacing physical IT equipment with virtual resources (servers, storage, networks), usually through a service provider, in order to save on IT equipment costs.
  19. WEEE: Waste Electrical and Electronic Equipment — All waste from products operated using electrical current and therefore containing electric or electronic components.
  20. 5G Networks — 5th generation mobile networks, following 4G, will make it possible to improve mobile data speed and present new possibilities for using mobile networks in new sectors.
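To make the PUE metric in item 12 concrete, here is a minimal sketch of the calculation. The 1.6 GWh and 1.0 GWh figures are purely illustrative and not taken from the brochure.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.

    A PUE of 1.0 would mean every kilowatt-hour reaches the servers;
    real data centers sit above 1.0 because of cooling, lighting and
    power-distribution overhead.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1.6 GWh total consumption, 1.0 GWh for the servers.
print(pue(1_600_000, 1_000_000))  # 1.6
```

Both quantities must be measured over the same period (typically a year) for the ratio to be meaningful.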

Crisis management: better integration of citizens’ initiatives

Caroline Rizza, Télécom Paris – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]A[/dropcap]s part of my research into the benefits of digital technologies in crisis management and in particular the digital skills of those involved in a crisis (whether institutions or grassroots citizens), I had the opportunity to shadow the Fire and Emergency Department of the Gard (SDIS) in Nîmes, from 9 to 23 April 2020, during the COVID-19 health crisis.

This immersive investigation enabled me to fine-tune my research hypotheses on the key role of grassroots initiatives in crises, regardless of whether they emerge in the common or virtual public space.

Social media to allow immediate action by the public

So-called “civil security” crises are often characterized by their rapidity (a sudden rise to a “peak”, followed by a return to “normality”), uncertainties, tensions, victims, witnesses, etc.

The scientific literature in the field has demonstrated that grassroots initiatives appear at the same time as the crisis in order to respond to it: during an earthquake or flood, members of the public who are present on-site are often the first to help the victims, and after the crisis, local people are often the ones who organize the cleaning and rebuilding of the affected area. During the Nice terror attacks of July 2016, for example, taxi-drivers responded immediately by helping to evacuate the people present on the Promenade des Anglais. A few months earlier, during the Bataclan attacks in 2015, Parisians opened their doors to those who could not go home and used the hashtag #parisportesouvertes (parisopendoors). Genoa experienced two rapid and violent floods in 1976 and 2011; on both occasions, young people volunteered to clean up the streets and help shop owners and inhabitants in the days that followed the event.

These initiatives have become more numerous since social media entered our daily lives, which has helped them emerge and get organized online as a complement to the actions that usually arise spontaneously in the field.

My research lies within the field of “crisis informatics”. I am interested in these grassroots initiatives which emerge and are organized through social media, as well as the issues surrounding their integration into crisis management. How can we describe these initiatives? What mechanisms are they driven by? How does their creation change crisis management? Why should we integrate them into crisis response?

Social media as an infrastructure for communication and organization

Since 2018, I have been coordinating the ANR MACIV project (Citizen and volunteer management: the role of social media in crisis scenarios). We have been looking at all the aspects of social media in crisis management: the technological aspect with the tools which can automatically supply the necessary information to institutional players; the institutional aspect of the status of the information coming from social media and its use in the field; the grassroots aspect, linked to the mechanisms involved in the creation and sharing of the information on social media and the integration of grassroots initiatives into the response to the crisis.

We usually think of social media as a means of communication used by institutions (ministries, prefectures, municipalities, fire and emergency services) to communicate with citizens top-down and improve the situational analysis of the event through the information conveyed bottom-up from citizens.

The academic literature in the field of “crisis informatics” has demonstrated the changes brought about by social media, and how citizens have used them to communicate in the course of an event, provide information or organize to help.

On-line and off-line volunteers

We generally distinguish between volunteers in the field and online volunteers. As illustrated above, volunteers who are witnesses or victims of an event are often the first to intervene spontaneously, while online volunteers use social media to organize help remotely. This distinction helps us understand how social media have become a means of expressing and organizing solidarity.

It is interesting to note that certain groups of online volunteers are connected through agreements with public institutions and their actions are coordinated during an event. In France, VISOV (international volunteers for virtual operation support) is the French version of the European VOST (Virtual Operations Support Team); but we can also mention other groups such as the WAZE community.

Inform and organize

There is therefore an informational dimension and an organizational dimension to the contribution of social media to crisis management.

Informational in that the content that is published constitutes a source of relevant information to assess what is happening on site: for example, fire officers can use photos and videos posted on social media during a fire outbreak to adjust the resources they need to deploy.

And organizational in that the aim is to work together to respond to the crisis.

For example, creating a Wikipedia page about an ongoing event (and clearing up uncertainties), communicating pending an institutional response (Hurricane Irma, Cuba, in 2017), helping to evacuate a place (Gard, July 2019), taking in victims (Paris, 2015; Var, November 2019), or helping to rebuild or to clean a city (Genoa, November 2011).


Screenshot of the Facebook page of VISOV to inform citizens of available accommodation following the evacuation of certain areas in the Var in December 2019. VISOV Facebook page

An increased level of organization

During my immersion within the SDIS of the Gard as part of the management of the COVID-19 crisis, I had the chance to discuss and observe the way in which social media were used to communicate with the public (reminding them of preventative measures and giving them daily updates from regional health agencies), as well as to integrate some grassroots initiatives.

Although the crisis was a health crisis, it was also one of logistics. Many citizens (individuals, businesses, associations, etc.) organized to support the institutions: sewing masks or making them with 3D printers, turning soap production into hand sanitizer production, proposing to translate information on preventative measures into different languages and sharing it to reach as many citizens as possible; these were all initiatives which I came across and which helped institutions organize during the peak of the crisis.


Example of protective visors made using 3D printers for the SDIS 30.


The “tunnel effect”

However, the institutional actors I met and interviewed within the framework of the two studies mentioned above (SDIS, Prefecture, Defense and Security Zone, DGSCGC) all highlighted the difficulty of taking account of information shared on social media – and grassroots initiatives – during crises.

The large number of calls about the same event, the excess of information to be processed and the gravity of the situation all force responders to focus on the essentials. This “tunnel effect” was identified by these institutions as one of the main reasons why it is difficult to integrate these tools into their work and these initiatives into their emergency response.

The information and citizen initiatives which circulate on social media simultaneously to the event may therefore help the process of crisis management and response, but paradoxically, they can also make it more difficult.

Then there is also the spread of rumors and fake news through social media, especially when there are gaps or contradictions in the information linked to an event (see, for example, the Wikipedia page on the COVID-19 crisis).

How and why should we encourage this integration?

Citizen initiatives have impacted institutions horizontally in their professional practices.

My observation of the management of the crisis within the SDIS 30 enabled me to go one step further and put forward the hypothesis that another dimension is slowing down the integration of these initiatives, whether they emerge in the common or virtual public space: integrating them implies placing the public on the same level as the institution. In other words, these initiatives do not just have a horizontal “impact” on professional practices and their rules (doctrines); their integration requires that citizens be recognized as participants in the management of and response to the crisis.

There is still a prevailing idea that the public needs to be protected, but the current crisis shows that the public also want to play an active role in protecting themselves and others.

The main question that then arises is that of the necessary conditions for this recognition of citizens as participants in the management and response to the crisis.

Relying on proximity

It is interesting to note that, at a very local level, the integration of the public has not raised problems; on the contrary, it is a good opportunity to diversify initiatives and to recognize each of the participants within the region.

However, at a higher level in the operational chain of command, this recognition poses more problems because of the commitment and responsibility it implies for institutions.

My second hypothesis is therefore as follows: close relations between stakeholders within the same territorial fabric allow better familiarity with grassroots players, thereby fostering mutual trust. This trust seems to me to be the key to success, and it explains the successful integration of grassroots initiatives in a crisis, as illustrated by VISOV or the VOSTs.

[divider style=”dotted” top=”20″ bottom=”20″]

The original version of this article (in French) was published on The Conversation.
By Caroline Rizza, researcher in information sciences at Télécom Paris.


Datafarm: low-carbon energy for data centers

The start-up Datafarm proposes an energy solution for low-carbon digital technology. Within a circular economy system, it powers data centers with energy produced through methanization, by installing them directly on cattle farms.


When you hear about clean energy, cow dung probably isn’t the first thing that comes to mind. But think again! The start-up Datafarm, incubated at IMT Starter, has placed its bets on setting up facilities on farms to power its data centers through methanization. This process generates energy by breaking down animal or plant biomass with microorganisms under controlled conditions. Its main advantages are that it makes it possible to recover waste and to lower greenhouse gas emissions by offering an alternative to fossil fuels. The result is green energy in the form of biogas.

Waste as a source of energy

Datafarm’s IT infrastructures are installed on cattle farms that carry out methanization. About a hundred cows can fuel a 500 kW biogas plant, the equivalent of 30 tons of waste per day (cow dung, waste from milk, plants, etc.). This technique generates a gas, methane, of which 40% is converted into electricity by turbines and 60% into heat. Going beyond the state of the art, Datafarm has developed a process to convert the energy produced through methanization… into cold! This helps respond to the problem of cooling data centers. “Our system allows us to reduce the proportion of electricity needed to cool infrastructures to 8%, whereas 20 to 50% is usually required,” explains Stéphane Petibon, the founder of the start-up.
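The energy split described above can be sketched as back-of-the-envelope arithmetic. The 500 kW plant size, the 40/60 electricity/heat split and the 8% versus 20 to 50% cooling overheads come from the article; treating these percentages as exact is an assumption made purely for illustration.

```python
# Figures from the article, treated as exact for this sketch:
# a ~500 kW biogas plant, biogas converted ~40% to electricity / ~60% to heat,
# with 8% of the electricity spent on cooling (vs. 20-50% in a typical data center).
plant_power_kw = 500
electricity_share = 0.40
heat_share = 0.60
cooling_overhead_datafarm = 0.08
cooling_overhead_typical_low = 0.20

electric_kw = plant_power_kw * electricity_share   # about 200 kW of electricity
heat_kw = plant_power_kw * heat_share              # about 300 kW of heat

# Electricity left for the IT load after cooling, Datafarm vs. a typical site:
it_kw_datafarm = electric_kw * (1 - cooling_overhead_datafarm)     # about 184 kW
it_kw_typical = electric_kw * (1 - cooling_overhead_typical_low)   # about 160 kW

print(electric_kw, heat_kw, it_kw_datafarm, it_kw_typical)
```

Even at the low end of the typical cooling range, the 8% overhead leaves noticeably more of the plant’s electricity available for the servers themselves.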

The heat output produced by the data centers is then recovered in an on-site heating system. This allows farmers to dry hay to feed their livestock or produce cheese. Lastly, farms no longer need fertilizer from outside sources since the residue from the methanization process can be used to fertilize the fields. Datafarm therefore operates within a circular economy and self-sufficient energy system for the farm and the data center.

A service to help companies reduce carbon emissions

A mid-sized biogas plant (500 kW) fueling the start-up’s data centers reduces CO2 emissions by 12,000 tons a year – the equivalent of the annual emissions of 1,000 French people. “Our goal is to offer a service for low-carbon, or even negative-carbon, data centers and therefore to offset the greenhouse gas emissions of the customers who host their data with us,” says Stéphane Petibon.
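As a quick sanity check on the equivalence quoted above, dividing the 12,000 tons of avoided CO2 by 1,000 people implies an annual footprint of about 12 tons per person; the calculation below is purely illustrative.

```python
# Implied per-capita footprint behind the article's equivalence:
# 12,000 tCO2 avoided per year ~ annual emissions of 1,000 French people.
annual_offset_tco2 = 12_000
people_equivalent = 1_000

per_capita_tco2 = annual_offset_tco2 / people_equivalent
print(per_capita_tco2)  # 12.0 tons of CO2 per person per year
```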

Every four years, companies with over 500 employees (approximately 2,400 in France) are required to publish their carbon footprint, which is used to assess their CO2 emissions as part of the national environmental strategy to reduce the impact of companies. The question, therefore, is no longer whether they need to reduce their carbon footprint, but how to do so. As such, the start-up provides an ecological and environmental argument for companies that need to decarbonize their operations. “Our solution makes it possible to reduce carbon dioxide emissions by 20 to 30% through an IT service for which companies’ needs grow every year,” says Stéphane Petibon.

The services offered by Datafarm range from data storage to processing. To respond to the demand of most of its customers for server colocation, the start-up has designed its infrastructures as ready-to-use modules inserted into containers hosted on farms. This agile approach allows it to build infrastructures according to customers’ needs before installation. The data is backed up at another center powered by green energy near Amsterdam (Netherlands).

Innovations on the horizon

The two main selection criteria for farms are the power of their methanization plant and their proximity to a fiber network. “The French regions have already installed fiber networks in a significant portion of their territories, but these networks have been neglected and are inoperative. To activate them, we’re working with the telecom operators that cover France,” explains Stéphane Petibon. The first two infrastructures, in Arzal in Brittany and Saint-Omer in the Nord department, meet all the criteria and will be put into service in September and December 2020 respectively. The start-up plans to host up to 80 customers per infrastructure and aims to have seven infrastructures installed throughout France by the end of 2021.

To achieve this goal, the start-up is conducting research and development on network redundancy issues to ensure service continuity in the event of a failure. It is also working on developing an energy storage technique that is more environmentally friendly than the batteries used by data centers. The methanization reaction can also generate hydrogen, which the start-up plans to store for use as a backup power supply for its infrastructures. In addition to these small units, Datafarm is working with a cooperative of five farmers to design an infrastructure with a much larger hosting and surface capacity than its current products.

Anaïs Culot

[box type=”info” align=”” class=”” width=””]This article was published as part of Fondation Mines-Télécom’s 2020 brochure series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a brochure, conference-debates, and events to promote science in conjunction with IMT, this series explores the uncertainties and challenges of the digital and environmental transitions.[/box]