Airstream Alvie Cobbaï

Airstream, Alvie and Cobbaï supported by the IMT Digital honor loan scheme

The members of the IMT Digital Fund – IGEU, IMT and Fondation Mines-Télécom – held a meeting on 17 November. During the meeting, three start-ups from the Télécom Paris incubator were selected to receive support in the form of seven honor loans totaling €120,000.

 

[one_half][box type=”shadow” align=”” class=”” width=””]

Airstream is a new-generation project management platform that allows companies to better coordinate work packages and business teams during complex projects and programs. The start-up will receive a €40,000 honor loan. Find out more

[/box][/one_half][one_half_last][box type=”shadow” align=”” class=”” width=””]

Alvie offers HYGO, a solution that turns any sprayer into a smart sprayer, helping farmers optimize the quantity of phytosanitary products used and increase the efficiency of bio-control in organic farming. Alvie will receive three honor loans for a total sum of €40,000. Find out more

[/box][/one_half_last]

[box type=”shadow” align=”” class=”” width=””]

Cobbaï offers a SaaS solution that enables industrial players to automate the analysis of their corporate textual data and boost their quality, maintenance and after-sales performance. The start-up will receive three honor loans for a total sum of €40,000. Find out more

[/box]

Arsenic

Arsenic contamination of water: Detection and treatment challenges

Arsenic contamination of water, whether surface water or groundwater, affects many parts of France. Such contamination may have anthropogenic causes, linked to mining operations for example, or natural causes linked to geological formations, as is the case in many countries such as India, Pakistan and Chile. In partnership with the Nuclear Materials Authority in Egypt and Guangxi University in China, a team of researchers from IMT Mines Alès has developed processes for treating contaminated water and detecting arsenic.

Arsenic may be naturally present in water used for irrigation and animal rearing. In the majority of cases, these concentrations are low enough – less than 0.01 milligrams per liter – that they do not pose a risk to humans. However, overexposure can have dramatic effects on fauna, flora and human health, causing cancer and dermatological conditions for example.

Emblematic cases of anthropogenic contamination have been reported in relation to mining activities. In 2019, near the gold mine in Salsigne, in the south of France, children were overexposed to arsenic, probably spread through dust and the dissolution of arsenic from mine tailings. But natural phenomena, such as the infiltration of rainwater, the erosion of geological formations and soil leaching, may also be the cause of contamination in many countries like India, China and Chile.

“Unfortunately, the countries faced with this environmental and health challenge are often poor countries that don’t have access to the most advanced techniques for decontaminating water,” says Eric Guibal, a researcher at IMT Mines Alès. “In these countries, communities often rely on low-tech processes to reduce the toxicity of catchment water. Trade-offs are made between effectiveness and production and operating costs, for example through pumping and filtration using iron oxides,” he explains. Along with a team of colleagues and in partnership with the Nuclear Materials Authority in Egypt and Guangxi University in China, he has contributed to the development of a number of innovative materials for the detection and treatment of arsenic in contaminated water.

Materials that “like” arsenic

“What we’re proposing is not a technique that can replace those we already have to capture arsenic, but a complementary technique in order to improve decontamination,” adds Eric Guibal. These research teams have developed a range of adsorbents, materials that can bind ions or molecules, in order to capture arsenic. Based on biopolymers (extracts of seaweed and crustacean shells), they provide an alternative to adsorbents produced from petroleum resources. “Replacing petroleum-sourced materials with renewable resources is, in itself, an important challenge for the future,” he says.
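The performance of such adsorbents is usually characterized by an equilibrium isotherm. As a purely illustrative reference (a textbook model, not necessarily the one used by these teams), the Langmuir isotherm links the amount of arsenic captured per gram of material, q_e, to the residual concentration left in the water, C_e:

\[ q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e} \]

Here q_max is the maximum capacity of the adsorbent and K_L its affinity constant; a material that “likes” arsenic has a high K_L, meaning it keeps capturing the pollutant even at the very low concentrations found in catchment water.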

Read more on I’MTech: When plants help us fight pollution

“Environmentally-friendly management has brought us to the limits of these materials; to avoid depleting the biotope, we limit the scope of our processes to applications such as polishing treatment and ‘niche’ uses,” he explains. Combining these processes with more conventional techniques (precipitation, filtration, etc.) makes it possible to significantly reduce the toxicity of effluents and their discharge and improve the quality of catchment water. The targeted field of application is therefore local, limited to areas where water quality is critical.

Various materials have been developed, for example combining a biopolymer (chitosan, a crustacean shell extract) with metal ions (such as molybdate) that have a particular affinity for arsenic ions. When they come together, the different ions form a complex which, once immobilized, makes it possible to recover arsenic in solution. Furthermore, the synthesis of nanocomposites in the form of hollow spheres combining molybdate with other compounds (silicate and cellulose acetate) produces nano-objects that can be used for the detection, analysis and recovery of arsenic in solutions with low levels of contamination.

Example of microporous adsorbents made from seaweed (alginate) for arsenic capture.

Another, more recently developed material is based on the functionalization of a composite, a technique used to give a material certain specific properties. By combining seaweed with a synthetic polymer, this adsorbent material makes it possible to extract arsenic in order to decontaminate effluent or catchment water. The challenge is to release the arsenic once the material has been saturated, in order to concentrate it and recycle the adsorbent for new treatment cycles.

From the lab to the world

Another advantage of these technologies is that they rely on a biological raw material, providing an alternative to the more polluting processes currently in use. But the innovation is struggling to gain acceptance outside the laboratory, in particular because manufacturers may be reluctant to change their processes. “It’s a process that companies don’t know about and there’s a fear that there won’t be the same reproducibility,” he adds. The variability of the resource can deter manufacturers concerned with ensuring consistent production and reproducible properties.

But beyond that, there is also a certain competitiveness in terms of production lines. Manufacturers already have a process that works and for which there is a market, and may not necessarily feel a need to look for alternatives to petroleum-based processes. “And yet, they must be proactive and anticipate the future of petroleum-based resources,” says Eric Guibal. “There are also profitability issues involved, and if the environmental cost of the processes were taken into account, this innovation might be more appealing,” he concludes.

Tiphaine Claveau for I’MTech

Flood, soil erosion

Agricultural sediments transported by rivers

The QuASPEr project studied the Canche river basin in northern France to better understand the phenomenon of soil erosion and its consequences. The aim is to use this knowledge to develop effective land management methods for municipalities without negatively affecting farmers’ work.

 

Heavy rain, dry land, sloping ground that has been tilled… and the soil erodes. This phenomenon of abrasion, particularly of agricultural soils, leads to a loss of fertile soil but also lowers water quality in rivers. For local stakeholders, this can be seen in the form of mud flows, silted waterways and damage to infrastructure. Northern France, which is particularly affected by this phenomenon, was the subject of the QuASPEr project (French acronym for Quantification, Analysis and Monitoring of Erosive Processes), in partnership with the joint association Symcea, the Artois-Picardie Water Agency and two IMT schools. Claire Alary and Christine Franke, researchers at IMT Lille Douai and Mines ParisTech respectively, have studied the Canche river watershed to better understand this erosion and develop effective soil retention strategies. The team also includes Edouard Patault, a PhD student who studied the watershed for his thesis research.

“The aim of this project is to characterize, model and predict this phenomenon of erosion,” says Christine Franke. “And due to climate change, these phenomena will evolve – and not necessarily for the better. It’s important to have a clear understanding of these mechanisms to propose management plans,” adds Claire Alary. The two researchers have been working on the topic for several years in an effort to gain a better understanding of the highest-risk areas in the region.

Soil erosion

To study the phenomenon of erosion, the first requirement is a good understanding of the area. “It’s not necessarily heavy rain that causes this erosion,” says Christine Franke. A number of parameters must be taken into account, and the phenomenon is highly variable. The first rains of the season can often trigger erosion since the soil is dry and erodes more easily. The duration and intensity of the rain are significant factors, but the type of soil is also very important: soil composition, vegetation cover, the slope of the land, etc.

This soil erosion is a recurring problem and it is difficult to identify its causes. A watershed like that of the Canche river is divided into a number of small basins, and the researchers’ goal is to determine precisely which areas the eroded particles come from. To do so, it is critical to have a precise understanding of the system at each instant. Installing a monitoring station allows for such an understanding, but only very locally, and stations are too expensive to be placed throughout the watershed. They must therefore be combined with other techniques to get a clear view of the system and its variability over time. The researchers approached this through the magnetic fingerprint of the sediments, a technique they adapted to this erosion phenomenon.

“We installed a trap in the river to collect the sediments suspended in the water and then sent them to the laboratory to be studied,” explains Christine Franke. The method is relatively easy to implement and accessible for municipalities. Moreover, it is non-destructive, meaning that researchers can carry out several analyses on a single sample. In practice, what they study is the mineralogy of iron in the samples. “The iron particles present in agricultural soils are not the same as those found naturally in the river,” she adds.

They have a distinctive signature that allows researchers to differentiate between particles from rivers and from fields. “The eroded particles from the field will keep this signature for a fairly long time once they’re in the river,” explains the researcher. This erosion phenomenon is also characterized by a suite of geochemical elements for each material arriving in the river. “We know the chemical characteristics of the sources of the material, so we’re able to trace this signal back to determine the contributions of the sources from which the sediments originate,” says Claire Alary.
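The approach described in this quote amounts to what geochemists call source apportionment: the sediment sample is modeled as a linear mixture of known source signatures, and the mixing proportions are solved for. A minimal sketch, using made-up tracer values rather than QuASPEr data, can be written with non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical tracer signatures (rows: tracers, columns: sources).
# These numbers are illustrative, not measurements from the project.
sources = np.array([
    [12.0, 3.0],   # tracer 1: agricultural soil vs. river bank
    [0.8,  2.5],   # tracer 2
    [5.0,  9.0],   # tracer 3
])

# Tracer signature measured in a suspended-sediment sample.
sample = np.array([7.5, 1.6, 6.9])

# Non-negative least squares: find source weights >= 0 whose
# mixture best reproduces the sample's signature.
weights, _residual = nnls(sources, sample)
proportions = weights / weights.sum()

print(f"Field contribution: {proportions[0]:.0%}")
print(f"River-bank contribution: {proportions[1]:.0%}")
```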

Photograph of a gully in the Canche river watershed.

 

Land management

This project seeks to better understand how the system functions overall in order to identify the most problematic areas and propose adapted solutions. Certain features have been installed to limit erosion, such as hedges and fascines. Fascines are bundles of branches arranged in a line to retain soil and combat erosion. While these measures are effective, they are not enough to prevent damage. By gaining a better understanding of erosion and of the watershed, complementary retention methods could be found to enhance the effectiveness of current methods.

Work on this topic continues today with the launch of the GeSS (Managing Sediments at the Source) project run by the Ecosed Digital 4.0 chair, led by Nor-Edine Abriak, a researcher at IMT Lille Douai, along with Fondation Mines-Télécom. The challenge is to tackle this phenomenon of erosion at the source, and in particular to work on reducing the transfer of sediments for better management of this phenomenon in the various regions.

 

Tiphaine Claveau

Europe's Green Deal

Digital technology, the gap in Europe’s Green Deal

Fabrice Flipo, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]D[/dropcap]espite the Paris Agreement, greenhouse gas emissions are currently at their highest. Further action must be taken in order to stay under the 1.5°C threshold of global warming. But with the recent European Green Deal, which aims to reach carbon neutrality within 30 years, Europe now seems to be taking on its responsibilities and setting itself ambitious goals to tackle contemporary and future environmental challenges.

The aim is to become “a fair and prosperous society, with a modern, resource-efficient and competitive economy”. This should make the European Union a global leader in the field of the “green economy”, with citizens placed at the heart of “sustainable and inclusive growth”.

The deal’s promise

How can such a feat be achieved?

The Green Deal is set within a long-term political framework for energy efficiency, waste, eco-design, the circular economy, public procurement and consumer education. Thanks to these objectives, the EU aims to reach the long-awaited decoupling:

“A direct consequence of the regulations put in place between 1990 and 2016 is that energy consumption has decreased by almost 2% and greenhouse gas emissions by 22%, while GDP has increased by 54% […]. The percentage of renewable energy has gone from representing 9% of total energy consumption in 2005 to 17% today.”

With the Green Deal, the aim is to continue this effort via ever-increasing renewable energies, energy efficiency and green products. The textile, building and electronics sectors are now at the center of attention as part of a circular economy framework, with a strong focus on repair and reuse, driven by incentives for businesses and consumers.

Within this framework, energy efficiency measures should reduce our energy consumption by half, with the focus on energy labels and the savings they have made possible.

According to the Green Deal, the increased use of renewable energy sources should enable us to bring the share of fossil fuels down to just 20%. The use of electricity will be encouraged as an energy carrier, and 80% of it should be renewable by 2050. Energy consumption should be cut by 28% from its current levels. Hydrogen, carbon storage and varied processes for the chemical conversion of electricity into combustible materials will be used additionally, enabling an increase in the capacity and flexibility of storage.

In this new order, a number of roles have been identified: on one side, the producers of clean products, and on the other, the citizens who will buy them. In addition to this mobilization of producers and of consumers, national budgets, European funding and “green” (private) finance will commit to the cause; the framework of this commitment is expected to be put in place by the end of 2020.

Efficiency, renewable energy, a sharp decrease in energy consumption, promises of new jobs: if we remember that back in the 1970s, EDF was simply planning on building 200 nuclear power plants by the year 2000 – following a mindset which equated consumption with progress – everything now suggests that supporters of the Negawatt scenario (NGOs, ecologists, networks of committed local authorities, businesses and workers) have won a battle which is historic, cultural (in terms of values and awareness of what is at stake) and political (backed by official texts).

The trajectory of GHG in a 1.5°C global warming scenario.

 

According to the deal, savings made on fossil fuels could reach between €150 billion and €200 billion per year, to which would be added avoided health costs amounting to €200 billion a year, and the prospect of exporting “green” products. Finally, millions of jobs may be created, with retraining mechanisms for the sectors that are most impacted, and support for low-income households.

Putting the deal to the test

A final victory? On paper, everything points that way.

However, it is not as simple as it seems, and the EU itself recognizes that improvements in energy efficiency and the decrease in greenhouse gas emissions are currently stalling.

This is due to the following factors, in order of importance: economic growth; the decline in energy efficiency savings, especially in the airline industry; the sharp increase in the number of SUVs; and finally, the upward adjustment of real vehicle emissions (+30%) following the “dieselgate” scandal.

More seriously, the EU’s net emissions, which include those generated by imports and exports, have risen by 8% during the 1990-2010 period.

Efficiency therefore has its limits and savings are more likely to be made at the start than at the end.

The digital technology challenge

According to the Green Deal, “digital technologies are a critical enabler for attaining the sustainability goals of the Green Deal in many different sectors”: 5G, CCTV, the Internet of Things, cloud computing and AI. We have our doubts, however, as to whether that is true.

Several studies, including those by the Shift Project, show that emissions from the digital sector doubled between 2010 and 2020. They are now higher than those produced by the much-criticized civil aviation sector. And the digital applications put forward by the European Green Deal are among the most energy-consuming, according to several usage scenarios.

Can the increase in usage be offset by energy efficiency? The sector has seen tremendous progress, on a scale not seen in any other field. The first computer, the ENIAC, weighed 30 tons, consumed 150,000 watts and could not perform more than 5,000 operations per second. A modern PC consumes 200 to 300 W for the same computing power as a supercomputer of the early 2000s, which consumed 1.5 MW! Progress knows no bounds…
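A rough operations-per-joule comparison makes the scale concrete (taking a teraflop-class modern machine, in line with the early-2000s supercomputer mentioned above):

\[ \text{ENIAC: } \frac{5\times10^{3}\ \text{op/s}}{1.5\times10^{5}\ \text{W}} \approx 0.03\ \text{op/J} \qquad \text{modern PC: } \frac{10^{12}\ \text{op/s}}{250\ \text{W}} = 4\times10^{9}\ \text{op/J} \]

That is roughly eleven orders of magnitude of efficiency gained in seventy years.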

However, the absolute limit (the “Landauer limit”) was identified in 1961 and confirmed experimentally in 2012. According to the semiconductor industry itself, this limit is fast approaching on the timescale of the Green Deal, at a time when traffic and computing power are increasing exponentially. Is it therefore reasonable to keep becoming more dependent on digital technologies, in the hope that efficiency curves might reveal energy consumption “laws”?
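For reference, the Landauer limit gives the minimum energy required to erase one bit of information at temperature T. At room temperature:

\[ E_{\min} = k_B T \ln 2 \approx 1.38\times10^{-23}\ \text{J/K} \times 300\ \text{K} \times 0.693 \approx 2.9\times10^{-21}\ \text{J} \]

Current transistors still dissipate several orders of magnitude more than this per operation, but the gap narrows with each generation, which is why the industry sees the wall approaching.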

Especially when we consider that the gains obtained in terms of energy efficiency have little to do with any shift towards more ecology-oriented lifestyles: the motivations have been cost, heat removal and the need to make sure our digital devices could be mobile so as to keep our attention at all times.

These limitations on efficiency explain the growing interest in more sparing use of digital technologies. The Conseil National du Numérique presented its roadmap on the subject shortly after Germany did. However, the Green Deal stubbornly follows the same path: one which consists in relying on an imaginary digital sector that has little in common with the realities of the sector.

Digital technologies, facilitating growth

Drawing on a recent article, the Shift Project sends a warning: “Up until now, rebound effects have turned out to exceed the gains brought by technological innovation.” This conclusion has recently been confirmed once more.

For example, the environmental benefits of remote working have in fact been much smaller than intuitively expected, especially when not combined with other changes in the social ecosystem. Another example: in its 2019 “current” scenario, the OECD predicted a threefold increase in passenger transport between 2015 and 2050, facilitated (not impeded) by autonomous vehicles.

Digital technologies are first and foremost a growth factor, as Pascal Lamy, then head of the WTO, observed when he stated that globalization is based on two innovations: the Internet and the shipping container. An increase in digital technologies will lead to more emissions. And if that turns out not to be the case, it will be because of a change in how we approach ecology, including digital technologies.

We are justified in asking the question of what it is the Green Deal is really trying to protect: the climate or the digital markets for big corporations?

[divider style=”dotted” top=”20″ bottom=”20″]

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

This article is republished from The Conversation under the Creative Commons license. Read the original article (in French) here.

IA TV

The automatic semantics of images

Recognizing faces, objects, patterns, music, architecture, or even camera movements: thanks to progress in artificial intelligence, every shot or sequence in a video can now be characterized. In the IA TV joint laboratory created last October by France Télévisions and Télécom SudParis, researchers are currently developing an algorithm capable of analyzing the range of fiction programs offered by the national broadcaster.

 

As the number of online video-on-demand platforms has increased, recommendation algorithms have been developed to go with them, and are now capable of identifying (amongst other things) viewers’ preferences in terms of genre, actors or themes, boosting the chances of picking the right program. Artificial intelligence now goes one step further by identifying the plot’s location, the type of shots and actions, or the sequence of scenes.

The teams of France Télévisions and Télécom SudParis have been working towards this goal since October 2019, when the IA TV joint laboratory was created. Their work focuses on automating the analysis of the video contents of fiction programs. “Today, our recommendation settings are very basic. If a viewer liked a type of content, program, film or documentary, we do not know much about the reasons why they liked it, nor about the characteristics of the actual content. There are so many different dimensions which might have appealed to them – the period, cast or plot,” points out Matthieu Parmentier, Head of the Data & AI Department at France Télévisions.

AI applied to fiction contents

The aim of the partnership is to explore these dimensions. Using deep learning, a neural network technique, researchers are applying algorithms to a massive quantity of videos. The successive layers of neurons extract and analyze increasingly complex features of visual scenes: the first layer works on the image’s raw pixels, while the last attaches labels to them.
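As an illustration of this technique (a generic sketch, not the laboratory's actual model), a pretrained convolutional network can take a video frame from raw pixels to ranked labels in a few lines of Python; the file name frame.jpg is hypothetical:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained CNN: its stacked layers go from pixels to labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for the input frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("frame.jpg")          # one frame from a video sequence
batch = preprocess(frame).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# Indices and probabilities of the three most likely categories.
top = torch.topk(probs, k=3)
print(top.indices[0].tolist(), top.values[0].tolist())
```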

“Thanks to this technology, we are now able to sort contents into categories, which means that we can classify each sequence, each scene, in order to identify, for example, whether it was shot outside or inside, recognize the characters and actors involved, identify objects or locations of interest and the relationships between them, or even extract emotional or aesthetic features. Our goal is to make the machine capable of progressing automatically towards interpreting scenes in a way that is semantically close to that of humans,” says Titus Zaharia, a researcher at Télécom SudParis and specialist in AI applied to multimedia content.

Researchers have already obtained convincing results. Is this scene set in a car? In a park? Inside a bus? The tool can suggest the most relevant categories by order of probability. The algorithm can also determine the types of shots in the sequences analyzed: wide, general or close-up shots. “This did not exist until now on the market,” says Matthieu Parmentier enthusiastically. “And as well as detecting changes from one scene to another, the algorithm can also identify changes of shot within the same scene.”

According to France Télévisions, there are many possible applications. Firstly, the automatic extraction of key frames, meaning the most representative image to illustrate the content of a fiction program, for each sequence and according to aesthetic criteria. Then there is the identification of the “ideal” moments in a program to insert ad breaks. “Currently, we are working on fixed shots, but one of our next aims is to be able to characterize moving shots such as zooms, tracking or panoramic shots. This could be very interesting for us, as it could help to edit or reuse content,” adds Matthieu Parmentier.
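A toy version of the shot-change detection described here can be built by comparing colour histograms of consecutive frames: when similarity drops sharply, a cut is likely. This is only a sketch of the general idea (production systems are far more robust), and the file name is hypothetical:

```python
import cv2

# Flag a possible cut when consecutive frames' colour histograms diverge.
cap = cv2.VideoCapture("fiction_episode.mp4")
prev_hist = None
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < 0.6:  # arbitrary threshold
            print(f"Possible shot change at frame {frame_idx}")
    prev_hist = hist
    frame_idx += 1

cap.release()
```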

Multimodal AI solutions

In order to adapt to the new digital habits of viewers, the teams of France Télévisions and Télécom SudParis have been working together for over five years. They have contributed to the creation of artificial intelligence solutions and tools applied to digital images, but also to other forms of content, such as text and sound. In 2014, the two entities launched a collaborative project, Média4Dplayer, a prototype of a media player designed for all four types of screens (TV, PC, tablet and smartphone). It was designed to be accessible to all, and especially to elderly people and people with disabilities. A few months later, they were looking into the automatic generation of subtitles. There are several advantages to this: equal access to content and the possibility to view a video without sound.

“In the case of television news, for example, subtitles are typed live by professionals, but as we have all seen, this can sometimes lead to errors or to delays between what is heard and what appears on screen,” explains Titus Zaharia. The solution developed by the two teams allows automatic synchronization for the Replay content offered by France TV. The teams were able to file a joint patent after two and a half years of development.

“In time, we are hoping to be able to offer perfectly synchronized subtitles just a few seconds after the broadcast of any type of live television program,” continues Matthieu Parmentier.

France Télévisions still has issues to be addressed by scientific research, especially in artificial intelligence. “What we are interested in is developing tools which can be used and put on the market rapidly, but also tools that will be sufficiently general in their methodology to find other fields of application in the future,” concludes Titus Zaharia.

 

 

planetary boundaries, climate emergency, planet's limits

Covid-19 Epidemic: an early warning signal that we’ve reached the planet’s limits?

Natacha Gondran, Mines Saint-Étienne – Institut Mines-Télécom and Aurélien Boutaud, Mines Saint-Étienne – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

This article was published for the Fête de la Science (Science Festival, held from 2 to 12 October 2020 in mainland France and from 6 to 16 November in Corsica, overseas departments and internationally), in which The Conversation France is a partner. The theme for this year’s festival is “Planète Nature”. Read about all the events in your region at Fetedelascience.fr.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]W[/dropcap]hen an athlete gets too close to the limits of his body, the body often reacts with an injury that forces him to rest. What athlete who has pushed himself past his limits has not been reined in by a strain, tendinitis, broken bone or other pain that has forced him to take it easy?

In ecology, there is also evidence that ecosystems send signals when they are reaching such high levels of deterioration that they cannot perform the regulatory functions that allow them to maintain their equilibrium. These are called early warning signals.

Several authors have made the connection between the Covid-19 epidemic and the decline of biodiversity, urging us to see this epidemic as an early warning signal. Evidence of a link between the current emerging zoonoses and the decline of biodiversity has existed for a number of years and that of a link between infectious diseases and climate change is emerging.

These early warning signals serve as a reminder that the planet’s capacity to absorb the pollution and deterioration to which it is subjected by humanity is not unlimited. And, as is the case for an athlete, there are dangers in getting too close to these limits.

Planetary boundaries that must not be transgressed

For over ten years, scientists from a wide range of disciplines and institutions have been working together to define a global framework for a Safe Operating Space (SOS), characterized by physical limits that humanity must respect, at the risk of seeing conditions for life on Earth become much less hospitable to human life. This framework has since been added to and updated through several publications.

These authors highlight the holistic dimension of the “Earth system”. For instance, the alteration of land use and water cycles makes systems more sensitive to climate change. Changes in the three major global regulating systems have been well documented: ozone layer degradation, climate change and ocean acidification.

Other cycles, which are slower and less visible, regulate the production of biomass and biodiversity, thereby contributing to the resilience of ecological systems: the biogeochemical cycles of nitrogen and phosphorus, the freshwater cycle, land use changes and the genetic and functional integrity of the biosphere. Lastly, two phenomena present boundaries that have not yet been quantified by the scientific community: air pollution from aerosols and the introduction of novel entities (chemical or biological, for example).

These biophysical sub-systems react in a nonlinear, sometimes abrupt way, and are particularly sensitive when certain thresholds are approached. The consequences of crossing these thresholds may be irreversible and, in certain cases, could lead to huge environmental changes.

Several planetary boundaries have already been transgressed, others are on the brink

According to Steffen et al. (2015), planetary boundaries have already been overstepped in the areas of climate change, biodiversity loss, the biogeochemical cycles of nitrogen and phosphorus, and land use changes. And we are getting dangerously close to the boundary for ocean acidification. As for the freshwater cycle, although W. Steffen et al. consider that the boundary has not yet been transgressed on the global level, the French Ministry for the Ecological and Inclusive Transition has reported that the threshold has already been crossed in France.

These transgressions cannot continue indefinitely without threatening the equilibrium of the Earth system – especially since these processes are closely interconnected. For example, overstepping the boundaries of ocean acidification as well as those of the nitrogen and phosphorus cycles will ultimately limit the oceans’ ability to absorb atmospheric carbon dioxide. Likewise, the loss of natural land cover and deforestation reduce forests’ ability to sequester carbon and thereby limit climate change. But they also reduce local systems’ resilience to global changes.

Representation of the nine planetary boundaries (Steffen et al., 2015):

Steffen, W. et al. “A safe operating space for humanity”. Nature 461, pp. 472–475

 

Taking quick action to avoid the risk of drastic changes to biophysical conditions

The biological resources we depend on are undergoing rapid and unpredictable transformations within just a few human generations. These transformations may lead to the collapse of ecosystems, food shortages and health crises that could be much worse than the one we are currently facing. The main factors underlying these planetary impacts have been clearly identified: the increase in resource consumption, the transformation and fragmentation of natural habitats, and energy consumption.

It has also been widely established that the richest countries are primarily responsible for the ecological pressures that have led us to reach the planetary boundaries, while the poorer countries of the Global South are primarily victims of the consequences of these degradations.

Considering the epidemic we are currently experiencing as an early warning signal should prompt us to take quick action to avoid transgressing planetary boundaries. The crisis we are facing has shown that strong policy decisions can be made in order to respect a limit – for example, the number of beds available to treat the sick. Will we be able to do as much when it comes to planetary boundaries?

The 150 citizens of the Citizens’ Convention for Climate have proposed that we “change our law so that the judicial system can take account of planetary boundaries. […] The definition of planetary boundaries can be used to establish a framework for quantifying the climate impact of human activities.” This is an ambitious goal, and it is more necessary than ever.

[divider style=”dotted” top=”20″ bottom=”20″]

Aurélien Boutaud and Natacha Gondran are the authors of Les limites planétaires (Planetary Boundaries) published in May of 2020 by La Découverte.

Natacha Gondran is a research professor in environmental assessment at Mines Saint-Étienne – Institut Mines-Télécom, and Aurélien Boutaud holds a PhD in environmental science and engineering from Mines Saint-Étienne – Institut Mines-Télécom.

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

interactions

How will we interact with virtual reality?

The ways we interact with technology change over time and adapt to fit different contexts, bringing new constraints and possibilities. Jan Gugenheimer is a researcher at Télécom Paris and is particularly fascinated by interactions between humans and machines and the way they develop. In this interview, he introduces the questions surrounding our future interactions with virtual reality.

 

What is the difference between virtual, mixed and augmented reality?

Jan Gugenheimer: The most commonly cited definition presents a spectrum. Reality as we perceive it sits at the far left and virtual reality at the far right, with mixed reality in between, and we can create variations along this spectrum. The more of actual reality we keep in our perception, the further left we are; the more real aspects we remove by adding artificial information, the closer we move to virtual reality. Augmented reality is just one point on this spectrum. In general, I prefer to call this spatial computing, with the underlying paradigm that information is not limited to a square space.

How do you study human-machine interactions?

JG: Our approach is to study what happens when a technology leaves the laboratory. Computers have already made this transition: they were once large, stationary machines, and a lot has changed since. We now have smartphones in our pockets at all times. We can see what changed: the input and the interface. I no longer use a keyboard to enter information into my phone; the input had to change to fit a new context. User feedback has also changed: a vibrating computer wouldn’t make sense, but telephones need this feature. These changes happen because the context changes. Our work is to explore these interactions, how they develop, and the technology’s design.

Does this mean imagining new uses for virtual reality?

JG: Yes, and this is where our work becomes more difficult, because we have to make predictions. When we think back to the first smartphone, IBM Simon, nobody would have been able to predict what it would become, the future form of the object, or its use cases. The same is true for virtual reality. We look at the headset and think “this is our very first smartphone!” What will it become? How will we use it in our daily lives?

Do you have a specific example of these interactions?

JG: For example, when we use a virtual reality headset, we make movements in all directions. But imagine using this headset on a bus or public transport: we can’t just hit the people around us. We therefore need to adapt the way information is input into the technology. We propose controlling virtual movements with finger and eye movements. We must then study what works best: how effectively the movements and their amplitude can be controlled, and whether the user can still perceive his or her own movements. This is a very practical aspect, but there are also psychological aspects, issues of immersion into virtual reality, and user fatigue.

Are there risks of misuse?

JG: In general, this pertains to design issues since the design of the technology promotes a certain type of use. For current media, there are research topics on what we call dark design. A designer creates an interface taking certain psychological aspects into account to encourage the application to be used in a certain way, of which you are probably not aware. If you use Twitter, for example, you can “scroll” through an infinite menu, and this compels you to keep consuming.

Some then see the technology as negative, when in fact it is the way it is designed and implemented that makes it what it is. We could imagine a different Twitter, for example, that would display the warning: “you have been connected for a long time, time to take a break”. We wonder what this will look like for spatial computing technology. What is the equivalent of infinite scrolling in virtual reality? We then look for ways to break this cycle and protect users from these psychological effects. We believe that, because these new technologies are still developing, we have the opportunity to make them better.

Should standards be established to require ethically acceptable designs?

JG: That is a big question, and we do not yet have the answer. Should we create regulations? Establish guiding principles? Standards? Raise awareness? This is an open and very interesting topic. How can we create healthier designs? These dark designs can also be used positively, for example to limit toxic behavior online, which makes it difficult to think of standards. I think transparency is crucial. Each person who uses these techniques must make it public, for example, by adding the notice “this application uses persuasive techniques to make you consume more.” This could be a solution, but there are still a lot of open questions surrounding this subject.

Can virtual reality affect our long-term behavior?

JG: Mel Slater, a researcher at the University of Barcelona, is a pioneer in this type of research. To strengthen a sense of empathy, for example, a man could use virtual reality to experience harassment from a woman’s point of view. This offers a new perspective, a different understanding. And we know that exposure to this type of experience can change our behavior outside the virtual reality situation. There is a potential for possibly making us less sexist, less racist, but this also ushers in another set of questions. What if someone uses it for the exact opposite purposes? These are delicate issues involving psychology, design issues and questions about the role we, as scientists and designers, play in the development of these technologies. And I think we should think about the potential negative uses of the technology we are bringing into the world.

 

Tiphaine Claveau for I’MTech

contact tracing applications

COVID-19: contact tracing applications and a new conversational perimeter

The original version of this article (in French) was published in the quarterly newsletter of the Values and Policies of Personal Information Chair (no. 18, September 2020).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]O[/dropcap]n March 11, 2020, the World Health Organization officially declared that our planet was in the midst of a pandemic caused by the spread of Covid-19. First reported in China, then Iran and Italy, the virus spread rapidly and critically wherever it was given the opportunity. In two weeks, the number of cases outside China increased 13-fold and the number of affected countries tripled [1].

Every nation, every State, every administration, every institution, every scientist and politician, every initiative and every willing public and private actor was called on to think and work together to fight this new scourge.

From the manufacture of masks and respirators to the pooling of resources and energy to find a vaccine, all segments of society joined together as our daily lives were transformed, now governed by a large-scale deadly virus. The very structure of the way we operate in society was adapted in the context of an unprecedented lockdown period.

In this collective battle, digital tools were also mobilized.

As early as March 2020, South Korea, Singapore and China announced the creation of contact tracing mobile applications to support their health policies [2].

Also in March, in Europe, Switzerland reported that it was working on the creation of the “SwissCovid” application, in partnership with EPFL in Lausanne and ETH Zurich. This pilot contact tracing application was eventually launched on June 25. SwissCovid is designed to notify users who have been in extended contact with someone who tested positive for the virus, in order to control its spread. To quote the proponents of the application, it is “based on voluntary registration and subject to approval from Swiss Parliament.” Another noteworthy feature is that it is “based on a decentralized approach and relies on application programming interfaces (APIs) from Google and Apple.”
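To make the “decentralized approach” concrete, here is a deliberately simplified sketch in the spirit of DP-3T and the Google/Apple exposure notification design (not the actual protocol): phones broadcast short-lived pseudonyms derived from a secret daily key, and matching happens on the user's own device rather than on a central server.

```python
import hashlib
import secrets

def daily_key() -> bytes:
    """Secret key generated on the phone each day."""
    return secrets.token_bytes(32)

def ephemeral_ids(day_key: bytes, n: int = 96) -> list[bytes]:
    """Rotating pseudonyms derived from the key, broadcast over Bluetooth."""
    return [hashlib.sha256(day_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

# Phone A broadcasts; phone B stores the pseudonyms it hears nearby.
key_a = daily_key()
heard_by_b = set(ephemeral_ids(key_a)[:3])

# If A tests positive, A publishes only its daily key.
# B re-derives A's pseudonyms locally and checks for a match.
if heard_by_b & set(ephemeral_ids(key_a)):
    print("Exposure detected on device B, with no central server involved")
```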

France, after initially dismissing this type of technological component through its Minister of the Interior, who stated that it was “foreign to French culture,” eventually changed its position and created a working group to develop a similar app called “StopCovid”.

In total, no fewer than 33 contact tracing apps were introduced around the world [3], with questionable success.

However, many voices in France, Europe and around the world, have spoken out against the implementation of this type of system, which could seriously infringe on basic rights and freedoms, especially regarding individual privacy and freedom of movement. Others have voiced concern about the possible control of this personal data by the GAFAM or States that are not committed to democratic values.

The security systems for these applications have also been widely debated and disputed, especially the risks of facing a digital virus, in addition to a biological one, due to the rushed creation of these tools.

The President of the CNIL (National Commission for Information Technology and Civil Liberties), Marie-Laure Denis, echoed the key areas for vigilance aimed at limiting the potentially intrusive nature of these tools.

  • First, through an opinion issued on April 24, 2020 on the principle of implementing such an application, the CNIL stated that, given the exceptional circumstances involved in managing the health crisis, it considered the implementation of StopCovid feasible. However, the Commission expressed two reservations: the application should serve the strategy of the end-of-lockdown plan and be designed in a way that protects users’ privacy [4].
  • Then, in its opinion of May 25, 2020, urgently issued for a draft decree related to the StopCovid mobile app [5], the CNIL stated that the application “can be legally deployed as soon as it is found to be a tool that supports manual health investigations and enables faster alerts in the event of contact cases with those infected with the virus, including unknown contacts.” Nevertheless, it considered that “the real usefulness of the device will need to be more specifically studied after its launch. The duration of its implementation must be dependent on the results of this regular assessment.”

From another point of view, there were those who emphasized the importance of digital solutions in limiting the spread of the virus.

No application can heal or stop Covid. Only medicine and a possible vaccine can do this. However, digital technology can certainly contribute to health policy in many ways, and it seems perfectly reasonable that the implementation of contact tracing applications came to the forefront.

What we wish to highlight here is not so much the arguments for or against the design choices in the various applications (centralized or decentralized, sovereign or otherwise) or even against their very existence (with, in each case, questionable and justified points), but the conversational scope that has played a part in all the debates surrounding their implementation.

While our technological progress is impressive in terms of scientific and engineering accomplishments, our capacity to collectively understand interactions between digital progress and our world has always raised questions within the Values and Policies of Personal Information research Chair.

It is, in fact, the very purpose of its existence and the reason why we share these issues with you.

In the midst of urgent action taken on all levels to contain, manage and–we hope–reverse the course of the pandemic, the issue of contact tracing apps has caused us to realize that the debates surrounding digital technology have perhaps finally moved on to a tangible stage involving collective reflection that is more intelligent, democratic and respectful of others.

In Europe, and also in other countries in the world, certain issues have now become part of our shared basis for conversation. These include personal data protection, individual privacy, technology that should be used, the type of data collected and its anonymization, application security, transparency, the availability of their source codes, their operating costs, whether or not to centralize data, their relationship with private or State monopolies, the need in duly justified cases for the digital tracking of populations, independence from the GAFAM [6] and United States [7] (or other third State).

In this respect, given the altogether recent nature of this situation, and our relationship with technological progress, which is no longer deified nor vilified, nor even a fantasy from an imaginary world that is obscure for many, we have progressed. Digital technology truly belongs to us. We have collectively made it ours, moving beyond both blissful techno-solutionism and irrational technophobia.

If you are not yet familiar with this specific subject, please reread the Chair’s posts on Twitter dating back to the start of the pandemic, in which we took time to identify all the elements in this conversational scope pertaining to contact tracing.

The goal is not to reflect on these elements as a whole, or the tone of some of the colorful and theatrical remarks, but rather something we see as new: the quality and wealth of these remarks and their integration in a truly collective, rational and constructive debate.

It was about time!

On August 26, 2020, French Prime Minister Jean Castex made the following statement: “StopCovid did not achieve the desired results, perhaps due to a lack of communication. At the same time, we knew in advance that conducting the first full-scale trial run of this type of tool in the context of this epidemic would be particularly difficult.” [8] Given the human and financial investment, it is clear that the cost-effectiveness ratio does not help the case for StopCovid (and similar applications in other countries) [9].

Further revelations followed when the CNIL released its quarterly opinion on September 14, 2020. While, for the most part, the measures implemented (SI-DEP and Contact Covid data files, the StopCovid application) protected personal data, the Commission identified certain poor practices. It contacted the relevant agencies to ensure they would become compliant in these areas as soon as possible.

In any case, the conclusive outcome that can be factually demonstrated is that remarkable progress has been made in our collective level of discussion, in our scope for conversation in the area of digital technology. We are asking (ourselves) the right questions. Together, we are setting the terms for our objectives: what we can allow, and what we must certainly not allow.

This applies to ethical, legal and technical aspects.

It’s therefore political.

Claire Levallois-Barth and Ivan Meseguer
Co-founders of the Values and Policies of Personal Information research chair

 

 

5G

5G: what is it? How does it work?

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

5G is the fifth generation of standards for mobile networks. Although this technology has fueled many societal debates on its environmental impact, possible health effects, and usefulness, here we will focus on the technological aspects.

How does 5G work? Is it a true technological disruption or simply an improvement on past generations?

Back to the Past

Before examining 5G in detail, let’s take a moment to consider the previous generations. The first (1G) was introduced in the 1980s and, unlike the following generations, was an analogue system. The primary application was car telephones.

2G was introduced in 1992, with the transition to a digital system, and telephones that could make calls and send short messages (SMS). This generation also enabled the first very low speed data transmission, at the speed of the first modems for internet access.

2000 to 2010 was the 3G era. The main improvement was faster data transmission, reaching a rate of a few megabits per second with 3G⁺, allowing for smoother internet browsing. This era also brought the arrival of touch screens, causing data traffic and use of the networks to skyrocket.

Then, from 2010 until today, we transitioned to 4G, with much faster speeds of 10 megabits per second, enabling access to streaming videos.

Faster and Faster

Now we have 5G. With the same primary goal of accelerating data transmission, we should be able to reach an average speed of 100 megabits per second, with peaks at a few gigabits per second under ideal circumstances (10 to 100 times faster than 4G).

This is not a major technological disruption, but an improvement on the former generation. The technology is based on the same principle as 4G: the same waveforms and the same principle of transmission will be used. This principle, called OFDM (orthogonal frequency-division multiplexing), enables parallel transmission: through mathematical processing, it can perform a large number of transmissions on neighboring frequencies, making it possible to transmit more information at once. With 4G, we were limited to 1,200 parallel transmissions. With 5G, we will reach 3,300, with a greater speed for each transmission.
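A back-of-the-envelope calculation shows how subcarrier count, symbol rate and modulation combine into the peak rates quoted above. The parameters below are illustrative (a single antenna stream, a simplified coding rate, no protocol overhead), not exact 3GPP figures:

```python
def peak_rate(subcarriers: int, symbol_rate_hz: float,
              bits_per_symbol: int, coding_rate: float) -> float:
    """Aggregate bit rate of parallel OFDM subcarriers."""
    return subcarriers * symbol_rate_hz * bits_per_symbol * coding_rate

# 4G-like: 1,200 subcarriers at 15 kHz, 64-QAM (6 bits per symbol)
lte = peak_rate(1_200, 15_000, 6, 0.75)    # ~81 Mbit/s
# 5G-like: 3,300 subcarriers at 120 kHz, 256-QAM (8 bits per symbol)
nr = peak_rate(3_300, 120_000, 8, 0.75)    # ~2.4 Gbit/s

print(f"4G-like: {lte / 1e6:.0f} Mbit/s, 5G-like: {nr / 1e9:.1f} Gbit/s")
```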

Initially, 5G will complement 4G: a smartphone will be connected to 4G and transmission with 5G will only occur if a high speed is necessary and, of course, if 5G coverage is available in the area.

A more flexible network

The future network will be configurable, and therefore more flexible. Until now, dedicated hardware was used to operate the networks. For example, the location databases needed to contact a mobile subscriber ran on equipment built by telecom equipment manufacturers.

In the long term, the 5G network will make much greater use of computer virtualization technologies: the location database will be a little like an extremely secure web server that can run on one or several PCs. The same will be true for the various controllers which will guarantee proper data routing when the subscribers move to different areas in the network. The advantage is that the operator will be able to restart the virtual machines, for example in order to adapt to increased demand from users in certain areas or at certain times and, on the other hand, reduce the capacity if there is lower demand.

It will therefore be possible to reconfigure a network when there is a light load (at night for example) by combining controllers and databases in a limited number of control units, thus saving energy.
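The elasticity this enables can be sketched in a few lines; the sizing numbers are invented for illustration, not operator figures:

```python
import math

def controllers_needed(active_users: int,
                       users_per_instance: int = 50_000,
                       minimum: int = 2) -> int:
    """Scale virtualized control-plane instances with demand."""
    return max(minimum, math.ceil(active_users / users_per_instance))

print(controllers_needed(1_200_000))  # evening peak: 24 instances
print(controllers_needed(80_000))     # night: consolidated down to 2
```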

New antennas

As we have seen, 5G technology is not very different from the previous generation. It would even have been possible to deploy it on the same frequencies used for 3G networks.

The operators and government agencies that allocate frequencies chose to use other frequencies. This choice serves several purposes: it satisfies an ever-growing demand for speed and does not penalize users who would like to continue using older generations. Accommodating the increase in traffic requires the Hertzian spectrum (i.e., frequencies) dedicated to mobile networks to be increased. This is only possible with higher frequency ranges: 3.3 GHz, coming very soon, and likely 26 GHz in the future.

Finally, bringing new technology into operation requires a test and fine-tuning phase before the commercial launch. Transitioning to 5G on a band currently used for other technology would significantly reduce the quality perceived by users (temporarily for owners of 5G telephones, definitively for others) and create many dissatisfied customers.

There is no need to increase the number of antenna sites in order to use the new frequencies, but new antennas must be added to existing masts. These antennas host a large number of small antenna elements and, thanks to signal processing algorithms, provide more directive coverage that can be closely controlled. The benefit is more efficient transmission in terms of speed and energy.

For a better understanding, we could use the analogy of flashlights and laser pointers. The flashlight, representing the old antennas, sends out light in all directions in a diffuse manner, consuming a large amount of electricity and lighting a relatively short distance. The laser, on the contrary, consumes less energy to focus the light farther away, but in a very narrow beam. Regardless of the antenna technology, the maximum power of the electromagnetic field produced in any direction will not be allowed to exceed maximum values, for health reasons.

So, if these new antennas consume less energy, is 5G more energy efficient? We might think so since each transmission of information will consume less energy. Unfortunately, with the increasing number of exchanges, it will consume even more energy overall. Furthermore, the use of new frequencies will necessarily lead to an increase in the electric consumption of the operators.

New applications

When new technology is launched on the market, it is hard to predict all its applications. They often appear later and are driven by other stakeholders. That said, we can already imagine several possibilities.

5G will allow for a much lower latency between the sending and receiving of data. Take the example of a surgeon operating remotely with a mechanical arm. When the robot touches a part of the body, the operator will almost instantly (a few milliseconds over a distance of a few kilometers) be able to “feel” the resistance of what he is touching and react accordingly, as if he were operating with his own hands. Low latency is also useful for autonomous cars and remote-controlled vehicles.
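A quick calculation shows why this is plausible: over such distances, propagation itself costs almost nothing, and the millisecond budget is spent in the radio access network and processing, which is precisely what 5G reduces. For 5 km at the speed of light:

\[ t_{\text{prop}} = \frac{d}{c} = \frac{5\times10^{3}\ \text{m}}{3\times10^{8}\ \text{m/s}} \approx 17\ \mu\text{s} \]

Even a round trip costs only tens of microseconds, leaving nearly the entire latency budget to the network equipment.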

In industry, we could imagine connected and automated factories in which numerous machines communicate with one another and with a global network.

5G is also one of the technologies that will enable the development of the Internet of Things. A city equipped with sensors can better manage a variety of services, including public lighting, vehicle flows and garbage collection. Electricity can also be better managed, with consumption adapted to production in real time through many small, interconnected units forming a smart grid.

For the general public, the network’s increased speed will allow them to download any file faster and stream premium quality videos or watch them live.

[divider style=”dotted” top=”20″ bottom=”20″]

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

This article from The Conversation is republished here under a Creative Commons license. Read the original article (in French).

 

Photograph of a Regio2N train, the same model as the thermoplastic resin demonstrator developed by the Destiny project

Trains made with recyclable parts

The Destiny project proposes a new process for manufacturing parts for the railway and aeronautical industries. It uses a thermoplastic resin, which enables the materials to be recycled while limiting the pollution associated with manufacturing them.

 

It is increasingly critical to be able to recycle products so as to lower the environmental cost of their production. The composite parts used in the railway sector have a service life of roughly 30 years and it is difficult and expensive to recycle them. They are mostly made from thermosetting resins — meaning they harden as the result of a chemical reaction that starts during the molding process. Once they have reached a solid state, they cannot be melted again. This means that if the parts cannot be repaired, they are destroyed.

The Destiny project brings together several industrial and academic partners[1] to respond to this need. “The goal is to be able to migrate towards recyclable materials in the railway and aeronautical industries,” says David Cnockaert, head of the project at Stratiforme Industries, a company that specializes in composite materials. Destiny won an Innovation Award at JEC World 2020 for two demonstrators made from recyclable composite materials: a regional train cabin and a railway access door.

A resin that can be melted

“An easy solution would be to use metal, which is easy to recycle,” says David Cnockaert, “but we also have to take into account the requirements of this sector in terms of mass, design, and thermal and acoustic aspects.” The purpose of the Destiny project is to develop a solution that can easily be tailored to current products while improving their environmental qualities. The materials used for reference parts in the railway industry are composites, made with a resin and fiberglass or carbon fiber. During the stratification stage, these fibers are impregnated with resin to form composite materials.

“In the Destiny project, we’re developing thermoplastic resins to create these parts,” says Eric Lafranche, a researcher at IMT Lille Douai involved in the project. Unlike thermosetting resins, thermoplastic resins regain plasticity at very high temperatures, changing from a solid to a viscous state. This means that if a train part is too damaged to be repaired, it can be reprocessed so that the recyclates can be reused.

The resin is produced by Arkema in liquid form, with very low viscosity. “A consistency close to that of water is required to impregnate the fiberglass or carbon fibers during polymerization,” explains Eric Lafranche. “Polymerization takes place directly in the mold, and this process allows us to avoid using certain components, namely those that release volatile organic compounds (VOCs),” he adds. The production of VOCs is therefore greatly limited in comparison with other resins. “People who work in proximity to these VOCs have protective equipment, but they are still a source of pollution, so it’s better to be able to limit them,” says Eric Lafranche.

Read more on I’MTech: What is a Volatile Organic Compound (VOC)?

Tailored innovation

This thermoplastic resin provides properties that are virtually equivalent to those of thermosetting resins, “with even better resilience to shocks,” adds the researcher. In theory, the resin can be recycled indefinitely. “In practice, it’s a bit more complicated – it can lose certain properties after being recycled repeatedly,” admits the researcher. “But these are minimal losses, and we can mix the recycled material with virgin material to ensure equivalent properties,” he explains.

The aim of the project is to be able to offer manufacturers recyclable materials while limiting the pollution associated with their production, but to do so by offering parts that are interchangeable with current ones. “The entire purpose of the project is to provide manufacturers with a solution that is easily accessible, which may therefore be easily tailored to current production lines,” says David Cnockaert. This means that the recyclable parts must comply with the same specifications as their thermosetting counterparts in order to be installed. This solution could also be adapted to other industries in the future. “We could consider applications in the energy, defense or medical industries, for example, for which we also manufacture composite parts,” concludes David Cnockaert.

 

Tiphaine Claveau for I’MTech

[1] Accredited by the i-TRANS and Aerospace Valley competitiveness clusters, the Destiny FUI project brings together Stratiforme Industries, STELIA Composite, ASMA, CANOE, Crépim, ARKEMA and an ARMINES/IMT Lille Douai research team.