Awards

Three new winners of the IMT-Académie des Sciences awards

The IMT-Académie des Sciences Awards, established in 2017, honor outstanding European scientific contributions to the fields of digital technology, energy and the environment. The 2018 formal awards ceremony was held on Tuesday, November 20th in the dome of the Institut de France. The three award winners (two “Grand Prix” winners and one “Young Scientist” winner) answered our questions.

 

Awards Ceremony at the Institut de France, 20 November 2018

 

Two winners share the IMT-Académie des Sciences Grand Prix

For this second edition, the jury selected two winners of the IMT-Académie des Sciences Grand Prix: Ange Nzihou, director of the RAPSODEE-CNRS laboratory at IMT Mines Albi, and Pierre Comon, CNRS research director at Grenoble INP.

– Ange NZIHOU, 52, is a pioneer in waste recovery. His research has led to the development of new energy production mechanisms using industrial, urban and biomass waste. Ange Nzihou has greatly contributed to creating an international scientific community on this subject.

– Pierre COMON first made his mark in the field of signal processing. He significantly contributed to laying the foundations of signal source separation techniques. He then focused his work on the factorization of tensors (mathematical objects that generalize matrices), which has applications in imaging and in chemistry for detecting toxic molecules.

The IMT-Académie des Sciences Young Scientist Prize

The IMT-Académie des Sciences Young Scientist Prize was awarded to Ioan-Mihai Miron, research director at CNRS-CEA.

Ioan-Mihai MIRON, 38, is a global expert in spintronics, a field that studies and seeks to control the magnetic properties of electrons, known as spin. His work has applications in the development of new data storage technologies that are both more powerful and more energy efficient.

 

Mihai Miron

Ioan-Mihai Miron: Magnetism and Memory

Ioan-Mihai Miron’s research in spintronics focuses on new magnetic systems for storing information. The research carried out at the Spintec laboratory in Grenoble is still young, having begun in 2011, yet it already shows major potential for overcoming the current limits of computer memory technology. It also offers a solution to the problems that have so far prevented magnetic memories from being developed industrially. Ioan-Mihai Miron received the 2018 IMT-Académie des Sciences Young Scientist Award for this groundbreaking and promising research.

 

Ioan-Mihai Miron’s research is a matter of memory… and a little architecture too. When presenting his work on the design of new nanostructures for storing information, the researcher from Spintec* uses a three-level pyramid diagram. The base represents broad and robust mass memory. Its large size enables it to store large amounts of information, but it is slow to access. The second level is the central memory, which is not as big but faster to access. It holds the information required to launch programs. Finally, the top of the pyramid is cache memory, which is much smaller but much faster to reach. “The processor only works with this cache memory,” the researcher explains. “The rest of the computer system is there to retrieve information lower down in the pyramid as fast as possible and bring it up to the top.”
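To make the pyramid image concrete, here is a minimal illustrative sketch (in Python) of such a hierarchy: the processor queries the small, fast level first and only falls back on the slower levels beneath it. The sizes and access times are invented round numbers, not measurements of any real machine.

```python
# Illustrative model of a three-level memory pyramid (hypothetical numbers).
# Each level is (name, capacity in items, access cost in nanoseconds).
HIERARCHY = [
    ("cache memory", 8, 1),        # top of the pyramid: tiny but fast
    ("central memory", 1024, 100), # middle level
    ("mass memory", 10**9, 10**7), # base: huge but slow to reach
]

def read(address, contents):
    """Look for `address` level by level, returning where it was found
    and the total time spent. `contents` maps each level to its addresses."""
    total_ns = 0
    for name, _capacity, cost_ns in HIERARCHY:
        total_ns += cost_ns
        if address in contents[name]:
            return name, total_ns
    raise KeyError(address)

contents = {
    "cache memory": {0, 1, 2},
    "central memory": set(range(100)),
    "mass memory": set(range(100_000)),
}

print(read(1, contents))       # found at the top: ('cache memory', 1)
print(read(99, contents))      # one level down: ('central memory', 101)
print(read(50_000, contents))  # all the way to the base
```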

Of course, computers do not actually contain pyramids. In microelectronics, this memory architecture takes the form of thousands of microscopic transistors that provide the central and cache memories. They work as switches, storing information in binary format by either letting the current flow or blocking it. With the commercial demand for miniaturization, transistors have gradually reached their limit. “The smaller the transistor, the greater the standby consumption,” Ioan-Mihai Miron explains. This is why the aim is now for the memories at the top of the pyramid to rely on new technologies that store information at the level of the electrons themselves. By modifying the current sent into a magnetic material, the magnetization can be altered at certain points. “The material’s electrical resistance will differ depending on this magnetization, meaning information is being stored,” Ioan-Mihai Miron explains. In simpler terms, a high electrical resistance corresponds to one value and a low resistance to another, which forms a binary system.

In practical terms, information is written in these magnetic materials by sending two perpendicular currents, one above and one below the material; the magnetization is modified at their point of intersection. While this principle is not new, it is still not used for cache memory in commercial products. Pairing magnetic technologies with this type of data storage has remained a major industrial challenge for almost 20 years. “Memory capacities are still too low in comparison with transistors, and miniaturizing the system is complicated,” the researcher explains. These two disadvantages are not offset by the energy savings that the technology offers.
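As a rough illustration of the principle described above (two selection currents and a resistance read-out), rather than of any actual MRAM product, the following sketch models a small grid of magnetic cells: a write selects one row and one column and flips only the cell at their intersection, while a read compares the cell’s resistance to a threshold to recover the stored bit. All values are invented for the example.

```python
# Hypothetical resistance values (in ohms) for the two magnetization states.
R_LOW, R_HIGH, THRESHOLD = 1_000.0, 2_000.0, 1_500.0

# A 4x4 grid of magnetic cells, all starting in the low-resistance state ("0").
cells = [[R_LOW] * 4 for _ in range(4)]

def write(row, col, bit):
    """Select one row 'current' and one column 'current': only the cell
    at their intersection has its magnetization switched."""
    cells[row][col] = R_HIGH if bit else R_LOW

def read(row, col):
    """Recover the stored bit by comparing the cell's resistance to a threshold."""
    return 1 if cells[row][col] > THRESHOLD else 0

write(2, 3, 1)
print(read(2, 3), read(0, 0))  # -> 1 0
```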

To compensate for these limitations, the scientific community has developed a simplified geometry for these magnetic architectures. “Rather than intersecting two currents, a new approach has been to send only a single current along a linear path through the material,” Ioan-Mihai Miron explains. “But while this technique solved the miniaturization and memory capacity problems, it created others.” In particular, writing the information involves applying a strong electric current that can damage the element where the information is stored. “As a result, the writing speed is not sufficient. At 5 nanoseconds, it is slower than the latest generations of transistor-based memory technology.”

Electrical geometry

In the early 2010s, Ioan-Mihai Miron’s research opened major prospects for solving all these problems. By slightly modifying the geometry of the magnetic structures, he demonstrated the possibility of writing information in under a nanosecond. And, at the same size, the memory capacity is greater. The principle is based on sending a current in a plane parallel to the layers of the magnetized material, whereas previously the current had been perpendicular to them. This difference makes the change in magnetization faster and more precise. The technology developed by Ioan-Mihai Miron offers still more benefits: less wear on the elements and the elimination of writing errors. It is called SOT-MRAM, for Spin-Orbit Torque Magnetic Random Access Memory. This technical name reflects the complexity of the effects at work in the electron layers of magnetic materials exposed to interacting electrical currents.

The nanostructures developed by Ioan-Mihai Miron and his team are opening new prospects for magnetic memories.

 

The successive developments in magnetic memories may appear incremental. At first glance, switching from two perpendicular currents to one linear current to save a few nanoseconds seems a minor advance. However, the resulting changes in performance offer considerable opportunities for industrial actors. “SOT-MRAM has only been in existence since 2011, yet all the major microelectronics businesses already have R&D programs on this technology that is fresh out of the laboratory,” says Ioan-Mihai Miron. SOT-MRAM is seen as the technology capable of bringing magnetic memory into the cache memory arena.

The winner of the 2018 IMT-Académie des Sciences Young Scientist award seeks to remain realistic about the industrial sector’s expectations for SOT-MRAM. “Transistor-based memories are continuing to improve at the same time and have recently made significant progress,” he notes. Not to mention that these technologies have been mature for decades, whereas SOT-MRAM has not yet passed the ten-year milestone of research and refinement. According to Ioan-Mihai Miron, this technology should not be seen as a complete break with previous technology, but as an alternative that is gaining ground quickly and opening up significant competitive opportunities.

But there are still steps to be taken to optimize SOT-MRAM and integrate it into our computer products. These steps may take a few years. In the meantime, Ioan-Mihai Miron is continuing his research on memory architectures, while increasingly entrusting SOT-MRAM to those best suited to transferring it to society. “I prefer to look elsewhere rather than working to improve this technology. What interests me is discovering new capacities for storing information, and these discoveries happen a bit by chance. I therefore want to try other things to see what happens.”

*Spintec is a joint research unit of CNRS, CEA and Université Grenoble Alpes.

[author title=”Ioan-Mihai Miron: a young expert in memory technology” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/11/mihai.png”]

Ioan-Mihai Miron is a researcher at the Spintec laboratory in Grenoble. His major contribution is the discovery of magnetization reversal caused by spin-orbit coupling. This discovery offers significant potential for reducing the energy consumption and increasing the reliability of MRAM, a new type of non-volatile memory compatible with the rapid development of the latest computing processors. This new memory should eventually come to replace SRAM memories alongside processors.

Ioan-Mihai Miron is considered a world expert, as shown by the numerous citations of his publications (over 3,000 citations in a very short period of time). In 2014 he was awarded an ERC Starting Grant. His research has also led to several patents and contributed to creating the company Antaios, which won the Grand Prix in the i-Lab innovative company creation competition in 2016. Fundraising is currently underway, demonstrating the economic and industrial impact of the work carried out by the winner of the 2018 IMT-Académie des Sciences Young Scientist award.[/author]

Pierre Comon

Pierre Comon: from the brain to space, searching for a single solution

Pierre Comon’s research focuses on a subject that is as simple as it is complex: how to find a single solution to a problem. From the environment to health and telecommunications, this researcher in information science at GIPSA-Lab takes on a wide range of issues. Winner of the 2018 IMT-Académie des Sciences Grand Prix, he juggles mathematical abstraction with the practical, scientific realities of the field.

 

When asked to explain what a tensor is, Pierre Comon gives two answers. The first is academic, factual, and rather unappealing despite an attempt at popularization: “it is a mathematical object that is equivalent to a polynomial with several variables.” The second answer reveals a researcher who is conscious of the abstract nature of his work, passionate about explaining it and experienced at doing so. “If I want to determine the concentration of a molecule in a sample, or the exact position of a satellite in space, I need a single solution to my mathematical problem. I do not want several possible positions for my satellite or several concentration values, I want only one. Tensors allow me to achieve this.”

Tensors are particularly powerful when the number of parameters is not too high. For example, they cannot be used to find the unknown positions of 100 satellites with only 2 antennas. However, when the ratio between the parameters to be determined and the data samples is balanced, they become a very useful tool. There are many applications for tensors, including telecommunications, the environment and healthcare.

Pierre Comon recently worked on tensor methods for medical imaging at the GIPSA-Lab* in Grenoble. For patients with epilepsy, one of the major problems is locating the source of the seizures in the brain. Doing so not only makes it possible to treat the disease, but also to prepare for surgery if needed. “When the disease is too resistant to treatment, it is sometimes necessary to perform an ablation,” the researcher explains.

Today, these points are localized using invasive methods: probes are introduced into the patient’s skull to record brainwaves, a stage that is particularly difficult for patients. The goal is therefore to find a way to determine the same parameters using non-invasive techniques, such as electroencephalography and magnetoencephalography. Tensor tools are integrated into the algorithms used to process the brain signals recorded through these methods. “We have obtained promising results,” explains Pierre Comon. Although he admits that invasive methods currently remain more effective, he also points out that they are older. Research on this topic is still young but already gives reason to hope that treating certain brain diseases could become less burdensome for patients.

An entire world in one pixel

For environmental applications, on the other hand, the results are far less speculative. Over the past decade, Pierre Comon has demonstrated the relevance of tensors for planetary imaging. In satellite remote sensing, each pixel can cover anywhere from a few square meters to several square kilometers. The elements present in each pixel are therefore very diverse: forests, ice, bodies of water, limestone or granite formations, roads, farm fields, etc. Detecting these different elements can be difficult depending on the resolution. Yet there is a clear benefit in being able to automatically determine the number of elements within one pixel. Is it just a forest? Is there a lake or road running through this forest? What is the rock type?

The tensor approach answers these questions. It makes it possible to break down pixels by indicating the amounts of their different components. Better still, it can do this “without using a dictionary, in other words, without knowing ahead of time what elements might be in the pixel,” the researcher explains. This is possible thanks to an intrinsic property of tensors, which Pierre Comon has brought to light: under certain mathematical conditions, they can be decomposed in only one way. In practice, for satellite imaging, a minimum number of variables is required: the intensity received for each pixel, each wavelength and each angle of incidence must be known. The uniqueness of the tensor decomposition then makes it possible to retrieve the exact proportion of the different elements in each image pixel.
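To give a flavor of what decomposing such a (pixel × wavelength × angle) data cube can look like, here is a minimal sketch of a rank-3 CP (Candecomp/Parafac) decomposition computed by alternating least squares on synthetic data. It illustrates the general technique, not the specific algorithms used by Pierre Comon and his colleagues; the scene, signatures and rank are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def khatri_rao(U, V):
    """Column-wise Kronecker product: (I*J, R) from (I, R) and (J, R)."""
    I, R = U.shape
    J = V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(I * J, R)

def cp_als(X, rank, n_iter=200):
    """Rank-`rank` CP decomposition of a 3-way tensor X by alternating least squares."""
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, J * K)                     # mode-0 unfolding
    X1 = X.transpose(1, 0, 2).reshape(J, I * K)  # mode-1 unfolding
    X2 = X.transpose(2, 0, 1).reshape(K, I * J)  # mode-2 unfolding
    for _ in range(n_iter):
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic scene: 50 pixels, 20 wavelengths, 5 angles, 3 pure materials.
abundances = rng.random((50, 3))   # proportion of each material per pixel
spectra = rng.random((20, 3))      # spectral signature of each material
angular = rng.random((5, 3))       # angular response of each material
X = np.einsum('ir,jr,kr->ijk', abundances, spectra, angular)

A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```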

For planet Earth, this approach has limited benefits, since the various elements are already well known. However, it could be particularly helpful in monitoring how forests or water supplies develop. On the other hand, the tensor approach is especially useful for other planets in the solar system. “We have tested our algorithms on images of Mars,” says Pierre Comon. “They helped us to detect different types of ice.” For planets that are still very distant and not as well known, the advantage of this “dictionary free” approach is that it helps bring unknown geological features to light. Whereas the human mind tends to compare what it sees with something it is familiar with, the tensor approach offers a neutral description and can help reveal structures with unknown geochemical properties.

The common theme: a single solution

Throughout his career, Pierre Comon has sought to understand how a single solution can be found for mathematical problems. His first major research in this area began in 1989 and focused on blind source separation in telecommunications: how can the mixed signals from two transmitting antennas be separated without knowing where the antennas are located? “Already at that point, it was a matter of finding a single solution,” the researcher recalls. This research led him to develop techniques for analyzing signals and decomposing them into independent components in order to determine the source of each one.
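The textbook version of this problem can be sketched in a few lines: two source signals are mixed by an unknown matrix, and independent component analysis recovers them up to ordering and scale. The sketch below uses scikit-learn’s FastICA as a generic stand-in; it illustrates the idea of blind source separation, not the specific methods Pierre Comon developed.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two independent sources: a sinusoid and a square wave.
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]

# Unknown mixing: each 'antenna' receives a different combination of the sources.
A = np.array([[1.0, 0.5],
              [0.7, 1.2]])
X = S @ A.T   # observed mixtures, shape (n_samples, n_antennas)

# Blind source separation: recover the sources without knowing A.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# The recovered signals match the originals up to permutation and scale.
for est in S_est.T:
    corr = max(abs(np.corrcoef(est, s)[0, 1]) for s in (s1, s2))
    print(f"best correlation with a true source: {corr:.3f}")
```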

The results he proposed in this area during the 1990s resonated widely in both academia and industry. In 1988, he joined Thales and filed several patents used to analyze satellite signals. His pioneering article on independent component analysis has been cited by fellow researchers thousands of times and continues to be used by scientists. According to Pierre Comon, this work laid the foundation for his research topic. “My results at the time allowed us to understand the conditions for the uniqueness of a solution but did not always provide the solution. That required something else.” That “something else” is, in part, tensors, which he has shown to be valuable in finding single solutions.

His projects now focus on increasing the number of practical applications of his research. Beyond the environment, telecommunications and brain imaging, his work also involves chemistry and public health. “One of the ideas I am currently very committed to is that of developing an affordable device for quickly determining the levels of toxic molecules in urine,” he explains. This type of device would quickly reveal polycyclic aromatic hydrocarbon contaminations—a category of harmful compounds found in paints. Here again, Pierre Comon must determine certain parameters in order to identify the concentration of pollutants.

*The GIPSA-Lab is a joint research unit of CNRS, Université Grenoble Alpes and Grenoble INP.

[author title=”Pierre Comon: the mathematics of practical problems” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/11/pierre-comon.jpg”]Pierre Comon is known in the international scientific community for his major contributions to signal processing. He became interested very early on in exploring higher-order statistics for separating sources, establishing the foundational theory of independent component analysis, which has since become one of the standard tools for the statistical processing of data. His recent significant contributions include highly original results on tensor factorization.

The applications of Pierre Comon’s contributions are very diverse and include telecommunications, sensor networks, health and the environment. All these areas demonstrate the scope and impact of his work. His long industrial experience, his strong desire to ground his scientific approach in practical problems and his great care in developing algorithms that implement his results all further demonstrate how strongly Pierre Comon’s qualities match the criteria for the 2018 IMT-Académie des Sciences Grand Prix.[/author]

 

Yelda and OSO-AI

Yelda and OSO-AI: new start-ups receive honor loans

On December 6, the Committee for the Digital Fund of the Graduate Schools and Universities Initiative chose two start-ups to receive honor loans: Yelda and OSO-AI. Together, Yelda, a start-up from the IMT Starter incubator, and OSO-AI, from the incubator at IMT Atlantique, will receive three honor loans, for a total of €80,000.

These interest-free loans, aimed at boosting the development of promising young companies, are co-financed by the Fondation Mines-Télécom, the Caisse des Dépôts and Revital’Emploi. This initiative has supported over 84 start-ups since 2012.

 

[box type=”shadow” align=”” class=”” width=””]

Yelda is developing the first voice assistant for companies. The start-up’s team—composed of experts in bots, automatic natural language processing, voice management and machine learning—is convinced that chat and voice interactions will soon replace traditional interfaces. This will revolutionize the way users interact with companies, for both customers and employees. Yelda, incubated at IMT Starter, received an honor loan of €40,000. Find out more [/box]

[box type=”shadow” align=”” class=”” width=””]

OSO-AI is already improving quality of life for the hearing impaired. The start-up intends to become the leading partner in artificial intelligence for hearing aids and to invent Augmented Auditory Reality. Incubated at IMT Atlantique, it received an honor loan of €30,000 and another of €10,000. Find out more [/box]

 

migrants

How has digital technology changed migrants’ lives?

Over the past few decades, migrants have become increasingly connected, as have societies in both their home and host countries. New technologies allow them to maintain ties with their home countries while helping them integrate into their new ones, and they also play an important role in the migration process itself. Dana Diminescu, a sociologist at Télécom ParisTech, explores this link between migration and digital technology and challenges the traditional image of the uprooted migrant. She explains how new uses have changed migratory processes and migrants’ lives.

 

When did the link between migration and digital technology first appear?

Dana Diminescu: The link really became clear during the migration crisis of 2015. Media coverage highlighted migrants’ use of smartphones, and the public discovered the role telephones play in the migration process. A sort of “technophoria” emerged around refugees, which led to a great number of hackathons being organized to create applications to help migrants, with varying degrees of success. In reality, migrants were already connected well before the media hype of 2015. In 2003, I had already written an epistemological manifesto on the figure of the connected migrant, based on observations dating from the late 1990s.

In the 1990s, smartphones didn’t exist yet; how were migrants ‘connected’ at that time?

DD: My earliest observation was the use of a mobile phone by a collective of migrants living in a squat. For them, the telephone was a real revolution and an invaluable resource. They used it to develop a network and find contacts, which helped them find jobs and housing; in short, it helped them integrate into society. Two years later, those who had been living in the squat had gotten off the street, and the mobile phone played a large role in making this possible.

What has replaced this mobile phone today?

DD: Social media play a very strong role in supporting integration for all migrants, regardless of their home country or cultural capital. One of the first things they do when they get to their country of destination is to use Facebook to find contacts. WhatsApp is also widely used to develop networks. And YouTube helps them learn languages and professional skills.

Dana Diminescu has been studying the link between migrants and new technologies since the late 1990s.

Are digital tools only useful in terms of helping migrants integrate?

DD: No, that’s not all – they also have an immediate, performative effect on the migratory process itself. In other words, an individual’s action on social media can have an almost instantaneous effect on migration movements. A message posted by a migrant showing that he was able to cross the border at a certain place on the Balkan route creates movement: other migrants will adjust their journey that same day. That’s why we now talk about migration traceability rather than migration movement. Each migrant uses and leaves behind a record of his or her journey, and these are the records used in sociology to understand migrants’ choices and actions.

Does the importance of digital technology in migration activity challenge the traditional image of the migrant?

DD: For a long time, humanities research focused on the figure of the uprooted migrant. In this perception, migrants are at once absent from their home country and absent from their destination country, since they find it difficult to fit in completely. New technologies have had an impact on this view, because they have made these forms of presence more visible. Today, migrants can use tools like Skype to see their family and loved ones from afar and instantly. In interviews, migrants tell me, “I don’t have anything to tell them when I go back to see them since I’ve already told them everything on Skype.” As for presence in their destination countries, digital tools play an increasingly important role in access, whether for biometric passports or cards for work, transport, etc. For migrants, the use of these different tools makes their way of life very different from what it would have been a few years ago, when such access had not yet been digitized. It is now easier for them to exercise their rights.

Does this have an effect on the notion of borders?

DD: Geographical borders don’t have the same meaning they used to. As one migrant explained in his account one day, “They looked for me on the screen, they didn’t find me, I got through.” Borders are now based on our personal data: they’re connected to our date of birth, digital identities, locations. These are the borders migrants have to get around today. That’s why their telephones are confiscated by smugglers so that they can’t be traced, or why they don’t bring their smartphones with them, so that border police won’t be able to force them to open Facebook.

So digital technology can represent a hurdle for migrants?

DD: Since migrants are connected, they can, of course, be traced. This limiting aspect of digital technology also exists in the uses of new technology in destination countries. Technology has increased the burden of the informal contract between those who leave and those who stay behind. Families expect migrants to be very present, available at the times the family is used to spending in their company. In interviews, migrants say that it’s a bit like a second job: they don’t want to appear to have broken away, so they have to check in. At times, this leads migrants to lie, saying that they’ve lost their mobile phone or that they don’t have internet access, to free themselves from the burden of this informal contract. In this case, digital technology is seen as a constraint, and at times it can even be highly detrimental to social well-being.

In what sort of situations is this the case?

DD: In refugee camps, we’ve observed practices that cut migrants off from social ties. In Jordan, for example, it’s impossible to send children to get food for their parents. Individuals must identify themselves with a biometric eye scanner and that’s the only way for them to receive their rations. If they can’t send their children, they can’t send their friends or neighbors either. There is a sort of destruction of the social fabric and support networks. Normal community relationships become impossible for these refugees. In a way, these technologies have given rise to new forms of discrimination.

Does this mean we must remain cautious?

DD: We must be wary of digital solutionism. We conducted a research project with Simplon on websites that provide assistance for migrants. A hundred sites were listed. We found that, for the most part, the sites were either not usable or not finalized – and when they were, they were rarely used. Migrants still prefer using social media over dedicated digital tools. For example, they would still rather learn a language with Google Translate than use a language-learning application. They realize that they need certain things to facilitate their learning and integration process; it’s just that the tools developed for these purposes aren’t effective. So we have to be cautious and acknowledge that there are limitations to digital technology. What could we delegate to a machine in the realm of hospitality? How many humans are there behind training programs and personal support organizations?

 

Audrey Francisco-Bosson

Audrey Francisco-Bosson, particle tracker

Audrey Francisco-Bosson has just won a L’Oréal-UNESCO For Women in Science Scholarship. This well-deserved award is in recognition of the young researcher’s PhD work in fundamental physics, carried out at the Subatech laboratory at IMT Atlantique. By exploring the furthest depths of matter through the eyes of the ALICE detector of the Large Hadron Collider (LHC) at CERN, Audrey Francisco-Bosson tracks particles in order to better understand the mysterious quark-gluon plasma.

 

How can matter be reproduced to represent its state at the origin of the universe? 

Audrey Francisco-Bosson: At our level, all matter is made up of atoms, whose nuclei are composed of protons and neutrons. Inside these protons and neutrons are quarks, bound together by gluons through what we call the “strong interaction.” The Large Hadron Collider (LHC) at CERN allows us to break atoms apart in order to study this strong interaction. When heavy nuclei collide with one another, enough energy is released to free the quarks. What we end up with is a state of matter in which the quarks and the gluons are no longer bound together: the quark-gluon plasma. This state corresponds to that of the universe a few microseconds after the Big Bang, when the temperature was 100,000 times higher than that of the sun’s core.

What do you look at in the plasma?

AFB: The plasma itself has a very short lifetime: over a billion times shorter than a nanosecond. We cannot observe it. We can, however, observe the particles that are produced in this plasma. When they cool down, the quarks and gluons which were released in the plasma join together to form new particles. We measure their energy, momentum, charge and mass in order to identify and characterize them. All of these aspects provide us with information about the plasma. Since there are lots of different particles, it’s important to specialize a bit. For my PhD thesis I concentrated on the J/ψ particle.

Audrey Francisco-Bosson, winner of a 2018 L’Oréal-Unesco For Women in Science Scholarship. Photo: Fondation L’Oréal/Carl Diner.

 

What is special about the J/ψ particle?

AFB: Researchers have been interested in it for a long time, since it has been identified as a good probe for measuring the temperature of the plasma. It is composed of a pair of quarks, which breaks apart above a certain temperature. Researchers had historically suspected that by looking at whether or not the pair had split apart, it would be possible to derive the temperature of the quark-gluon plasma. In practice, it turned out to be a bit more complicated than that, but the J/ψ particle is still used as a probe for the plasma. For my purposes, I used it to deduce information about the plasma’s viscosity rather than its temperature.

How do you use J/ψ to deduce the viscosity of the quark-gluon plasma?

AFB: It’s important to understand that there are huge pressure variations in the environment we’re observing. The particles do not all have the same characteristics and, importantly, they aren’t all the same weight, so they are distributed according to the pressure differences. Since the J/ψ is quite heavy, observing how it moves allows us to observe the flow of the plasma: as in a river, objects travel at different speeds depending on their weight. By combining these observations of J/ψ particles with those of other particles, we deduce the viscosity properties of the plasma. That’s how it was proved that the quark-gluon plasma doesn’t behave like a gas, as we had thought, but like an inviscid fluid.

Does your scientific community still have any big questions about the quark-gluon plasma that the J/ψ particle could help answer?

AFB: One of the big questions is finding out at what moment this fluid characteristic is reached. That means that we can use the laws of fluid mechanics, and those of hydrodynamics in particular, to describe it. More generally, all this research makes it possible to test the properties and laws of quantum chromodynamics. This theory describes the strong interaction that binds the quarks. By testing this theory, we can assess whether the model used to describe matter is correct.

You are going to start working at Yale University in the USA in the coming weeks. What kind of research will you be carrying out there?

AFB: I’ll be working on the results of the STAR detector, located at the heart of the RHIC collider. It’s similar to the ALICE detector at the LHC but works at different collision energies. The two detectors are complementary, so they allow us to compare results, study the variations from one energy to another and deduce new information about the plasma. For my part, the idea will also be to analyze collision data, as I did with ALICE. I’ll also work on developing new sensors. It’s an important task for me since I studied physical engineering before beginning my PhD thesis; I like to really understand how a detector works before using it. That’s also why, during my PhD thesis, I worked on a new sensor for ALICE, which will be installed on the detector in 2021.

 


atmosphere

What’s new in the atmosphere?

In conjunction with the 4th National Conference on Air Quality, held in Montrouge on 9 and 10 October 2018, I’MTech sat down with François Mathé, a researcher in atmospheric sciences at IMT Lille Douai, to ask him five questions. He gave us a glimpse of the major changes ahead in measuring and monitoring air pollutants. From the revision of the ATMO index to technical challenges, he explains the role scientists play in one of today’s major public health and environmental challenges.

 

The ATMO index, which in France represents air quality with a number ranging from 1 (very good) to 10 (very bad), is going to be revised. What is the purpose of this change?

François Mathé: The idea behind an index for outdoor ambient air quality is to provide a daily report on the state of the atmosphere in a clear, easily accessible way for people who live in cities with over 100,000 residents. The ATMO index is based on measured concentrations of pollutants that are representative of their origins: ozone (O3), particulate matter (PM10), nitrogen dioxide (NO2) and sulfur dioxide (SO2). A sub-index is calculated daily for each of these chemical species, based on the average pollution levels recorded at specific stations — those representative of ambient, or “background,” pollution. The highest sub-index becomes the ATMO index; the higher the value, the lower the air quality. The problem is that this approach doesn’t take into account proximity phenomena, such as vehicle or industrial emissions, or the cocktail effect — if the four pollutants all have a sub-index of 6, the ATMO index will be lower than if three of them have a sub-index of 1 and the fourth has a sub-index of 8. Yet the cocktail effect can have short- or long-term impacts on health. This is one of the reasons for the revision planned in the near future: to better report on the state of the atmosphere, while updating the list of pollutants taken into consideration and making our national index consistent with those used by our European neighbors.
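To make the aggregation rule concrete, here is a minimal sketch of the “highest sub-index wins” principle described above. The sub-indices are given directly rather than computed from concentrations (the real scales are set by regulation, pollutant by pollutant); the point is simply to show why a uniform cocktail of 6s scores lower than a single 8.

```python
# Simplified illustration of the ATMO aggregation rule: the overall index is
# the highest of the pollutant sub-indices (1 = very good, 10 = very bad).
# Sub-indices are supplied directly; the real conversion scales are regulatory.

def atmo_index(sub_indices):
    """Return the overall index for a dict of pollutant sub-indices."""
    return max(sub_indices.values())

cocktail = {"O3": 6, "PM10": 6, "NO2": 6, "SO2": 6}
single_peak = {"O3": 1, "PM10": 1, "NO2": 1, "SO2": 8}

print(atmo_index(cocktail))     # 6: four moderately high pollutants...
print(atmo_index(single_peak))  # 8: ...score lower than one high pollutant alone
```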

Why does the list of pollutants considered have to be updated?

FM: Sulfur dioxide (SO2) and carbon monoxide (CO) are chemical compounds that were in the spotlight for a long time. Although their toxicity is a real issue, these pollutants are now associated with very specific, clearly defined situations, such as industrial sites or underground parking lots. At the national level, it is no longer appropriate to take them into account. Conversely, new species are emerging that are worth monitoring on a national scale. In June, ANSES published a notice on non-regulated air pollutants that should be taken into consideration in monitoring air quality. The list includes pollutants such as 1,3-butadiene, ultrafine particles (UFP), black carbon, and others. In France, we also have a very specific problem: plant protection products, i.e. pesticides. ANSES has established a list of over 90 of these products, which are currently being assessed through a year-long project covering the entire French territory. As a result, all of these “new pollutants” require us to re-examine how air quality is presented to citizens. In addition, we could mention pollens, which are often “marginalized” when it comes to monitoring air quality in France.

Against this backdrop of changing the way air quality is assessed and represented, what role do researchers play?

FM: Behind these notions of measuring, monitoring and representing air quality there are regulations, at both the national and European level. And regulations imply technical standards and guidelines for the organizational aspects. That’s where the Central Laboratory for Air Quality Monitoring (LCSQA) comes in. It serves as the central scientific reference body, bringing together IMT Lille Douai, Ineris and LNE (the national laboratory of metrology and testing). By pooling different skills, this body acts as a foundation of expertise. It is responsible for tasks such as validating technical documents that establish the methodologies to be applied for measuring pollutants, setting requirements for using devices, verifying the technical compliance of the instruments themselves, and so on. For example, we conduct tests on sensors in the laboratory and in real conditions to assess their performance and make sure that they measure correctly compared with reference instruments.

Where do the regulations and guidelines you use in your work come from?

FM: Mainly from European directives. The first regulations date back to the 1980s, and the current texts in force, establishing thresholds that must not be exceeded and the measuring techniques to be used, date from 2004, 2008 and 2015 respectively. The 2004 text specifically applies to the chemical composition of PM10, and in particular the concentrations of specific heavy metals (arsenic, cadmium, nickel) and organic compounds (benzo[a]pyrene as a tracer of polycyclic aromatic hydrocarbons). All other regulated gaseous and particulate pollutants are covered by the 2008 directive, which was updated by the 2015 text. These regulations determine our actions, but as end users we also play the opposite role: that of participating in the drafting and revision of standards at the European level. The LCSQA provides technical and scientific expertise concerning the application and evolution of these regulations. For example, we’re currently working hard on the technical guidelines that will be used for measuring pesticides. We also play a role in verifying the technical compliance of common instruments as well as innovative ones used to improve real-time measurement, which is essential to obtaining information of high enough quality to take appropriate steps more quickly.

What does this challenge of improving measuring time represent?

FM: Air quality is one of people’s biggest concerns. There is no point in finding out today that we breathed poor quality air last night; the damage has already been done. Quicker information enables us to take preventive action earlier, and therefore be more effective in helping populations to manage the risk of exposure. It also allows us to take action more quickly to address the causes: regulating traffic, working with industry, citizen initiatives, etc. So it’s a big challenge. To rise to this challenge, real-time measurement — provided it is of sufficient quality — is our main focus. In our current system, for a certain number of the pollutants involved, the methodology is based on using an instrument to collect a sample, sending it to a laboratory for analysis, and reporting the results. The idea is to make this measurement chain as short as possible through direct, on-site analysis, with results reported at the time samples are collected, to the extent possible. This is where our role in qualifying devices is meaningful. These new systems have to produce results that meet our needs and are reliable, ideally approaching the level of quality of the current system.

 

[divider style=”normal” top=”20″ bottom=”20″]

A MOOC to learn all about air pollution

On October 8, IMT launched a MOOC dedicated to air quality, drawing on the expertise of IMT Lille Douai. It presents the main air pollutants and their origins, whether man-made or natural, and provides an overview of the health-related, environmental and economic impacts of air pollution.

[divider style=”normal” top=”20″ bottom=”20″]

internet

Left out of the digital revolution?

Dominique Pasquier, Télécom ParisTech, Institut Mines-Télécom (IMT)

This text is published as part of the “Digital Society” column written by researchers from the Economic and Social Sciences department at Télécom ParisTech, members of the Interdisciplinary Institute for Innovation (CNRS).

[divider style=”dotted” top=”20″ bottom=”20″]

[dropcap]S[/dropcap]ome revolutions are silent ones. Internet integration in working-class areas is one such example.

Where do we stand in terms of the “digital divide”?

In the early 2000s there was a lot of talk about the “digital divide”, focusing on inequality in terms of both access and uses. Annual reports by CREDOC have shown that over the past ten years working-class categories have started to catch up in terms of internet access: in France, between 2006 and 2017, the proportion of employees with internet access at home increased from 51% to 93%, while for blue-collar workers it rose from 38% to 83% (CREDOC 2017: 48).

Age, rather than income or level of education, is now the most decisive factor (8 out of 10 of those who never use the internet are 60 or older). But these same reports show that while the divide is closing in terms of internet access, internet use among the working and lower classes remains less varied and less frequent than among the middle and upper classes. Individuals without college degrees find it harder to adjust to paperless administrative services, do less research, buy less online, and only very rarely produce content. In short, there appears to be a kind of “working-class internet” that is less creative, less daring and, to a certain extent, less useful.

Shifting the focus

Perhaps the question should be looked at from a different angle. These statistical surveys are based on tallying up and lamenting shortcomings in comparison to the most innovative internet practices, meaning those of young, educated urban populations. The issue could be examined from a different perspective by starting out with the idea that the internet practices favored by the working and lower classes make sense in light of their everyday needs and that they are important indicators for understanding their relationship with the world and possible transformations in this relationship.

As Jacques Rancière explained in his analysis of written productions of workers in the 19th century, it is a matter of setting the equality of intelligences as a starting point for discussing and understanding how “a common language appropriated by others” can be re-appropriated by those for whom it was not intended (Rancière 2009: 152).

This shift in focus would make it possible to perceive uses that may not stand out as remarkable, but which have dramatically transformed the relationship with knowledge and learning among those with low levels of education. Examples include looking up words used by doctors or words found in their children’s homework. For sophisticated internet users, these uses may seem rather basic, but they nonetheless bring about profound changes by making relationships with experts less asymmetrical and by reducing the “deferential attitude” among the working classes that Annette Lareau analyzes in her excellent book, Unequal Childhoods (2011).

Online research: learning and shopping

Employees with lower-paying jobs who do not use digital technology for work also spend a great deal of time online to inform themselves about their profession or their rights: the success of websites for caregivers is a good example. Childcare workers discuss their views on bringing up children and hospital service workers talk about their relationships with patients. It is also important to note the ways in which tutorials have revived interest in expertise traditionally held by the working class: novel cooking ingredients, new gardening or DIY methods and never-seen-before knitting patterns have made their way into homes.

Working-class families thus use the internet to learn, but they also use it to make purchases. For people who live in rural or semi-rural areas, being able to access goods that had previously been impossible to find nearby in just a few clicks would appear to represent a huge opportunity. But in reality, it is a bit more complicated. Online marketplaces are appreciated less for their choice than for the savings they provide through browsing for special offers. Looking for deals is the main source of motivation, along with the fact that buying in batches makes it possible to manage household supplies. At the same time, these savings play a role in weakening local business – or what remains of it.

In communities where people know each other well, shopkeepers are also neighbors – and sometimes friends – and betrayal can leave both sides feeling bitter. On the other hand, marketplaces for secondhand goods, such as Le Bon Coin, which has attracted a significant number of rural and working-class customers, are described as virtuous markets: they allow people to enjoy leisurely browsing by geographic location (this has become a new source of gossip!) and provide an alternative to throwing away used goods. These marketplaces also give people the opportunity to earn a little money and help those who buy goods maintain a sense of pride, since they are able to furnish their homes or buy clothes at a low cost without resorting to donation systems. Online shopping has therefore led to paradoxical changes in relationships with the local community, destroying certain ties while creating others.

Using the internet for reading and communication

The internet can be described as a relationship with writing – the mark of those who have created it. This can be difficult for individuals with low levels of education who do not write much in their work. Email, which requires standardized writing, is largely underused in these working-class families: it is only used to communicate with e-commerce websites or administrative services, and is often related to a problem in the latter case.

This is also because email is based on an interpersonal, asynchronous communication rationale that goes against the standards of face-to-face relationships and group discussion that are so prevalent in working-class communities. Facebook has fared much better: it allows users to exchange content by sharing links, is in keeping with a group discussion system and does not require formal writing of any kind. This social network appears to be a good way to stay in touch with extended family and close friends, while seeking consensus on shared values. It is a self-segregating network which is not particularly open to other social worlds.

While the internet has kept many of its promises in terms of our relationship with knowledge, it clearly has not managed to break down the barriers between different social worlds.

[divider style=”dotted” top=”20″ bottom=”20″]

Dominique Pasquier, sociologist, research director at CNRS and author of L’Internet des familles modestes. Enquête dans la France rurale (The Internet of Low-Income Families. A Survey in Rural France), Paris, Presses des Mines, 2018.

Dominique Pasquier, sociologist, research director at CNRS, member of the Interdisciplinary Institute for Innovation (i3), Télécom ParisTech, Institut Mines-Télécom.

The original version of this article was published in French on The Conversation France.


IIoT

What is the Industrial Internet of Things (IIoT)?

Industry and civil society do not share the same expectations when it comes to connected objects. The Internet of Things (IoT) must therefore adapt to meet industrial demands. These specific adaptations have led to the emergence of a new field: IIoT, or the Industrial Internet of Things. Nicolas Montavont, a researcher at IMT Atlantique, describes the industrial stakes that justify the specific nature of the IIoT and the challenges currently facing the scientific community.

 

What does the IIoT look like in specific terms?

Nicolas Montavont: One of the easiest examples to present and understand is the monitoring of production lines. Sensors ensure that a product is manufactured under good conditions, by checking what travels down the conveyor belt and by measuring the temperature, humidity or light levels of the work environment. Actuators can then respond to the data received, for example by reconfiguring a production line based on the environment or context, allowing a machine to perform a different task.
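As a purely illustrative sketch of that sensor-to-actuator loop (the quantities, thresholds and action are invented for the example), a production-line monitor might look something like this:

```python
# Toy illustration of a production-line monitoring loop: sensors report
# environmental readings, and an actuator reconfigures the line when a
# reading leaves its (hypothetical) acceptable range.

LIMITS = {"temperature_C": (15.0, 30.0), "humidity_pct": (30.0, 60.0)}

def check(readings):
    """Return the list of quantities that are out of range."""
    return [name for name, value in readings.items()
            if not (LIMITS[name][0] <= value <= LIMITS[name][1])]

def control_step(readings, reconfigure):
    """One pass of the loop: act on the line if the environment drifts."""
    faults = check(readings)
    if faults:
        reconfigure(faults)   # e.g. slow the conveyor, switch a machine's task
    return faults

control_step({"temperature_C": 34.2, "humidity_pct": 45.0},
             reconfigure=lambda f: print("reconfiguring line:", f))
```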

How does the IIoT benefit companies?

NM: There are benefits in every area: production times, line performance, cost reduction, etc. One major benefit is increased flexibility thanks to a more autonomous system. Production lines can operate and adapt with fewer human interventions. Staff can therefore transition from a role of handling and management to supervision and control. This change especially benefits small businesses. Today, production is very focused on large volumes. Increased flexibility and autonomy let companies find more cost-effective ways of manufacturing small quantities.

What justifies referring to IIoT as a separate field, distinct from the mainstream IoT?

NM: Mainstream IoT technologies are not designed to meet industry requirements. In general, consumer IoT applications do not have very high performance requirements: communicating objects send non-critical data packets without strict time constraints. The opposite is true of industrial demands, which require object networks that send important data with the lowest possible latency. Specific IoT standards must therefore be developed for the industrial sector, hence the name IIoT. Companies also do not want to be locked into proprietary standards, and so they are pushing for the Internet to become the network that supports their architectures.

Why do companies have more performance constraints for their networks of communicating objects?

NM: One scenario that clearly represents industrial constraints is the one we selected for the SCHEIF project, in the context of the German-French Academy for the Industry of the Future (GFA). We initiated a collaboration with the Technical University of Munich (TUM) on the quality of the network and data in an industrial environment. We started with a scenario featuring a set of robots that move autonomously through a work environment. They can accomplish specific tasks and can also detect and adapt to environmental changes. For example, if a person walks through the area, they must not be hit by the robots. This scenario includes a major safety aspect, which demands an efficient network, low latency, good-quality data communications and an effective assessment of the state of the environment.

What are the scientific challenges of this type of example?

NM: First of all, locating robots indoors in real time presents a challenge. Technologies exist but are not yet perfect and do not offer sufficient performance. Secondly, we need to make sure the robots exchange monitoring data in an appropriate manner, by prioritizing the information. The main problem is, “who needs to send what, and when?” We are working on how to schedule the communications and how to represent the robots’ knowledge of their environment. We also have energy consumption constraints, in terms of both the hardware and the network. Finally, there is a significant cybersecurity aspect, which has become a major focus area for the scientific community.

digital simulation

What is digital simulation?

Digital simulation has become an almost mandatory step in developing new products. But what does “simulating behavior” or “modeling an event” really mean? Marius Preda, a computer science researcher at Télécom SudParis, explains what’s hiding behind these common industry expressions.

 

What is digital simulation used for?

Marius Preda: Its main goal is to reduce prototyping costs for manufacturers. Instead of testing a product with real prototypes, which are expensive, companies use fully digital twins of these prototypes. These virtual twins take the form of a 3D model that has all the same attributes as the real product – colors, dimensions, visual appearance – and, most importantly, into which a great quantity of metadata is injected, such as the physical properties of the materials. This makes it possible to create a simulation that is very close to reality. The obvious advantage is that if the product isn’t right, the metadata can simply be changed, or the attributes of the digital twin can be directly modified. With a real prototype, it would have to be entirely remade.
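A very simplified way to picture such a virtual twin is as a 3D model enriched with metadata. The sketch below, with invented fields and values, shows how trying out a design change means editing data rather than rebuilding a physical prototype:

```python
from dataclasses import dataclass, field

@dataclass
class Material:
    name: str
    density_kg_m3: float
    youngs_modulus_gpa: float   # stiffness, used by deformation simulations

@dataclass
class DigitalTwin:
    mesh_file: str                       # the 3D geometry (shape, dimensions, appearance)
    materials: dict[str, Material] = field(default_factory=dict)
    metadata: dict[str, float] = field(default_factory=dict)

# A toy car-body twin: tweaking the prototype is just editing its data.
twin = DigitalTwin(
    mesh_file="car_body.stl",
    materials={"body": Material("steel", 7850.0, 210.0)},
    metadata={"mass_kg": 310.0},
)
twin.materials["body"] = Material("aluminium", 2700.0, 69.0)  # try a lighter variant
print(twin.materials["body"])
```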

What can be simulated?

MP: The main focus is on production. Companies run simulations in order to accurately measure all the parameters of a part and obtain production specifications; that accounts for a large share of the uses of digital simulation. After that, there are significant concerns about aging. The physical laws that determine how materials wear are well known, so companies inject them into digital models in order to simulate how a part will wear as it is used. One of the newer applications is predictive maintenance: simulations can be used to predict breakage or faults in order to determine the optimal moment at which a part should be repaired or replaced. All of that relates to products, but there are also simulations of whole factories to model their operations, as well as simulations of the human body.

Read on I’MTech: A digital twin of the aorta to help prevent aneurysm rupture

How is a digital simulation created?

MP: The first step is defining the goal of the simulation. Take a car, for example: if the goal is to study how the car body deforms during an impact, the modeling will be different than if the goal were to analyze visual and acoustic comfort inside the passenger compartment. So the modeling is carried out based on the aim: automobile manufacturers don’t create a 3D model with the idea that they’ll be able to use it for all simulations. The 3D form may be the same, but what matters are the physical properties included within the model. For crash-test simulations, properties related to the way materials deform are injected into the model in the form of equations that govern their behavior. For acoustic comfort, the laws of sound reflection and propagation are included.

What form do simulations take?

MP: Virtual reality is often presented as something new, but manufacturers have been using it for years for simulations! In the past, they would create 3D environments called “caves,” which were rooms in which different parts of a car – to continue with our automobile example – were projected on the walls. Today, virtual reality headsets make it possible to save space and put more people in the same virtual environment. But beyond this highly visual form of simulation, what industry professionals are really interested in is the model and the results behind it. What matters isn’t really seeing how a car deforms in an accident, but knowing by how many centimeters the engine block penetrates into the passenger compartment. And sometimes there isn’t even a visual: the simulation takes the form of a curve on a graph showing how material deformation depends on the speed of the car.

What sectors use digital simulations the most?  

MP: I talk about the automobile industry a lot since it’s one of the first to have used digital simulations. Architects were also among the first to use 3D to visualize models. And factories and relatively complex industrial facilities rely on simulation too. Among other things, it allows them to analyze the piping systems behind the walls. It’s a way to access information more easily than with plans. On the other hand, there are sectors, such as construction and civil engineering, where simulation is under-utilized and plans still play a central role.

What are some major ways digital simulation could evolve in the near future?

MP: In my opinion, interaction between humans and 3D models represents a big challenge. New devices like virtual reality glasses are being used, but the way people interact with the model remains unnatural. Yes, from within a virtual space, users can change how rooms are arranged with a wave of the hand. But if they want to change the physical parameters behind a material’s behavior, they still have to use a computer to introduce raw data in coded form. It would be a major advance to be able to directly change these metadata from within the virtual environment.