
Véronique Bellon-Maurel: from infrared spectroscopy to digital agriculture

Measuring and quantifying have informed Véronique Bellon-Maurel’s entire scientific career. A pioneer in near infrared spectroscopy, the researcher’s work has ranged from analyzing fruit to digital agriculture. Over the course of her fundamental research, Véronique Bellon-Maurel has contributed to the optimization of many industrial processes. She is now the Director of #DigitAg, a multi-partner Convergence Lab, and is the winner of the 2019 IMT-Académie des Sciences Grand Prix. In this wide-ranging interview, she retraces the major steps of her career and discusses her seminal work.   

 

You began your research career by working with fruit. What did this research involve?

Véronique Bellon-Maurel: My thesis dealt with the issue of measuring the taste of fruit in sorting facilities. I had to meet industrial requirements, particularly in terms of speed: three pieces of fruit per second! The best approach was to use near infrared spectroscopy to measure the sugar level, which is indicative of taste. But when I was beginning my thesis in the late 1980s, it took spectrometers one to two minutes to scan a piece of fruit. I suggested working with very near infrared, meaning a different type of radiation than the infrared that had been used up to then, which made it possible to use new types of detectors that were very fast and inexpensive.

So that’s when you started working on near infrared spectroscopy (NIRS), which went on to become your specialization. Could you tell us what’s behind this technique with such a complex name?

VBM: Near infrared spectroscopy (NIRS) is a method for analyzing materials. It provides a simple way to obtain information about the chemical and physical characteristics of an object by illuminating it with infrared light, which passes through the object and picks up information along the way. For example, when you place your finger on your phone’s flashlight, you’ll see a red light shining through it. This light is red because the hemoglobin has absorbed all the other colors of the original light. So this gives you information about the material the light has passed through. NIRS works the same way, except that we use radiation with wavelengths located just beyond the visible spectrum.

Out of all the methods for analyzing materials, what makes NIRS unique?

VBM: Near infrared waves pass through materials easily, much more easily than “traditional” infrared waves, which are called “mid-infrared.” They are produced by simple sources such as sunlight or halogen lamps. The technique is therefore readily available and is not harmful: it is even used on babies’ skulls to assess the oxygen saturation of their brains! But when I was starting my career, NIRS had major drawbacks. The signal we obtain is extremely cluttered because it contains information about both the physical and chemical components of the object.

And what is hiding behind this “cluttered signal”?

VBM: In concrete terms, you obtain hill-shaped curves and the shape of these curves depends on both the object’s chemical composition and its physical characteristics. You’ll get a huge hill that is characteristic of water. And the signature peak of sugar, which allows you to calculate a fruit’s sugar level, is hidden behind it. That’s the chemical component of the spectrum obtained. But the size of the hills also depends on the physical characteristics of your material, such as the size of the particles or cells that make it up, physical interfaces — cell walls, corpuscles — the presence of air etc. Extracting solely the information we’re interested in is a real challenge!

Near infrared spectra of apples.

 

One of your earliest significant findings for NIRS was precisely that – separating the physical component from the chemical component on a spectrum. How did you do that?

VBM: The main issue at the beginning was to get away from the physical component, which can be quite a nuisance. For example, light passes through water, but not the foam in the water, which we see as white, even though they are the same molecules! Depending on whether or not the light passes through foam, the observation — and therefore the spectrum — will change completely. Fabien Chauchard was the first PhD student with whom I worked on this problem. To better understand this optical phenomenon, called scattering, he went to the Lund Laser Center in Sweden. They have highly specialized time-of-flight cameras, which operate at very high speed and are able to capture photons “in flight.” We send photons onto a fruit over an extremely short period of time and recover them as they come out, since not all of them come out at the same time. In our experiments, with a transmitter and a receiver placed 6 millimeters apart on a fruit, some photons had travelled over 20 centimeters by the time they came out! They had been reflected, refracted, diffracted and so on inside the fruit. They hadn’t travelled in a straight line at all. This gave rise to an innovation, spatially resolved spectroscopy (SRS), developed by the Indatech company that Fabien Chauchard started after completing his PhD.

We looked for other optical arrangements for separating the “chemical” component from the “physical” component. Another PhD student, Alexia Gobrecht, with whom I worked on soil, came up with the idea of using polarized near infrared light. If the photons penetrate the soil, they lose their polarization. Those that have only travelled along the surface conserve it. By differentiating between the two, we recover spectra that depend only on the chemical component. This research on separating chemical and physical components continued in the laboratory, even after I stopped working on it. Today, my colleagues are very good at identifying aspects that have to do with the physical component of the spectrum and those that have to do with the chemical component. And it turns out that this physical component is useful! And to think that twenty years ago, our main focus was to get rid of it.
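Chemometric preprocessing is the numerical counterpart to these optical tricks for taming the physical component. As a minimal sketch, using a synthetic spectrum rather than real apple data, the standard normal variate (SNV) transform, a classic tool in NIRS chemometrics, removes additive offsets and multiplicative scattering effects by centering and scaling each spectrum individually:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum
    individually to remove baseline offsets and multiplicative
    scattering effects (the 'physical' part of the signal)."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two synthetic "apple" spectra with identical chemistry but different
# scattering: a multiplicative gain and an additive offset on the second.
wavelengths = np.linspace(900, 1700, 50)               # nm, illustrative
chemistry = np.exp(-((wavelengths - 1450) / 80) ** 2)  # water-like band
s1 = chemistry
s2 = 1.8 * chemistry + 0.3                             # scattering-distorted copy

corrected = snv(np.vstack([s1, s2]))
# After SNV, the two spectra coincide: only the chemical shape remains
print(np.allclose(corrected[0], corrected[1]))
```

Because scattering often acts approximately as a gain plus an offset on the whole spectrum, SNV makes two spectra of the same chemistry coincide; the polarization and SRS approaches described in the interview tackle the same problem optically rather than numerically.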

After this research, you transitioned from studying fruit to studying waste. Why did you change your area of application?

VBM: I’d been working with the company Pellenc SA on sorting fruit since around 1995, and then on detectors for grape ripeness. Over time, Pellenc moved into waste characterization for sorting purposes, building on the infrared knowledge developed through sorting fruit. They therefore called on us, with a new speed requirement, but this one was much tougher: a conveyor belt moves at a speed of several meters per second. In reality, the areas of application for my research were already varied. In 1994, while I was still working on fruit with Pellenc, I was also carrying out projects on biodegradable plastics. NIRS made it possible to provide quality measurements for a wide range of industrial processes. I was Ms. “Infrared sensors!”

 

“I was Ms. ‘Infrared sensors’!”
– Véronique Bellon-Maurel

 

Your work on plastics was among the first in the scientific community concerning biodegradability. What were your contributions in this area?

VBM: In 1990, biodegradable plastics were in their very early days. Our question was whether we could measure a plastic’s biodegradability in order to say for sure, “this plastic is truly biodegradable.” And to do so as quickly as possible, so why not use NIRS? But first, we had to define the notion of biodegradability, with a laboratory test. For 40 days, the plastics were put in reactors in contact with microorganisms, and we measured their degradation. We were also trying to determine whether this test was representative of biodegradability in real conditions, in the soil. We buried hundreds of samples in different plots of land in various regions and dug them up every six months to compare real-world biodegradation with biodegradation in the laboratory. We wanted to find out if the NIRS measurement could achieve the same result, namely estimating the degradation kinetics of a biodegradable plastic – and it worked. Ultimately, this benchmark research on the biodegradability of plastics contributed to the industrial production and deployment of the biodegradable plastics now found in supermarkets.
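Degradation kinetics like those mentioned here are often summarized with a simple first-order model. The sketch below is illustrative only, with an invented rate constant and mock reactor data, not the team’s actual test protocol or results:

```python
import numpy as np

def fit_first_order(t, degraded, d_max=100.0):
    """Estimate the rate constant k of D(t) = d_max * (1 - exp(-k * t))
    by linearizing: ln(1 - D/d_max) = -k * t, then fitting the slope
    through the origin by least squares."""
    t = np.asarray(t, dtype=float)
    y = np.log(1.0 - np.asarray(degraded, dtype=float) / d_max)
    k = -np.sum(t * y) / np.sum(t * t)
    return k

# Mock 40-day reactor series (percent degraded), generated with k = 0.05/day
t = np.arange(1, 41, dtype=float)
degraded = 100.0 * (1.0 - np.exp(-0.05 * t))

k = fit_first_order(t, degraded)
print(round(k, 3))  # recovers the rate constant: 0.05 per day
```

Comparing a rate constant fitted from lab reactors with one fitted from buried samples is one simple way to check whether a laboratory test tracks real-world degradation.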

For that research, was your focus still on NIRS?

VBM: The crux of my research at that time was the rapid, non-destructive characterization — physical or chemical — of products. NIRS was a good tool for this. We used it again after that on dehydrated household waste in order to assess the anaerobic digestion potential of waste. With the laboratory of environmental biotechnology in Narbonne, and IMT Mines Alès, we developed a “flash” method to quickly determine the quantity of bio-methane that waste can release, using NIRS. This research was subsequently transferred to the Ondalys company, created by Sylvie Roussel, one of my former PhD students. My colleague Jean-Michel Roger is still working with them to do the same thing with raw waste, which is more difficult.

So you gradually moved from the agri-food industry to environmental issues?

VBM: I did, but it wasn’t just a matter of switching topics, it also involved a higher degree of complexity. In fruit, composition is restricted by genetics – each component can vary within a known range. With waste, that isn’t the case! This made environmental metrology more interesting than metrology for the food industry. And my work became even more complex when I started working on the topic of soil. I wondered whether it would be possible to easily measure the carbon content in soil. This took me to Australia, to a specialized laboratory at the University of Sydney. To my mind, all this different research is based on the same philosophy: if you want to improve something, you have to measure it!

So you no longer worked with NIRS after that time? 

VBM: A little less, since I moved from sensors to assessment. But even that was a sort of continuation: when sensors were no longer enough, how could we make measurements? We had to develop assessment methods. It’s all very well to measure the biodegradability of a plastic, but is that enough to determine whether that biodegradable plastic has a low environmental impact? No, it isn’t – the entire system must be analyzed. I started working on life-cycle assessment (LCA) in Australia after realizing that LCA methods were not suited to agriculture: they did not account for water or for land use. Based on this observation, we improved the LCA framework to develop the concept of a regional LCA, which didn’t exist at the time, allowing us to make an environmental assessment of a region and compare scenarios for how that region might evolve. What I found really interesting in this work was determining how to use data from information systems and sensors to build a model that is as reliable and reproducible as possible. I wanted the assessments to be as accurate as possible. This is what led me to my current field of research – digital agriculture.

Read more on I’MTech: The many layers of our environmental impact

In 2017 you founded #DigitAg, an institute dedicated to this topic. What research is carried out there?

VBM: The “Agriculture – Innovation 2025” report submitted to the French government in 2015 expressed the need to structure French research on digital agriculture. We took advantage of the opportunity offered by the call to create Convergence Labs by founding #DigitAg, the Digital Agriculture Convergence Lab. It’s one of ten institutes funded by the Investments in the Future program. All of these institutes were created in order to carry out interdisciplinary research on a major emerging issue. At #DigitAg, we draw on engineering sciences, digital technology, biology, agronomy, economics, social sciences, humanities, management and so on. Our aim is to establish knowledge bases to ensure that digital agriculture develops in a harmonious way. The challenge is to develop technologies, but also to anticipate how they will be used and how such uses will transform agriculture: we have to predict the impacts technologies will have in order to encourage ethical uses and prevent misuse. To this end, I’ve also set up a living lab, Occitanum — for Occitanie Digital Agroecology — set to start in mid-2020. The lab will bring together stakeholders to assess the use value of different technologies and understand innovation processes. It’s a different way of carrying out research and innovation, by incorporating the human dimension.


Do mobile apps for kids respect privacy rights?

The number of mobile applications for children is rapidly increasing. An entire market segment is taking shape to reach this target audience. Just as for adults, the issue of personal data arises for these younger audiences. Grazia Cecere, a researcher in the economics of privacy at Institut Mines-Télécom Business School, has studied the risk of infringing on children’s privacy rights. In this interview, she shares the findings from her research.

 

Why specifically study mobile applications for children?

Grazia Cecere: A report from the NGO Common Sense reveals that 98% of children under the age of 8 in the United States use a mobile device, spending an average of 48 minutes per day on it. That is huge, and digital stakeholders have understood this: they have developed a market specifically for kids. As a continuation of my research on the economics of privacy, I asked myself how the concept of personal data protection applied to this market. Several years ago, along with international researchers, I launched a project dedicated to these issues. The project was also made possible by the funding of Vincent Lefrere’s thesis within the framework of the Futur & Ruptures program.

Do platforms consider children’s personal data differently than that of adults?

GC: First of all, children have a special status under Europe’s General Data Protection Regulation (GDPR). In the United States, specific legislation exists: COPPA (Children’s Online Privacy Protection Act). The FTC (Federal Trade Commission) handles all privacy issues related to users of digital services and pays close attention to children’s rights. As far as the platforms are concerned, Google Play and the App Store both have Family and Children categories for children’s applications. Both Google and Apple have expressed their intention to separate these applications from those designed for adults or teens and to ensure better privacy protection for the apps in these categories. For an app to be included in one of these categories, the developer must certify that it adheres to certain rules.

Is this really the case? Do apps in children’s categories respect privacy rights more than other applications?

GC: We conducted research to answer that question. We collected data from Google Play on over 10,000 mobile applications for children, both within and outside the category. Some apps choose not to certify and instead use keywords to target children. We checked whether each app collects telephone numbers, location and usage data, and whether it accesses other information on the telephone. We then compared the different apps. Our results showed that, on average, the applications in the children’s category collect less personal data and respect users’ privacy more than those targeting the same audience outside the category. We can therefore conclude that, on average, the platforms’ categories specifically dedicated to children reduce the collection of data. On the other hand, our study also showed that a substantial portion of the apps in these categories collect sensitive data.
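At its core, the comparison described above boils down to computing data-collection rates per category. The records and figures below are invented for illustration, not data from the actual study:

```python
from collections import defaultdict

# Mock records: (in_children_category, collects_sensitive_data)
apps = [
    (True, False), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

# category -> [number of apps, number collecting sensitive data]
totals = defaultdict(lambda: [0, 0])
for in_category, collects in apps:
    totals[in_category][0] += 1
    totals[in_category][1] += int(collects)

rates = {cat: n_collect / n for cat, (n, n_collect) in totals.items()}
# In-category apps collect less on average, yet the rate is not zero:
print(rates[True] < rates[False], rates[True] > 0)
```

This mirrors the study’s two-sided finding: the children’s category reduces collection on average, while a non-trivial share of in-category apps still collect sensitive data.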

Do all developers play by the rules when it comes to protecting children’s personal data?

GC: App markets ask developers to provide their location. Based on this geographical data, we examined whether an application’s country of origin influenced its degree of respect for users’ privacy. We demonstrated that a developer located in a country with strong personal data regulations—such as the EU, the United States or Canada—generally respects user privacy more than a developer based in a country with weak regulations. In addition, developers who choose not to provide their location are generally those who collect more sensitive data.

Are these results surprising?

GC: In a sense, yes, because we expected the app market to play a role in ensuring respect for personal data. These results raise the question of the extra-territorial scope of the GDPR, for example. In theory, whether an application is developed in France or in India, if it is marketed in Europe, it must respect the GDPR. However, our results show that for countries with weak regulations, the weight of the legislation in the destination market is not enough to change developers’ local practices. I must emphasize that offering an app in all countries is extremely easy—it is even encouraged by the platforms—which makes it all the more important to pay special attention to this issue.

What does this mean for children’s privacy rights?

GC: The developers are the owners of the data. Once personal data is collected by the app, it is sent to the developer’s servers, generally in the country where they are located. The fact that foreign developers pay less attention to protecting users’ privacy means that the processing of this data is probably also less respectful of this principle.

 


Ioan-Mihai Miron: Magnetism and Memory

Ioan-Mihai Miron’s research in spintronics focuses on new magnetic systems for storing information. The research carried out at the Spintec laboratory in Grenoble is still young, having begun in 2011, yet it already shows major potential for addressing the limits that computer memory technology is now facing. It also offers a solution to the problems that have so far prevented the industrial development of magnetic memories. Ioan-Mihai Miron received the 2018 IMT-Académie des Sciences Young Scientist Award for his groundbreaking and promising research.

 

Ioan-Mihai Miron’s research is a matter of memory… and a little architecture too. When presenting his work on the design of new nanostructures for storing information, the researcher from Spintec* uses a three-level pyramid diagram. The base represents broad and robust mass memory. Its large size enables it to store large amounts of information, but it is difficult to access. The second level is the central memory, which is not as big but faster to access. It includes the information required to launch programs. Finally, the top of the pyramid is cache memory, which is much smaller but more easily accessible. “The processor only works with this cache memory,” the researcher explains. “The rest of the computer system is there to retrieve information lower down in the pyramid as fast as possible and bring it up to the top.”

Of course, computers do not actually contain pyramids. In microelectronics, this memory architecture takes the form of thousands of microscopic transistors that are responsible for the central and cache memories. They work as switches, storing the information in binary format and either letting the current circulate or blocking it. With the commercial demand for miniaturization, transistors have gradually reached their limit. “The smaller the transistor, the greater the stand-by consumption,” Ioan-Mihai Miron explains. This is why the goal is now for the types of memory located at the top of the pyramid to rely on new technologies based on storing information at the electronic level. By modifying the current sent into magnetic material, the magnetization can be altered at certain points. “The material’s electrical resistance will be different based on this magnetization, meaning information is being stored,” Ioan-Mihai Miron explains. In simpler terms, a high electrical resistance corresponds to one value, a low resistance to another, which forms a binary system.
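The resistance-based binary encoding described above can be caricatured in a few lines. The resistance values and threshold below are invented for illustration, not real device figures:

```python
# Reading a magnetic memory cell: high vs low electrical resistance
# encodes one bit. Values (in ohms) are illustrative only.
LOW_R, HIGH_R = 1_000, 2_000        # two magnetization states of a cell
THRESHOLD = (LOW_R + HIGH_R) / 2    # midpoint decision threshold

def read_bit(resistance_ohms):
    """Map a measured cell resistance to the stored bit."""
    return 1 if resistance_ohms > THRESHOLD else 0

# Measured resistances of four cells, with a little device-to-device spread
cells = [1_050, 1_980, 990, 2_020]
bits = [read_bit(r) for r in cells]
print(bits)  # [0, 1, 0, 1]
```

Real read circuits compare the cell against a reference rather than a fixed constant, but the principle is the same: two distinguishable resistance levels form a binary system.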

In practical terms, information is written in these magnetic materials by sending two perpendicular currents, one from above and one from below the material. The point of intersection is where the magnetization is modified. While this principle is not new, it is still not used for cache memory in commercial products. Pairing magnetic technologies with this type of data storage has remained a major industrial challenge for almost 20 years. “Memory capacities are still too low in comparison with transistors, and miniaturizing the system is complicated,” the researcher explains. These two disadvantages are not offset by the energy savings that the technology offers.

To compensate for these limitations, the scientific community has developed a simplified geometry of these magnetic architectures. “Rather than intersecting two currents, a new approach has been to only send a single linear path of current into the material,” Ioan-Mihai Miron explains. “But while this technique solved the miniaturization and memory capacity problems, it created others.” In particular, writing the information involves applying a strong electric current that could damage the element where the information is stored. “As a result, the writing speed is not sufficient. At 5 nanoseconds, it is slower than the latest generations of transistor-based memory technology.”

Electrical geometry

In the early 2010s, Ioan-Mihai Miron’s research opened up major prospects for solving all these problems. By slightly modifying the geometry of the magnetic structures, he demonstrated the possibility of writing in under a nanosecond. And at the same size, the memory capacity is greater. The principle is based on sending a current into a plane parallel to the layers of the magnetized material, whereas previously the current had been perpendicular. This difference makes the change in magnetization faster and more precise. The technology developed by Ioan-Mihai Miron offers still more benefits: less wear on the elements and the elimination of writing errors. It is called SOT-MRAM, for Spin-Orbit Torque Magnetic Random Access Memory. This technical name reflects the complexity of the effects at work in the electron layers of magnetic materials exposed to interacting electrical currents.

The nanostructures developed by Ioan-Mihai Miron and his team are opening new prospects for magnetic memories.

 

The progressive developments of magnetic memories may appear minimal. At first glance, a transition from two perpendicular currents to one linear current to save a few nanoseconds seems to be only a minor advance. However, the resulting changes in performance offer considerable opportunities for industrial actors. “SOT-MRAM has only been in existence since 2011, yet all the major microelectronics businesses already have R&D programs on this technology that is fresh out of the laboratory,” says Ioan-Mihai Miron. SOT-MRAM is perceived as the technology that is able to bring magnetic technologies to the cache memory playing field.

The winner of the 2018 IMT-Académie des Sciences Young Scientist award seeks to remain realistic regarding the industrial sector’s expectations for SOT-MRAM. “Transistor-based memories are continuing to improve at the same time and have recently made significant progress,” he notes. Not to mention that these technologies have been mature for decades, whereas SOT-MRAM has not yet passed the ten-year milestone of research and refinement. According to Ioan-Mihai Miron, this technology should not be seen as a total break with previous technology, but as an alternative that is gradually gaining ground, albeit rapidly and with significant competitive opportunities.

But there are still steps to be taken to optimize SOT-MRAM and integrate it into our computer products. These steps may take a few years. In the meantime, Ioan-Mihai Miron is continuing his research on memory architectures, while increasingly entrusting SOT-MRAM to those best suited to transferring it to society. “I prefer to look elsewhere rather than working to improve this technology. What interests me is discovering new capacities for storing information, and these discoveries happen a bit by chance. I therefore want to try other things to see what happens.”

*Spintec is a joint research unit of CNRS, CEA and Université Grenoble Alpes.

[author title=”Ioan-Mihai Miron: a young expert in memory technology” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/11/mihai.png”]

Ioan-Mihai Miron is a researcher at the Spintec laboratory in Grenoble. His major contribution is the discovery of magnetization reversal caused by spin-orbit coupling. This discovery offers significant potential for reducing energy consumption and increasing the reliability of MRAM, a new type of non-volatile memory that is compatible with the rapid development of the latest computing processors. This new memory should eventually come to replace SRAM memories alongside processors.

Ioan-Mihai Miron is considered a world expert, as shown by the numerous citations of his publications (over 3,000 citations in a very short period of time). In 2014 he was awarded the ERC Starting Grant. His research has also led to several patents and contributed to creating the company Antaios, which won the Grand Prix in the I-Lab innovative company creation competition in 2016. Fundraising is currently underway, demonstrating the economic and industrial impacts of the work carried out by the winner of the 2018 IMT-Académie des Sciences Young Scientist award.[/author]

OMNI

OMNI: transferring social sciences and humanities to the digital society

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.

[divider style=”normal” top=”20″ bottom=”20″]

Technology transfer also exists in social sciences and the humanities! The OMNI platform in Brittany proves this by placing its research activities at the service of organizations. Attached to the Breton scientific interest group M@rsouin (which IMT Atlantique manages), it brings together researchers and professionals to study the impact of digital technology on society. The relevance of the structure’s approach has earned it a place within the “technology platform” offering of the Télécom & Société Numérique Carnot Institute (see insert at the end of the article). Nicolas Jullien, a researcher in digital economics at IMT Atlantique and manager of OMNI, tells us more about the way organizations and researchers collaborate on topics at the interface between digital technology and society.

 

What is the role of the OMNI platform?

Nicolas Jullien: Structurally, OMNI is attached to the scientific interest group M@rsouin, comprising four universities and graduate schools in Brittany and, recently, three universities in Pays de la Loire*. For the past 15 years, this network has served the regional objective of having a research and study system on ICT, the internet and, more generally, what is today referred to as digital technology. OMNI is the research network’s tool for proposing studies on the impact of digital technology on society. The platform brings together practitioners and researchers and analyzes major questions that public or private organizations may have. It then sets up programs to collect and evaluate information to answer these questions. Depending on the needs, we can carry out questionnaire surveys – quantitative studies – or interview surveys, which are more qualitative. We also guarantee the confidentiality of responses, which is obviously important in the context of the GDPR. It is first and foremost a guarantee of neutrality between the organization that wishes to collect information and the respondents.

So is OMNI a platform for making connections and structuring research?

NJ: Yes. In fact, OMNI has existed for as long as M@rsouin, and corresponds to the part just before the research phase itself. If an organization has questions about digital technology and its impact and wants to work with the researchers at M@rsouin to collect and analyze information to provide answers, it goes through OMNI. We help establish the problem and express or even identify needs. We then investigate whether there is a real interest for research on the issue. If this is the case, we mobilize researchers at M@rsouin to define the questions and the most suitable protocol for the collection of information, and we carry out the collection and analysis.

What scientific skills can you count on?

NJ: M@rsouin has more than 200 researchers in social sciences and humanities. Topics of study range from e-government and e-education to social inclusion, employment, consumption, economic models, and the way organizations and work operate. The disciplines are highly varied and allow us to take a very comprehensive approach to the impact of digital technology on an organization, population or territory. We have researchers in education sciences, ergonomics, cognitive and social psychology, and political science and, of course, economists and sociologists. But we also have disciplines that the general public might expect less, but which are equally important in the study of digital technology and its impacts. These include geography, urban planning, management sciences and legal expertise, which has been closely involved since awareness of the importance of personal data became widespread.

The connection between digital technology and geography may seem surprising. What is a geographer’s contribution, for example, to the question of digital technology?

NJ: One of the questions raised by digital technology is that of access to online resources. Geographers are specifically interested in the relationship between people and their resources and territory. Incorporating geography allows us to study the link between territory and the consumption of digital resources, or even to more radically question the pertinence of physical territory in studies on internet influence. It is also a discipline that allows us to examine certain factors favoring innovation. Can we innovate everywhere in France? What influence does an urban or rural territory have on innovation? These are questions asked in particular by chambers of commerce and industry, regional authorities or organizations such as FrenchTech.

Why do these organizations come to see you? What are they looking for in a partnership with a scientific interest group?

NJ: I would say that these partners are looking for a new perspective. They want new questions, a specialist’s point of view, or an expert assessment in complex areas. By working with researchers, they are forced to define their problem clearly and not necessarily seek answers straight away. We are able to give them the breathing space they need. But we can only do so if our researchers can make proposals and be involved in partners’ problems. We offer services, but we are not a consulting firm: our goal remains to offer the added value of research.

Can you give an example of a partnership?

NJ: In 2010 we began a partnership with SystemGIE, a company which acts as an intermediary between large businesses and small suppliers. It manages the insertion of these suppliers in the purchasing or production procedures of large clients. It is a fairly tricky positioning: it is necessary to understand the strategy of suppliers and large companies and the tools and processes to put in place. We supported SystemGIE in the definition of its atypical economic model. It is a matter of applied research, because we try to understand where the value lies and the role of digital technology in structuring these operators. This is an example of a partnership with a company. But our biggest partner remains the Brittany Regional Council, with which we have just finished a survey on craftspeople, asking: how do craftspeople use digital technology? How does their online presence affect their activity?

How does the Carnot label help OMNI?

NJ: First and foremost it is a recognition of our expertise and relevance for organizations. It also provides better visibility at a national institutional level, allowing us to further partnerships with public organizations across France, as well as providing better visibility among private actors. This will allow us to develop new, nationwide partnerships with companies on the subject of the digitization of society and the industry of the future.

* The members of M@rsouin are: Université de Bretagne Occidentale, Université de Rennes 1, Université de Rennes 2, Université de Bretagne Sud, IMT Atlantique, ENSAI, ESPE de Bretagne, Sciences Po Rennes, Université d’Angers, Université du Mans, Université de Nantes.

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]


What is photomechanics?

How can we measure the deformation of a stratospheric balloon composed only of an envelope a few micrometers thick? It is impossible to attach a sensor to it because this would distort the envelope’s behavior… Photomechanics, which refers to measurement methods using images and computer analysis, makes it possible to measure this deformation or a material’s temperature without making any contact. Jean-José Orteu, a researcher in artificial vision for photomechanics, control and monitoring at IMT Mines Albi, explains the principles behind photomechanical methods, which are used in the aeronautics, automotive and nuclear industries.

 

What is photomechanics?

Jean-José Orteu: We can define photomechanics as the application of optical measurements to experimental mechanics and, more specifically, the study of the behavior of materials and structures. The techniques that have been developed are used to measure materials’ deformation or temperature.

Photomechanics is a relatively young discipline, roughly 30 years old. It draws on around ten different measurement techniques that can be applied at scales ranging from the nanometer to the dimensions of an airplane, and to both static and dynamic systems. Two of these techniques are the most widely used: digital image correlation (DIC) for measuring deformations, and infrared thermography for measuring temperatures.

 

How are these two techniques implemented?

JJO: For DIC, we position one or several cameras in front of a material: a single camera for a planar material undergoing in-plane deformation, several for measuring a three-dimensional material. The cameras film the material as it deforms under mechanical stress and/or heat. Once the images are captured, the material's deformation is calculated from the deformation of the images themselves: if the material is deformed, so is the image. This image deformation is measured using computer processing and mapped back to the material.

This is referred to as the white light method because the material is lit by an incoherent light from standard lighting. Other more complex photomechanical techniques require the use of a laser to light the material: these are referred to as interferometric methods.  They are useful for very fine measurements of displacements in the micrometer or nanometer range.
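The core idea of DIC described above can be illustrated with a minimal sketch: track a small speckle subset from the reference image to the deformed image by maximizing a zero-normalized cross-correlation (ZNCC) score. This is a toy, integer-pixel version under assumed simplifications; real DIC software adds subpixel interpolation, subset shape functions and calibration.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def track_subset(ref, cur, top, left, size=15, search=5):
    """Find the integer-pixel displacement of a square subset of `ref`
    inside `cur` by maximizing ZNCC over a small search window."""
    patch = ref[top:top + size, left:left + size]
    best_score, best_duv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            t, l = top + dv, left + du
            if t < 0 or l < 0 or t + size > cur.shape[0] or l + size > cur.shape[1]:
                continue  # candidate window falls outside the image
            score = zncc(patch, cur[t:t + size, l:l + size])
            if score > best_score:
                best_score, best_duv = score, (dv, du)
    return best_duv  # (row, col) displacement in pixels

# Synthetic demo: a random speckle pattern shifted by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(track_subset(ref, cur, top=20, left=20))  # → (2, 3)
```

Repeating this search for a grid of subsets across the speckled surface yields the full deformation field mentioned later in the interview.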

The second most frequently used technique in photomechanics is infrared thermography, which is used to measure temperatures. This uses the same process as the DIC technique, with the initial acquisition of infrared images followed by the computer processing of these images to determine the temperature of the observed material. Calculating a temperature using an image is no easy task. The material’s thermo-optical properties must be taken into account as well as the measuring environment.
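The temperature calculation JJO mentions can be illustrated with a toy graybody model: invert Planck's law at a single wavelength, assuming a known emissivity. This is a deliberately simplified sketch; real thermographic cameras integrate over a spectral band and must also correct for reflected and atmospheric radiation.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(T, wavelength):
    """Spectral radiance of a blackbody at temperature T (K), wavelength in m."""
    a = 2 * H * C**2 / wavelength**5
    return a / math.expm1(H * C / (wavelength * KB * T))

def temperature_from_radiance(L, wavelength, emissivity=1.0):
    """Invert Planck's law for a graybody of known emissivity:
    L = emissivity * planck_radiance(T) solved for T."""
    a = 2 * H * C**2 / wavelength**5
    return H * C / (wavelength * KB * math.log1p(emissivity * a / L))

lam = 10e-6  # 10 µm, in the long-wave infrared band
L = 0.8 * planck_radiance(350.0, lam)  # simulated reading, emissivity 0.8
print(round(temperature_from_radiance(L, lam, emissivity=0.8), 1))  # → 350.0
```

The dependence on emissivity is exactly why the interview stresses that the material's thermo-optical properties and the measuring environment must be taken into account.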

With all of these techniques, we can analyze the dynamic evolution of the distortion or temperature. The material is therefore analyzed both spatially and temporally.


Stereo-DIC measurement of the deformation field of a sheet of metal shaped using incremental forming

 

What type of camera is used for these measurement methods?

JJO: While camera resolution influences the quality and precision of the measurements, a traditional camera can already obtain good results. However, to study very fast phenomena, such as the impact of a bird in flight on an aircraft fuselage, very fast cameras are needed, which can take 10,000, 100,000 or even 1,000,000 images per second! In addition, for temperature measurements, infrared-sensitive cameras must be used.

 

What is the value of optical measurements as compared to other measurement methods?

JJO: Traditionally, a strain gauge is used to measure the deformation of a material. A strain gauge is a sensor that is glued or welded to the surface of the material to provide an isolated indication of its deformation. This gauge must be as nonintrusive as possible and must not alter the object’s behavior. The same problem exists for temperature measurements. Traditional techniques use a thermocouple, a temperature sensor that is also welded to the surface of the material. When the sensors are very small compared to the material, they are nonintrusive and therefore do not pose a problem. Yet for some applications, the use of contact sensors is impossible. For example, at IMT Mines Albi we worked on the deformation of a parachute when it inflates. But the canvas contained a lining only a few micrometers thick. A gauge would have been difficult to glue to it and would have greatly disrupted the material’s behavior. In this type of situation, photomechanics is indispensable, since no contact is required with the object.

Finally, both the gauge and the thermocouple provide only pointwise information, at the spot where the sensor is attached. You won't get any information about a spot just ten centimeters away from the sensor. The problem in mechanics, however, is that most of the time we do not know exactly where the deformation or temperature information will be needed. The risk is therefore that of not welding or gluing the sensors at the spots where the measurement is most relevant. Optical methods, by contrast, provide field information: a deformation field or a temperature field. We can therefore view the material's entire surface, including the areas where the deformation or temperature gradient is most significant.

 


Top, a material instrumented with gauges (only 6 measurement points). Middle, the same material to which speckled paint has been added to implement the optical DIC technique. Bottom, the deformation field measured via DIC (hundreds of measurement points).

What are the limitations of photomechanics?

JJO: In the beginning, photomechanical methods based on cameras could only measure surface deformations. But over the last five or six years, an entire segment of photomechanics has begun to focus on deformations within objects. These new techniques require specific sensors: tomographs. They make it possible to take X-ray images of materials, which reveal core deformations after computer processing. The large volumes of data this technique generates raise big data issues.

As for temperature, contactless measurement within a material's core is more complicated. A thesis was recently defended at IMT Mines Albi on a method that measures the temperature in a material's core using the fluorescence phenomenon. The results are very promising, but research must continue before industrial applications can be achieved.

In addition, despite its many advantages, photomechanics has not yet fully replaced strain gauges and thermocouples, because optical measurement techniques have not yet been standardized. Typically, when measuring a deformation with a gauge, the method of measurement is standardized: what type of gauge is it? How should it be attached to the material? A precise methodology must be followed. In photomechanics, whether in the choice, calibration and positioning of the camera, or in the subsequent image processing, everything is variable, and everyone creates his or her own method. In terms of certification, some industrial stakeholders therefore remain hesitant about using these methods.

There is also still work to be done in assessing measurement uncertainties. The image acquisition chain and processing procedure can be complex, and errors can distort the measurements in any stage. How can we ensure there are as few errors as possible? How can we assess measurement uncertainties? Research in this area is underway. The long-term goal is to be able to systematically provide a measurement field with a range of associated uncertainties. Today, this assessment remains complicated, especially for non-experts.

Nevertheless, despite these difficulties, the major industries that need to define the behavior of materials, such as the automotive, aeronautics and nuclear industries, all use photomechanics. And although progress must be made in assessing measurement uncertainties and establishing standardization, the results these optical methods achieve are often of better quality than those of traditional methods.

 


What are the latest innovations in additive manufacturing?

Although additive manufacturing is already fully integrated into industrial processes, it is continuing to develop thanks to new advances in technology. The Additive Manufacturing Tech’Day, co-organized by IMT Mines Alès and Materiautech – Allizé-Plasturgie, brought together manufacturers and industry stakeholders for a look at new developments in equipment and material. José-Marie Lopez Cuesta, Director of the Materials Center at IMT Mines Alès, spoke with us about this event and the latest innovations in 3D printing.

 

What were the objectives of the Additive Manufacturing Tech’Day?

This event, which brought together nearly ninety people, was co-organized by IMT Mines Alès and Materiautech, which is a network of institutions that organizes educational, technological and business activities on different plastic materials and processes for manufacturers and students. It provided an opportunity for several industry stakeholders to present their new developments in materials, tools and software through a series of conferences and demonstrations.

For us as researchers, the main objective of this tech day was to present our strategy in this area and build partnerships, particularly with manufacturers, with the aim of initiating projects.

 

What research projects are you currently working on in the area of additive manufacturing?

We have had the machines in the laboratory for a little over a year now, and we are beginning to launch projects. We just started a project focused primarily on engineering, for manufacturing an orthopedic brace, a medical corset. We also have a project in the initial development stages on SLS (Selective Laser Sintering) additive manufacturing technology, in partnership with a company based in Alès, and with potential funding from the region.

 

Has industry successfully taken advantage of 3D printing technologies?

Yes, absolutely. Today, 3D printing is seen as one of the major advanced manufacturing technologies.  It is developing very quickly, with the emergence of new machines and new materials. As a laboratory, we want to be a part of this development.

For manufacturers, the goal is to develop new products with original shapes that could not be formed using traditional processes, while ensuring that they are durable and possess the mechanical properties required for their use.

Although it was initially used for rapid prototyping, 3D printing is now being used in all industrial sectors, particularly in the aerospace and medical industries, due to the complexity of the parts they produce. In the medical industry, additive manufacturing is used to produce prostheses and orthoses, as well as intracorporeal medical devices such as stents, mesh tubes inserted in the arteries to prevent blockage, and surgical screws. Manufacturing these parts requires the use of biocompatible and approved materials, an aspect mastered by certain companies, which produce these materials as polymer powders or wires adapted to additive manufacturing.

In the aeronautics industry, this technology is used especially for printing very specific parts, for example for satellites. It allows parts to be replaced, especially metallic parts produced using molding techniques, by lighter and more functional 3D-printed parts. These parts are redesigned based on the innovations made available through additive manufacturing, which means they can be produced using as little material as possible, resulting in lighter parts.

Finally, 3D printing is perfectly adapted to manufacturing complex replacement parts for older devices that are no longer on the market. We are moving towards production means that are increasingly customized and flexible.

 

In additive manufacturing, what are the latest innovations in materials?

Materials are being developed that are increasingly complex. Nano-composites, for example, which are plastic materials containing nanometric particles, offer improved mechanical properties, heat resistance and gas-barrier behavior. New bio-composites are also being developed; composed of bio-based components, they have a lower environmental impact than synthetic polymers. Other new materials offer new features, such as fireproofing. We are seeking to enter these areas by building on the expertise already present at the Materials Center of IMT Mines Alès.

 

Beyond new materials, are there any new machines that have introduced significant innovations?

In this field, innovations appear very quickly: new machines are constantly coming onto the market. Some are even able to print several types of materials at the same time, or parts with increasingly complex geometries. We also see greater precision in the components and improved surface finishes.

In addition, one of the main issues is the speed of execution: enormous progress has been made in printing objects at greater speeds. This progress is what made it possible for 3D printing to expand beyond rapid prototyping and start being used for manufacturing production parts. In the automotive industry, for example, additive manufacturing technologies are in direct competition with other production processes.

Finally, 3D printers are more and more affordable. You can find €2,000 or €3,000 machines on the market. You can easily acquire a 3D printer for home use or take a sharing economy approach and use the printer within a joint ownership property. Now anyone can manufacture their own parts, and repair or further develop devices.


When Science Fiction Helps Popularize Science – An Interview with Roland Lehoucq

What is energy? What is power? Roland Lehoucq, an astrophysicist at CEA and professor at École Polytechnique and Sciences Po, uses science fiction to help explain scientific principles to the general public. Star Wars, Interstellar, The Martian… These well-known, popular movies can become springboards for making science accessible to everyone. During his conference on “Energy, Science and Fiction” on December 7th at IMT Mines Albi, Roland Lehoucq explained his approach to popularizing science.

 

What approach are you taking for this conference on “Energy, Science and Fiction”? How do you use science fiction to study science?

The goal of this conference is to use science fiction as a springboard for talking to the general public about science. I chose the cross-cutting theme of energy and used several science fiction books and movies to talk about the topic. This drives us to ask questions about the world we live in: what prevents us from doing the things we see in science fiction? This question serves as a starting point for looking at scientific facts: explaining what energy and power are, providing some of the properties and orders of magnitude, etc. In general, the fictional situations involve levels of energy and power that are so significant that, for now, they are beyond our reach.  Humanity does have a great deal of energy within its grasp, which is why it has been able to radically transform planet Earth. But will this abundance of energy last? Will we someday reach the levels we see in science fiction? I’m not so sure!

My approach is actually the same as that of science fiction itself. It dramatizes scientific and technical progress and is designed to make us think about the consequences of these developments, whether in energy, genetics, artificial intelligence, robotics, etc. It questions reality, but has no qualms about distorting the facts to make a more appealing story. Works of fiction may disregard significant scientific facts, happily ignoring certain physical laws, yet this is not truly a problem. It affects neither the works' narrative quality nor the questions they raise!

Does this type of approach allow you to reach a wider audience? Do you see this at your speaking events?

I don’t know if I am reaching a wider audience, but I do see that those in the audience, both young and old, are delighted to talk about these subjects. I use some of the best-known films, although they are not necessarily the most interesting ones from a scientific point of view. While Star Wars does not feature a lot of high-level thinking, it is nevertheless full of content, including energy, which can be analyzed scientifically. For example, we can estimate the Jedis’ power in terms of watts and rank them. My approach is then to say: let’s imagine this really exists, let’s look at the information we can draw from the film and, in return, what we can learn about our world. Young people respond positively since I use things that are part of their culture. But it works well with other generations too!

What led you to share scientific culture using science fiction as the starting point?

I have loved science since I was 6 years old. I started reading science fiction when I was 13. Then I taught about science as a group leader at astronomy camps from the age of 17 to 23. I have always enjoyed learning things and then talking about the aspects I find to be the most interesting, amazing and wonderful! It comes naturally to me!

Then, in the early 2000s, I decided I wanted to share my knowledge on a larger scale, through books and articles. I quickly got the idea of using fictional literature, comic strips and the cinema as a way of sharing knowledge. Especially since no one was doing it then! If you want to talk about astrophysics, for example, you have people like Hubert Reeves, Michel Cassé, Marc Lachièze-Rey and Jean-Pierre Luminet who are making this knowledge accessible. I did not want to repeat what they were already doing so well. I wanted to break away and do something different, adapted to my tastes!

What advice would you give to researchers on improving how they share scientific culture?

Sharing scientific knowledge is not intuitive for researchers, because it essentially involves the difficult choice of saying, in a limited amount of time, only what is most useful for the general public. Researchers often focus their life's work, intelligence and efforts on a very narrow topic. Of course, they will want to talk about this area of expertise. But to understand the reasons that led a researcher to work in this area, the audience first needs certain prerequisites. If these prerequisites are not provided, or are incomplete, the audience cannot grasp the interest of the subject or the issues being discussed. It is therefore necessary to take the time to explain what researchers regard as general knowledge. For a one-hour conference, this can mean spending forty-five minutes presenting the prerequisites and fifteen minutes explaining the field of research. It requires choosing to serve the field, taking a back seat and avoiding the “specialist syndrome” of talking only about what the specialist sees as important, their 10 or 15 years of research. That is a legitimate approach, but researchers who take it risk losing their audience!

They must also try to make science “friendly”. Science is often seen as something complicated, which requires great effort to be understood. As is often the case, a lot of work is needed to understand the subtleties of these subjects. Our job therefore consists of facilitating access to these areas, and the methods chosen will depend on each individual’s interests. Finally, we must show the general public that science is not an accumulation of knowledge, but an intellectual process, a methodology. We can therefore study science as an educational exercise, using things that are not purely scientific, such as science fiction!

[box type=”shadow” align=”” class=”” width=””]

Roland Lehoucq

An Associate Professor of Physics and a former student of the ENS, Roland Lehoucq is an astrophysicist at the CEA center in Paris-Saclay and teaches at the École Polytechnique and the Institut d'Études Politiques de Paris. He has written numerous popular science books that use science fiction as a starting point, such as La SF sous les feux de la science and Faire de la science avec Star Wars. He recently wrote a book on the dark ideas of physics, Les idées noires de la physique, published by Les Belles Lettres, in collaboration with Vincent Bontems, a philosopher of science, and illustrated by Scott Pennors. Black holes, dark matter, dark energy… the book looks at all these subjects through the eyes of an astrophysicist and a philosopher.

 

[/box]


The European digital union

This issue of Réalités Industrielles is devoted to several subjects central to the European Commission's strategy, such as the data economy, the economic and social functions of online platforms, and cybersecurity.

 

To build a Digital Single Market is to construct Europe’s future. Given the many crises facing Europe, it is more important than ever to project ourselves into the future and lay the foundations for a European Union where all citizens will be able to live better.

We are convinced that our future is digital, since the present is already digital. Day after day, new technology accompanies us as we buy, sell, study or work online. This technology, now part of our environment, is evolving in fields ranging from health to education and culture, not to mention transportation and research and development. It does not reckon with borders.

For this reason, the European Commission has set as one of its ten policy priorities the creation of a Digital Single Market. After six months of exercising this mandate, we presented, in May 2015, an ambitious strategy with no fewer than sixteen major work areas. We stand at the midpoint, having presented half of our proposals to the members of the EU Parliament and Council of Ministers. We want to modernize existing regulations in the key areas of e-commerce, telecommunications, audiovisual media, cybersecurity and copyright law. By doing this, we want to stimulate innovation propelled, in particular, by the data economy. We are delighted to see this issue of Réalités Industrielles devoted to several subjects central to our strategy, such as the data economy, the economic and social functions of on-line platforms, and cybersecurity.

Through its (far from exhaustive) selection of articles by persons active in this domain, this issue of Réalités Industrielles discusses some of the most important topics for conceiving of a digitized European Union.

It opens with a firsthand account from Éric Salvat, a Polish entrepreneur working in data mining, a lively field of activity in all countries, whether in the EU or not.

This article is followed by a series of viewpoints on a “digital Europe”, focusing respectively on the geopolitics of data, the geopolitics of European policies, and the policy of constructing common interests and defending EU achievements.

Policies directly related to the Digital Single Market are then brought under discussion: competition, integration of the socially vulnerable, personal data and digital platforms, defense and security, and health. Topics related to data or platforms are, directly or indirectly, well represented herein.

 

Foreword by Andrus Ansip, vice-president of the European Commission in charge of a Digital Single Market and Günther Hermann Oettinger, European Commissioner on the Digital Economy and Society

Introduction by Jean-Pierre Dardayrol, engineer from École des Mines, Conseil Général de l’Économie

Download full texts of the articles

Roisin Owens received an ERC Consolidator Grant to carry on her work in the field of bioelectronics.

Roisin Owens scores a hat-trick with the award of a third ERC grant

In December 2016, Roisin Owens received a Consolidator Grant from the European Research Council (ERC). Following her 2011 Starting Grant and her 2014 Proof of Concept Grant, it is therefore the third time the ERC has rewarded the quality of the projects she leads at Mines Saint-Étienne, in France. Beyond being a source of funding, this is also prestigious peer recognition, since only around 300 Consolidator Grants are awarded to researchers each year[1]. We asked Roisin Owens a few questions to better understand what a new ERC grant means for her and her work.

 

How do you feel now that you have been awarded a Consolidator Grant by the ERC?

Roisin Owens: I feel more confident. When I was awarded the Starting Grant in 2011, I thought I had been lucky, as if I had just been in the right place at the right time. But now I don't think it is luck anymore. I think there is true value in my work. Of the 13 researchers who evaluated my proposal for the Consolidator Grant, 12 rated it as “outstanding” or “very good”. I knew the idea was good, but I also knew the grant was very competitive: there are some world-class scientists in the running for it!

 

What does the Consolidator Grant give you that the Starting Grant did not?

RO: The Consolidator Grant brings greater scientific recognition. The Starting Grant rewards future potential and supports a young, promising researcher: if you have a good idea, a good thesis and some scientific publications, you can be eligible. For the Consolidator, you need to have already published at least ten articles as a postdoctoral researcher or project leader (principal investigator). This means the grant is dedicated to researchers who already have scientific credibility, and for whom the ERC will consolidate a mid-career position.

 

How has your research developed across the ERC grants you have received?

RO: The Starting Grant allowed me to start my work in bioelectronics. As a biologist, I wanted to develop a set of technologies based on conducting polymers to measure biological activity in a non-invasive way. This is what I did in the Ionosense project. With the Consolidator, I want to go deeper. Now that the technologies are functional, I will try to answer questions that have never even been asked, because researchers did not have access to the tools to do so.

Read more about the scientific work of Roisin Owens: When biology meets electronics

 

Which tools do your technologies give to researchers?

RO: When scientists work on cancer or on the effects of microorganisms on our biological system, they have to use animal experiments. This takes time and is expensive, notwithstanding the ethical concerns. Furthermore, the mouse is not necessarily a good model of the human organism. My idea is to perform in vitro modelling of biological systems that accurately reflects human physiology. To do this, I mimic the human body using complex 3D microfluidic systems that recreate fluid circulation in organs. I then include electronics to monitor a variety of effects on this system. For me, it is a way of adapting technology to the reality of biology. Currently, the opposite usually happens in laboratories: researchers force biology to adapt to the equipment!

 

Do you think you could be at this point in your research if you had not been awarded your ERC grants?

RO: Definitely not. First of all, the Starting Grant opened doors for me. It gave me credibility and the possibility to build partnerships. For example, when I got the grant, I was able to recruit a postdoctoral fellow from Stanford, a top university in the US; I am not sure I could have recruited that person without it. ERC grants are the only ones in Europe to give you such independence. They provide €1.5 to 2 million over five years! This means you don't spend so much of your time looking for research funding and can really focus on your work. The alternative would be to go through a national funding process, like those of the French National Research Agency (ANR), but that is not on the same scale: we are talking about €400,000 per project.

 

Between your Starting Grant and your Consolidator Grant, you received a Proof of Concept (POC) Grant. What was it for?

RO: It is a small grant compared to the others: €150,000 over a single year, dedicated to researchers who have already held another ERC grant. As its name suggests, it provides extra money to generate a proof of concept. If one of the technologies developed during your first grant shows commercial potential, you can explore it with a view to a more concrete application. In our first project, Ionosense, one part of the work looked promising in terms of commercialization. With the POC, we were able to build a prototype. We have now patented a technology for in vitro toxicology tests, and we are currently negotiating with a company to produce the prototype. For me, it is very important to find applications for my research that could be useful to society, since my work is funded through taxes paid by European citizens.

 

Since we are talking about it: what are grants specifically used for?

RO: Essentially, to build a team. We carry out multidisciplinary work, so we need a wide range of expertise. I have broad expertise in several fields of biology, and I am starting to acquire a good knowledge of electronics, but I can't cover everything. To help me, I recruit young, talented people who are passionate about the key topics of my project: microfluidics, analytical chemistry, 3D modelling of cellular biology, etc. Since I recruit them just after they have finished their theses, they are up to date with the latest technologies. It is also important for me to hire young researchers and train them, to stop the brain drain towards foreign countries.

 

Every scientist who gets an ERC Grant is doing valuable work. But with three ERC grants awarded to you, there is something more than quality. What is your secret?

RO: First, I am a native English speaker. I was born in Ireland and was bilingual in Gaelic and English from an early age. This is a big advantage when you write project proposals. I also like to take time to let my ideas mature and blossom. The ERC projects I submitted were not just written in a few weeks before the deadline; they were carefully thought out over several months. I also have to thank the Cancéropôle PACA, which provided financial support for me to consult an advisor on project building. And I have to admit I truly have a secret weapon, two actually: my sisters. One is an editor for a Nature journal, and the other works on communications in museums. Every time I write a proposal, I send it to them so they can help me polish it!

 

[1] In 2016, 314 researchers were awarded a Consolidator Grant out of the 2,274 projects evaluated by the ERC (success rate: 13.8%). In 2015, 302 researchers were awarded the same grant out of 2,023 projects (14.0%). Source: ERC statistics.


4 ERC Consolidator Grants for IMT

The European Research Council has announced the results of its 2016 Call for Consolidator Grants. Out of the 314 researchers to receive grants throughout Europe (across all disciplines), four come from IMT schools.

 

10% of French grants

These four grants represent roughly 10% of all those obtained in France, where 43 project leaders at French institutions received awards, placing France in 3rd position behind the United Kingdom (58 projects) and Germany (48 projects).

For Christian Roux, the Executive Vice President for Research and Innovation at IMT, “this is a real recognition of the academic excellence of our researchers on a European level. Our targeted research model, which performs well in our joint research with the two very active Carnot institutes, will also benefit from ERC’s more fundamental work to support major scientific breakthroughs.”

Consolidator Grants reward experienced researchers with €2 million to fund their projects over a duration of five years, providing them with substantial support.

 

[one_half][box]Francesco Andriulli, Télécom Bretagne
After Claude Berrou in 2012, Francesco Andriulli is the second IMT Atlantique researcher to be honored by Europe as part of the ERC program. He will receive a grant of €2 million over five years, enabling him to develop his work in the field of computational electromagnetism.
[/box][/one_half]

[one_half_last][box]Yanlei Diao, Télécom ParisTech
Yanlei Diao, a world-class scientist recruited jointly by École Polytechnique, the Inria Saclay – Île-de-France Centre and Télécom ParisTech, has been honored for the scientific excellence of her project as well as her innovative vision in terms of “acceleration and optimization of analytical computing for big data”.
[/box][/one_half_last]

[one_half][box]Petros Elia, Eurecom
Petros Elia is a professor of Telecommunications at Eurecom and has been awarded this ERC Consolidator Grant for his DUALITY project (Theoretical Foundations of Memory Micro-Insertions in Wireless Communications).
[/box][/one_half]

[one_half_last][box]Roisin Owens, Mines Saint-Étienne
This marks the third time that Roisin Owens, a Mines Saint-Étienne researcher specialized in bioelectronics, has been rewarded by the ERC for the quality of her projects. She received a Starting Grant in 2011, followed by a Proof of Concept Grant in 2014.
[/box][/one_half_last]