
No, employees are not afraid of digital transformation

This article was originally published (in French) on The Conversation. By Emmanuel Baudoin, Institut Mines-Télécom Business School.


The 2019 edition of the study (in French) “French employees in the age of digital transformation”, conducted by the HRM Digital Lab at Institut Mines-Télécom Business School, shows that French employees are not afraid of digital transformation, and some even hope to see their company take greater steps towards it.

This study was conducted on a representative sample of just over 1,000 French employees and was managed by Opinion Way with support from CFA-EVE and the Essonne ANDRH (French Association of Human Resources Directors). An additional study was carried out with 100 decision-makers in the private sector, located in the Paris region, in order to collect their perceptions of digital transformation in their companies.

A multifaceted transformation

The first finding is that 90% of employees report that digital technology has had an impact on their job. This headline figure is unambiguous, but it covers a wide variety of situations: 48% of these employees consider that digital technology has had a great impact on their job, while 42% say it has had only a moderate impact, if any. As an illustration of this variety, when asked how important digital technology was to their work, 11% said it represented the core part of their work, 35% said it represented a significant part of their work, 32% said it was used only to support their work, 14% said it was little used in their work, and 9% said they did not use digital technology at all in their work.

The interviews carried out with decision-makers from companies of all sizes and industries told the same story: digital technology is here to stay. 98% of those interviewed said digital technology has had an impact on their company. 65% consider that it has had a strong impact, while 11% feel that it has had a minor impact, if any. They report that all company processes have been affected: customer experience, supply chain management, administrative process management, the way products and services are developed, the way work is organized, the approach to managing teams, and employee experience.

Employees have a relatively high digital maturity level

Far from being pessimistic about the impact digital technology has had on their job, 84% of the employees say that it is making their work more interesting or that it has not had a major impact on their work. When asked to choose a response that describes their relationship with digital technology, 43% chose a very positive response, 47% chose a neutral response, while 11% chose a negative response. Another significant finding was that 40% hope their companies will go further in their digital transformation while 13% do not.

This overall positive perception of digital technology contributes to employees’ digital maturity level. This level can be defined as an employee’s overall relationship with digital technology at work, their perceived level of proficiency with digital technology, and the extent to which they use it to carry out their work activities.

As far as perceived level of proficiency is concerned, 61% consider that they have a very high or high level of proficiency, whereas 6% say that they have a low or very low level. At the same time, they are aware of their potential for improvement. 62% of respondents felt that they wasted time every day due to a lack of proficiency in certain digital skills.

A final point of interest is that the results also confirm the emergence of a new type of employee, one who can be called a ‘self HR employee,’ as identified in the first edition of this study. These ‘self HR’ employees take advantage of digital technology for a wide variety of purposes, including learning independently and developing their skills (a form of informal digital learning). They also implement strategies to enhance their personal brand as employees, or even to sell their skills.

 

French employees are optimistic about digital transformation! (Emmanuel Baudoin on Xerfi Canal, 2019).


Emmanuel Baudoin, Associate Professor in HR, Institut Mines-Télécom Business School

The original version of this article (in French) was published on The Conversation under a Creative Commons license. Read the original article.


Why have girls been turning their backs on computer science?

Column written in partnership with The Conversation.
By Chantal Morley from Institut Mines-Télécom Business School.


For an organizer of a careers fair or an engineering school open day, the gender bias in digital professions is clear. Boys are far more likely than girls to flock to stands about computer science and other digital professions. It is easy to assume that this is how things have always been. Our opinions are based on the stereotypes we see every day, with ‘geeks’ almost always being portrayed as male.

It is easy to forget that many female mathematicians played a key role in the history of computing. Examples of these remarkable women are Grace Hopper, who developed the theory behind the compiler and built the first prototype of one in 1952; Mary Keller, one of the first people to earn a doctorate in computer science, in 1965; Kathleen Booth, who was a pioneer in character recognition and machine translation; and Barbara Liskov, who programmed one of the first chess games in 1968 and became a professor of artificial intelligence at MIT.

Research and “coding stations”

After leading pioneering research, Adele Goldberg helped develop the first computer with a graphical user interface in 1973, while Alice Recoque designed the Mitra line of minicomputers, including the Mitra 15, which was a huge commercial success. It is worth noting that MIT, in its pursuit of academic excellence, had an extremely forward-thinking policy of embracing diversity: between 1965 and 1985, the proportion of female computer science students there rose from 5% to nearly 30%.

During these pioneering years, women were also very much involved at an operational level. In the 1940s, the first electronic computer, ENIAC, was fully programmed by six female mathematicians. In the early 1950s, 40% of programmers at computer manufacturer Eckert-Mauchly were women. Until 1960, in Great Britain’s public service, computer “coding stations” were almost exclusively “manned” by women.

In 1955, a team of four female programmers started the first computer services company, Computer Usage Company, in the United States. Three years later, Elsie Shutt founded Computations Inc., which allowed mothers to pursue a career in computer science by working from home. This company lasted 50 years. In 1962, in Great Britain, Stephanie Shirley founded Freelance Programmers at the age of 29 with the same aim. The company was an international success until it was bought by Steria in 2007.

The turning point of the 1980s

So, how did the situation change? What caused computer science to become a male-dominated sector? The first turning point came in the 1980s and was due to recruitment procedures. To make it easier to whittle down the large number of job applicants they were receiving, an American company defined the psychological profile of a ‘good programmer’.

The profile was based on a sample of men working in a military environment and had two major characteristics: someone who was slightly less social than the average person, and someone who enjoyed typically ‘masculine’ activities. This profile was far from what companies were looking for in the 1940s and 1950s, when they recruited patient, logical, imaginative people who liked crossword puzzles, played chess or used knitting patterns! Nevertheless, the profile came to be widely used.

Secondly, as the need for computing staff increased, salaries became relatively high. At the time, female coders were not paid well, and it was unthinkable for a woman to supervise teams of both men and women. This caused the leading computing employer in Great Britain, the public sector, to stop recruiting competent, experienced and motivated female programmers in favor of men. This ultimately led to computer science becoming male-dominated.

The third factor was a takeover by the academic world, working hand in hand with the industry and excluding women. In 1968, world-leading computer science companies began to understand the importance of software in relation to hardware. This led them to organize a conference sponsored by NATO which brought together global programming specialists. However, no women, not even Grace Hopper or Jean Sammet, were invited to take part.

As well as this, universities encouraged industrialists to talk about ‘software engineering’ and ‘software engineers’ to make the career path sound more respectable. However, the term ‘engineer’ made computer science courses sound traditionally masculine.

Meanwhile, the main American professional association (DPMA, Data Processing Management Association), which was overwhelmingly made up of men, created a professional skills qualification, open to both men and women, to improve skills in the sector. However, due to the hours of study required and the unequal division of housework between men and women, the qualification was in practice less accessible to women.

The influence of popular culture

In 1965, women represented 30% of the workforce in programming in the USA. In 1982, 35% of computer science jobs in France were held by women. From the late 1960s, computers gradually became widely used in society and carved out a presence in popular culture. However, this was often done in ways which did not allow a place for women. In Stanley Kubrick’s film, 2001: A Space Odyssey, the computer has a male voice and the relationship between humans and computers leaves no room for women.

In computer advertisements in the late 1970s, the target consumers were executives; in France, 80% of executives were male in 1985. Computers were marketed to men in a practical way, with the idea that they could work from home. But, to a large extent, they were also marketed as something fun. At home, PCs were not used equally either: men spent more time on them than women, and sons had more access to them than daughters, often with their fathers’ encouragement.

When people started buying personal computers for children, boys were for a long time the main target audience and girls were sidelined. This portrayal of computers in society gradually spread and contributed to computers being viewed as a ‘masculine’ thing. All the more so as the free software movement, whose communities have regularly excluded women, constructed the figure of the hacker as the model developer.

Algorithms and bias

Finally, with the rise of social media and automated advertising, we have seen how algorithms reproduce gender bias, which reinforces discrimination against women in digital education.

All these factors have caused women to gradually withdraw from computer science, and despite various initiatives, this trend has not been reversed. Since women’s role in computer science was never very visible, society has now forgotten the important role they once played.

If we look at countries which have equal numbers of men and women in their computer science sectors, such as India and Malaysia, as well as universities that have achieved lasting gender parity in these fields, such as CMU, NTNU and Harvey Mudd College, we can see how inclusive policies at a local level have enabled women to regain their place in the digital professions.

Also read on I’MTech: In IT professions, diversity is all about inclusion, not exclusion


Chantal Morley, Professor in Information systems, Institut Mines-Télécom (IMT).

The original version of this article (in French) was published on The Conversation under a Creative Commons license. Read the original article.

 


Our top ten articles of 2019!

At the beginning of 2020, I’MTech takes a look back at 10 of the most noteworthy articles from the past year. What scientific topics made headlines at the close of the 2010s and the dawn of the 2020s? A look back at this pivotal year in which, unsurprisingly, AI and the environment feature prominently… but not exclusively!

 

#1 Some people are paid fractions of cents to train algorithms. On social media sites, we all do it for free

#2 It’s leaving the beach and coming to cities. After air and water, sand is the third most-used resource for human beings.

#3 Do we want AI to be effective or to be able to understand what it’s doing?

#4 In rivers or on land, some plants are our best allies in the fight against heavy metals.

#5 Composite materials offer many advantages: weight, lifespan, mechanical performance etc. But for the time being, they also have one major drawback: they are difficult to recycle.

#6 We no longer simply buy a product, but the services that go with it too. Companies clearly understand this.

#7 Water stress, impact on biodiversity, natural land transformation etc. Calculating the environmental footprint of a product or service is no simple task.

#8 Between optimizing sales and helping consumers, how AI is slowly making its way into our supermarkets.  

#9 Autonomous cars will soon be on our roads. But we still have to get them to communicate with one another!

#10 Do mobile applications and web services really help patients live better with their illnesses?


MADEin4: digital twinning and predictive maintenance for industry

The European MADEin4 project was launched in April 2019 for a three-year period. It aims to help semiconductor manufacturers and equipment suppliers play an active role in the continuous improvement of their equipment. How? By relying on new digital twinning and predictive maintenance technologies. Agnès Roussy and Valéria Borodin, research professors at Mines Saint-Étienne, a partner in the MADEin4 project, explain the context that gave rise to this project and discuss the scientific hurdles to overcome.

 

What was the context for developing the MADEin4 project?

Agnès Roussy: The MADEin4 project (Metrology Advances for Digitized ECS Industry 4.0) is an ECSEL project (Electronic Components and Systems for European Leadership). Its aim is to support and bring together the semiconductor industry in Europe in the transition to digital technology.

What is the overall goal of this project?

Valéria Borodin: To increase production output without affecting reliability levels in the manufacturing of electronic devices, the quality of which must comply with the increasingly demanding requirements of the highly competitive semiconductors market.

And how are you going about this?

AR: The project will rely on an Industry 4.0 approach to improve productivity and facilitate the integration of digital technology into the organization of manufacturing processes for semiconductor and equipment manufacturers, going beyond the state of the art. To do so, two complementary boosters will be leveraged in the development of a pilot line: a physical accelerator, based on next-generation metrology and inspection equipment for the microelectronics industry; and a digital accelerator, the digital twin (see box), integrating artificial intelligence technology to improve output and equipment performance prediction.

The technique of digital twinning is used by manufacturers to monitor the operational status of their equipment (manufacturing, metrology, inspection). Digital twins of physical equipment evolve over time using data collected by sensors to measure the health status of equipment, in order to prevent or anticipate breakdowns.
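To make the idea concrete, here is a deliberately simplified sketch in Python of what a digital twin driven by sensor data can look like. All names, thresholds and the health formula are invented for this illustration; the project's actual twins couple physics-based models and AI with real fab data.

```python
from dataclasses import dataclass, field

@dataclass
class EquipmentTwin:
    """Toy digital twin mirroring one machine's health from sensor readings.
    Purely illustrative: real twins combine physics models, AI and fab data."""
    name: str
    alert_threshold: float = 0.8    # plan maintenance below this health index
    smoothing: float = 0.3          # weight given to each new observation
    health: float = 1.0             # 1.0 = nominal, 0.0 = failed
    history: list = field(default_factory=list)

    def update(self, vibration_g: float, temperature_c: float) -> None:
        # Map raw sensor readings to a 0..1 instantaneous condition score
        # (the calibration constants are made up for the example).
        condition = max(0.0, 1.0 - 0.5 * vibration_g
                        - 0.01 * max(0.0, temperature_c - 60.0))
        # The twin's state evolves over time with the incoming data.
        self.health = (1 - self.smoothing) * self.health + self.smoothing * condition
        self.history.append(self.health)

    def needs_maintenance(self) -> bool:
        return self.health < self.alert_threshold

# A machine that drifts out of its nominal regime over four readings:
twin = EquipmentTwin("etcher-04")
for vibration, temperature in [(0.1, 55), (0.3, 62), (0.8, 75), (1.2, 80)]:
    twin.update(vibration, temperature)
    if twin.needs_maintenance():
        print(f"{twin.name}: schedule maintenance (health={twin.health:.2f})")
```

In a real deployment the condition score would come from a calibrated degradation model rather than a hand-tuned formula, but the principle is the same: the twin's state is continuously updated from sensor data so that maintenance can be scheduled before a breakdown.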

What technological and scientific challenges do you face?

VB: The development of digital twins and simulation models for managing and coordinating a production unit at different levels of decision-making poses a number of challenges, in particular, in terms of consistency of digital technology and decision-making across all industrial activities. In this regard, the help and expertise of semiconductor manufacturers and equipment suppliers (manufacturing and metrology) play a pivotal role in confirming the usefulness and industrial feasibility of the solutions we propose as academics.

How are Mines Saint-Étienne researchers contributing to the project?

AR: One of the research areas in which Mines Saint-Étienne’s Manufacturing and Logistics Sciences department (SFL) is primarily active focuses on microelectronic manufacturing. This involves advanced process control, quantitative management of operations in the manufacturing process, and decision support at different levels (operational, tactical and strategic). As part of the MADEin4 project, we seek to explore opportunities and identify the limitations of new digital technologies in the intensive use and analysis of the massive quantities of data collected by inspection and metrology equipment.

Who are your partners for this project, and which collaborations are important for your work?

VB: The MADEin4 project brings together the expertise of 42 industrial and academic partners from 11 countries. Our key industrial partners for this project are STMicroelectronics in Rousset and Crolles. This project, among others, allows us to continue the long-standing, successful academic collaboration between the Manufacturing and Logistics Sciences department at Mines Saint-Étienne’s Provence Microelectronics Center (CMP) and the ST sites of Rousset and Crolles, with whom we’ve worked for over 15 years. Many equipment suppliers are also involved in this project, so we’ll have the opportunity to work with them more closely on the equipment. The same goes for the academic partners involved: this European project will help foster new opportunities for collaboration through PhD theses or future calls for projects.

What are the expected benefits?

AR: The expected benefits of the MADEin4 project closely reflect the scientific and strategic priorities of Mines Saint-Étienne and the Provence Microelectronics Center (CMP), which promote a number of important topics: the industry of the future (Industry 4.0) and artificial intelligence (AI). Through the MADEin4 project, we seek to provide process control solutions for semiconductor manufacturers, explore opportunities for applications of digital twinning technology, strengthen the partnership with semiconductor manufacturers, and increase international recognition for the CMP on topics related to microelectronic manufacturing.

What are the important steps coming up for the project?

VB: The MADEin4 project started just over six months ago. This initial phase is exciting because everything seems possible. As for Mines Saint-Étienne, the industrial data soon to be provided by the different partners will allow us to confront our research with the realities of industry. By the end of the first year, the research findings will be published in international journals and presented to the scientific communities involved.

Find out more about the MADEin4 project


A tribe of indomitable codling moths

In agriculture, biological alternatives to pesticides are being sought for environmental and health reasons. Use of a virus as a biocontrol agent for crop pests has become relatively widespread. One such example is Cydia pomonella granulovirus. It has been used for decades to target a crop pest that is fond of apples and pears: the codling moth. Miguel Lopez-Ferber, a researcher at IMT Mines Alès, has been working on this topic since 2005, the year in which the larvae of this insect developed resistance to a commercial product made from this granulovirus.

 

Cydia pomonella. Common name: codling moth. Adult moths measure approximately 18 millimeters long and are a brownish-gray color. Shortly after they hatch, larvae bore into orchard fruits and feed on them from the inside. The apples and pears are damaged and rot, so they are no longer suitable to be sold. However, these insects are very susceptible to a virus: their granulovirus, which is known as CpGV. If they come into contact with it, the larvae become ill and die, leaving the orchards healthy and untouched.

Unfortunately for apple trees, codling moths have developed resistance to CpGV, which poses a number of problems. For one, farmers need to find a new way to protect their orchards. And manufacturers need to determine whether it is possible to improve the viral preparation, find a substitute, or if production must be stopped. Then there’s the scientific question: “Codling moths have been in contact with this virus for millions of years and have never developed resistance to it. Why now?” wonders Miguel Lopez-Ferber, a researcher at IMT Mines Alès: “If there had been widespread resistance in the past, we would no longer find this virus in nature.” 

One possible explanation is that “we’ve underestimated the inventiveness of nature,” says Miguel Lopez-Ferber. “We’ve acted as if it were a chemical product: for years, exactly the same viral solution has been spread over orchards.” In nature, when an insect repeatedly comes into contact with the same chemical product, it adapts and finds a way to resist it, so the product no longer works as well. But viruses, for their part, also adapt and find new ways to reach the insects if nothing prevents them from doing so: the two are in constant co-evolution.

“It works the same way with humans, with the flu virus, for example,” explains the researcher. “We develop defenses to protect ourselves from the virus, but it evolves and comes back stronger the next year.” And CpGV exists in different forms throughout the world, with slight variations in genotype (the sum total of an individual’s genes). The solution available on the market, however, was developed from a single isolated genotype of this virus.

Research to overcome resistance

With CpGV, the same virus isolate has been applied massively for years. This means that codling moth larvae may not be resistant to other isolates of the virus. The different genotypes of the virus have been divided into five groups, from A to E. The “A” group is most similar to the Mexican isolate, which has been used historically. The researchers found that the other groups did infect resistant larvae. At first, however, these other viral isolates were less effective than the one originally used: a greater quantity of product was needed for the same plot of land. But with a little selection, their performance reached the same level as that of the original product. “We were also worried that we would observe resistance to these new genotypes in the future,” says Miguel Lopez-Ferber. But it is likely that this resistance phenomenon will not appear if there is greater viral diversity.

The researchers therefore tried another method: they combined the Mexican isolate, to which the larvae had become resistant, with another genotype of the virus that infected the larvae. And they discovered that together, the two were even more effective in infecting the larvae. In a way, the second genotype “opened the door to the one that had been blocked until then,” explains Miguel Lopez-Ferber, “but we still don’t really understand how it works.” The researchers are therefore studying how the different forms of the virus interact with one another to infect larvae. They could then use this knowledge to develop one or several optimal mixtures, by appropriately combining the specific characteristics of each genotype.

“Viral diversity is an asset, but we don’t yet fully understand how it works,” explains the researcher. “Imagine, for example, if we wanted to colonize a desert island. If all the colonists were miners, there’d be a lack of skills for building houses, planting crops etc. We need different, complementary jobs. That’s what we get when we combine several genotypes of a virus – except we don’t really know what their jobs are. We just know they work better together.”

And studying the virus’s behavior in codling moth larvae is no simple task. Let’s say that a type A and a type B virus are dispersed in an orchard. How can we determine whether a larva has absorbed both forms of the virus or only one? Or whether one of them has prevented the other from developing? In order to understand how the tasks are distributed, the researchers would need to be able to track A and B as they colonize the larva. The molecular tools available today are not well suited to this. Miguel Lopez-Ferber’s team is currently working in partnership with NeoVirTech, a Toulouse-based company, to develop a better technique for tracking viruses.


The origins of granulovirus

“Using a natural predator to protect our harvests is not a new idea,” says Miguel Lopez-Ferber. “We domesticated cats to combat mice. It’s the same principle with granulovirus.”

It was in Mexico in 1964 that the codling moth granulovirus (CpGV) was discovered. Codling moth larvae were found dead and researchers sought to determine the cause. They then isolated the virus responsible: the Mexican isolate of CpGV. Shortly after, other forms of the virus were observed in Russia, followed by the United Kingdom. Slight variations existed between the viruses: subtle differences in genotype, the sum total of an individual’s genes. The Mexican isolate was favored for a potential commercial product because it was more homogeneous. This made it easier to characterize in order to comply with regulatory criteria for insecticides, which are equivalent for phytosanitary and biological products and require rigorous identification of a product’s makeup.

After 25 years of research, the viral solution was ready for use and authorized for sale. In practice, it is used like a chemical product. A concentrated liquid solution is prepared in a tank and is then dispersed in a spray of fine droplets, ideally covering the entire orchard in a uniform manner. Starting in the 1990s, the product was widely used and applied several times a year. Until 2005, that is, when it was observed that codling moths were still present after the treatment.


Guillaume Balarac, turbulence simulator

Turbulence is a mysterious phenomenon in fluid mechanics. Although it has been observed and studied for centuries, it still holds secrets that physicists and mathematicians strive to unlock. Guillaume Balarac is part of this research community. A researcher at Grenoble INP (at the LEGI Geophysical and Industrial Flows Laboratory), he uses and improves simulations to understand turbulent flows better. His research has given rise to innovations in the energy sector. The researcher, who has recently received the 2019 IMT-Académie des Sciences Young Scientist Award, discusses the scientific and industrial challenges involved in his field of research.

 

How would you define turbulent flows, which are your research specialty?

Guillaume Balarac: They are flows with an unpredictable nature. The weather is a good example for explaining this. We can’t predict the weather more than five days out, because the slightest disturbance at one moment can radically alter what occurs in the following hours or days. It’s the butterfly effect. Fluid flows in the atmosphere undergo significant fluctuations that limit our ability to predict them. This is typical of turbulent flows, unlike laminar flows, which are not subject to such fluctuations and whose state may be predicted more easily.

Apart from air mass movements in the atmosphere, where can turbulent flows be found?

GB: Most of the flows that we encounter in nature are actually turbulent. The movement of oceans is described by turbulent flows, as is that of rivers. The movement of molten masses in the Sun generates a turbulent flow. This is also the case for certain biological flows in our bodies, like blood flow near the heart. Beyond nature, these flows are found in rocket propulsion, in the motion of wind turbines, and in hydraulic or gas turbines.

Why do you seek to better understand these flows?

GB: First of all, because we aren’t able to! It’s still a major scientific challenge. Turbulence is a striking example: it has been observed for centuries. We’ve all seen a river or felt the wind. But the mathematical description of these phenomena still eludes us. The equations that govern turbulent flows have been known for two centuries, and the underlying mechanics have been understood since ancient times. And yet, we aren’t able to solve these equations, and we’re ill-equipped to model and understand these phenomena.

You say that researchers can’t solve the equations that govern turbulent flows. Yet, some weather forecasts for several days out are accurate…

GB: The iconic equation that governs turbulent flows is the Navier-Stokes equation. That’s the one that has been known since the 19th century. No one is able to find a solution with a pencil and paper. Proving that this equation even has well-behaved, unique solutions is one of the seven Millennium Prize Problems established by the Clay Mathematics Institute, and the person who solves it will be awarded $1 million. That gives you an idea of the magnitude of the challenge. To get around our inability to find this solution, we either try to approximate it using computers, as is the case for weather forecasts (with varying degrees of accuracy), or we try to observe it. And finding a link between observation and equation is no easy task either!
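For reference, the equation in question, written here for an incompressible fluid, reads:

```latex
% Incompressible Navier-Stokes equations: momentum balance plus
% mass conservation. The nonlinear term (u . grad) u couples eddies
% of all sizes and is what makes the problem so hard.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0
```

where u is the fluid velocity field, p the pressure, ρ the density and ν the kinematic viscosity.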

Beyond this challenge, what can a better understanding of turbulent flows help accomplish?

GB: There is a wide range of applications that require an understanding of these flows and the equations that govern them. Our ability to produce energy relies in part on fluid mechanics, for example. Nuclear power plants function with water and steam systems. Hydroelectric turbines work with water flows, as do water current turbines. For wind turbines, it’s air flows. And those are just the examples from the energy sector.

You use high-resolution simulation to understand what happens at the fundamental level in a turbulent flow. How does that work?

GB: One of the characteristics of turbulent flows is the presence of eddies. The more turbulent the flow, the wider the range of eddy sizes it contains. The principle of high-resolution simulation is to define billions of points in the space in which the flow occurs, and to calculate the fluid velocity at each of these points. This is called a mesh, and it must be fine enough to describe the smallest eddy in the flow. These simulations use the most powerful supercomputers in France and Europe. And even with all that computing power, we can’t simulate realistic situations, only academic flows in idealized conditions. These high-resolution simulations allow us to observe and better understand the dynamics of turbulence in canonical configurations.
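To see why the mesh grows so fast, a classical order-of-magnitude estimate from turbulence theory (not a figure from the interview) relates the size ratio between the largest eddies, L, and the smallest ones, the Kolmogorov scale η, to the Reynolds number Re, which measures how turbulent the flow is:

```latex
% Ratio of largest to smallest eddy size, and the resulting 3D mesh count
% needed to resolve every eddy in a direct numerical simulation.
\frac{L}{\eta} \sim Re^{3/4}
\quad\Longrightarrow\quad
N_{\mathrm{points}} \sim \left(\frac{L}{\eta}\right)^{3} \sim Re^{9/4}
```

At Re = 10^6, a moderate value for industrial flows, this already demands on the order of 10^13 to 10^14 mesh points, which is why only idealized academic flows can be fully resolved.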


Simulation of turbulent flows on a marine turbine.

Along with using these simulation tools, you work on improving them. Are the two related?

GB: They are two complementary approaches. The idea for that part of my research is to accept that we don’t have the computing power to simulate the Navier-Stokes equation in realistic configurations. So the question I ask myself is: how can this equation be modified so that it becomes possible to solve with our current computers, while ensuring that the prediction remains reliable? The approach is to resolve only the large eddies. And since we don’t have the power to make a mesh fine enough for the small eddies, we look for physical terms, mathematical expressions, which replace the influence of the small eddies on the big ones. That means that the small eddies are absent from this modeling, but their overall contribution to the flow dynamics is taken into account. This helps us improve simulation tools by making them able to address flows in realistic conditions.
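What the researcher describes corresponds to large-eddy simulation (LES). Formally, the velocity field is filtered at the mesh scale, and the effect of the unresolved small eddies appears as an extra stress term that must be modeled. One classical closure, the Smagorinsky model, is given here as an illustration rather than as the specific model used in this work:

```latex
% Filtered momentum equation: bars denote the mesh-scale filter, and
% tau_ij is the subgrid stress carrying the effect of unresolved eddies.
\frac{\partial \bar{u}_i}{\partial t}
  + \frac{\partial (\bar{u}_i\,\bar{u}_j)}{\partial x_j}
  = -\frac{1}{\rho}\,\frac{\partial \bar{p}}{\partial x_i}
  + \nu\,\nabla^{2}\bar{u}_i
  - \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j

% Smagorinsky closure: the subgrid stress acts as an eddy viscosity nu_t
% built from the mesh size Delta and the resolved strain rate S-bar.
\tau_{ij} - \tfrac{1}{3}\,\delta_{ij}\,\tau_{kk}
  = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = (C_s\,\Delta)^{2}\,\lvert\bar{S}\rvert
```

The modeling research described in the interview is precisely about finding such terms, so that the small eddies' overall contribution is captured without ever resolving them on the mesh.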

Are these digital tools you’re developing used solely by researchers?

GB: I seek to carry out research that is both fundamental and application-oriented. For example, we worked with Hydroquest on the performance of water current turbines used to generate electricity. The simulations we carried out made it possible to assess the performance loss due to the support structures, which do not contribute to capturing the energy from the flow. Our research led to patents for new designs, with a 50% increase in yield.

More generally, do energy industry players realize how important it is to understand turbulent flows in order to make their infrastructures more efficient?

GB: Of course, and we have a number of partners who illustrate industry’s interest in our research. For example, we’ve adopted the same approach to improve the design of floating wind turbines. We’re also working with General Electric on hydroelectric dam turbines. These hydraulic turbines are increasingly operated far from their optimal operating point, in order to mitigate the intermittency of renewable solar or wind energy. In these conditions, hydrodynamic instabilities develop, which has a significant effect on the machines’ performance. So we’re trying to optimize the operation of these turbines to limit yield loss.

What scientific challenges do you currently face as you continue your efforts to improve simulations and our understanding of turbulent flows?

GB: At the technical level, we’re trying to improve our simulation codes to take full advantage of advances in supercomputers. We’re also trying to improve our numerical methods and models to increase our predictive capacity. For example, we’re now trying to integrate learning tools to avoid simulating small eddies and save computing time. I’ve started working with Ronan Fablet, a researcher at IMT Atlantique, on precisely this topic. Then there’s the huge challenge of ensuring the reliability of the simulations carried out. As it stands now, if you give a simulation code to three engineers, you’ll end up with three different models. This is because the tools aren’t objective: a lot depends on the individuals using them. So we’re working on mesh and simulation criteria that are objective. This should eventually make it possible for industry players and researchers to work from the same foundations, and to better understand one another when discussing turbulent flows.

 


Véronique Bellon-Maurel: from infrared spectroscopy to digital agriculture

Measuring and quantifying have informed Véronique Bellon-Maurel’s entire scientific career. A pioneer in near infrared spectroscopy, the researcher’s work has ranged from analyzing fruit to digital agriculture. Over the course of her fundamental research, Véronique Bellon-Maurel has contributed to the optimization of many industrial processes. She is now the Director of #DigitAg, a multi-partner Convergence Lab, and is the winner of the 2019 IMT-Académie des Sciences Grand Prix. In this wide-ranging interview, she retraces the major steps of her career and discusses her seminal work.   

 

You began your research career by working with fruit. What did this research involve?

Véronique Bellon-Maurel: My thesis dealt with the issue of measuring the taste of fruit in sorting facilities. I had to meet industrial requirements, particularly in terms of speed: three pieces of fruit per second! The best approach was to use near infrared spectroscopy to measure the sugar level, which is indicative of taste. But when I was beginning my thesis in the late 1980s, it took spectrometers one to two minutes to scan a piece of fruit. I suggested working with very near infrared, meaning a different type of radiation than the infrared that had been used up to then, which made it possible to use new types of detectors that were very fast and inexpensive.

So that’s when you started working on near infrared spectroscopy (NIRS), which went on to become your specialization. Could you tell us what’s behind this technique with such a complex name?

VBM: Near infrared spectroscopy (NIRS) is a method for analyzing materials. It provides a simple way to obtain information about the chemical and physical characteristics of an object by illuminating it with infrared light, which will pass through the object and become charged with information. For example, when you place your finger on your phone’s flashlight, you’ll see a red light shining through it. This light is red because the hemoglobin has absorbed all the other colors of the original light. So this gives you information about the material the light has passed through. NIRS is the same thing, except that we use particular radiation with wavelengths that are located just beyond the visible spectrum.
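The chemical side of such measurements is conventionally described by the Beer-Lambert law, a simplification that ignores the scattering effects discussed later in the interview, but which shows why an absorption reading can be turned into a concentration, such as a fruit's sugar level:

```latex
% Beer-Lambert law: absorbance grows linearly with concentration.
A \;=\; \log_{10}\!\frac{I_0}{I} \;=\; \varepsilon\,\ell\,c
```

Here I_0 is the incident light intensity, I the transmitted intensity, ε the molar absorptivity of the compound at the chosen wavelength, ℓ the optical path length, and c the concentration of the absorbing species.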

Out of all the methods for analyzing materials, what makes NIRS unique?

VBM: Near infrared waves pass through materials easily. Much more easily than “traditional” infrared waves, which are called “mid-infrared.” They are produced by simple sources such as sunlight or halogen lamps. The technique is therefore readily available and is not harmful: it is used on babies’ skulls to assess the oxygen saturation of their brains! But when I was starting my career, there were major drawbacks to NIRS. The signal we obtain is extremely cluttered because it contains information about both the physical and chemical components of the object.

And what is hiding behind this “cluttered signal”?

VBM: In concrete terms, you obtain hill-shaped curves and the shape of these curves depends on both the object’s chemical composition and its physical characteristics. You’ll get a huge hill that is characteristic of water. And the signature peak of sugar, which allows you to calculate a fruit’s sugar level, is hidden behind it. That’s the chemical component of the spectrum obtained. But the size of the hills also depends on the physical characteristics of your material, such as the size of the particles or cells that make it up, physical interfaces — cell walls, corpuscles — the presence of air etc. Extracting solely the information we’re interested in is a real challenge!

Near infrared spectra of apples.

 

One of your earliest significant findings for NIRS was precisely that – separating the physical component from the chemical component on a spectrum. How did you do that?

VBM: The main issue at the beginning was to get away from the physical component, which can be quite a nuisance. For example, light passes through water, but not through the foam on the water, which we see as white, even though they are the same molecules! Depending on whether or not the light passes through foam, the observation, and therefore the spectrum, will change completely. Fabien Chauchard was the first PhD student with whom I worked on this problem. To better understand this optical phenomenon, which is called scattering, he went to the Lund Laser Center in Sweden. They have highly specialized time-of-flight cameras, which operate at a very high speed and are able to capture light “in flight.” We send photons onto a fruit over an extremely short period of time and recover them as they come out, since not all of them come out at the same time. In our experiments, with a transmitter and a receiver placed on a fruit 6 millimeters apart, certain photons had travelled over 20 centimeters by the time they came out! They had been reflected, refracted, diffracted etc. inside the fruit. They hadn’t travelled in a straight line at all. This gave rise to an innovation, spatially resolved spectroscopy (SRS), developed by Indatech, the company Fabien Chauchard started after completing his PhD.

We looked for other optical arrangements for separating the “chemical” component from the “physical” component. Another PhD student, Alexia Gobrecht, with whom I worked on soil, came up with the idea of using polarized near infrared light. If the photons penetrate the soil, they lose their polarization; those that have only travelled along the surface keep it. By differentiating between the two, we recover spectra that depend only on the chemical component. This research on separating chemical and physical components was continued in the laboratory, even after I stopped working on it. Today, my colleagues are very good at identifying the aspects of a spectrum that relate to the physical component and those that relate to the chemical component. And it turns out that this physical component is useful! And to think that twenty years ago, our main focus was to get rid of it.
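As a purely software-side illustration of the same goal (suppressing the physical component so that the chemical signal can be compared across measurements), here is a minimal sketch of Standard Normal Variate (SNV) correction, a classic chemometric pre-treatment. It is not the optical technique described in the interview, and the synthetic spectra are invented for the example:

```python
import numpy as np

# Synthetic example: one "chemical" absorption peak measured under different
# amounts of multiplicative scatter and additive baseline (stand-ins for the
# physical component: particle size, interfaces, foam...). All values invented.
wavelengths = np.linspace(1100, 2500, 200)          # nm, a typical NIR range
peak = np.exp(-((wavelengths - 1930) / 60.0) ** 2)  # a water-like absorption band

rng = np.random.default_rng(0)
scatter_gain = rng.uniform(0.5, 2.0, size=(5, 1))   # multiplicative (physical) effect
baseline = rng.uniform(0.0, 0.3, size=(5, 1))       # additive offset
spectra = scatter_gain * peak + baseline            # five distorted measurements

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum individually,
    which cancels additive offsets and multiplicative scatter effects."""
    s = np.asarray(spectra, dtype=float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

corrected = snv(spectra)
# After correction, all five measurements collapse onto the same curve:
print(np.allclose(corrected, corrected[0]))  # True
```

SNV only handles the simplest scatter distortions; the hardware approaches described above (time-of-flight and polarized light) attack the physics of the problem directly.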

After this research, you transitioned from studying fruit to studying waste. Why did you change your area of application?

VBM: I’d been working with the company Pellenc SA on fruit sorting since around 1995, and then on detectors for grape ripeness. Over time, Pellenc moved into waste characterization for sorting purposes, based on the infrared knowledge developed through fruit sorting. They therefore called on us with a new speed requirement, but this one was much tougher: a conveyor belt moves at several meters per second. In reality, the areas of application for my research were already varied. In 1994, while I was still working on fruit with Pellenc, I was also carrying out projects on biodegradable plastics. NIRS made it possible to provide quality measurements for a wide range of industrial processes. I was Ms. “Infrared sensors”!

 

“I was Ms. ‘Infrared sensors’!”
– Véronique Bellon-Maurel

 

Your work on plastics was among the first in the scientific community concerning biodegradability. What were your contributions in this area?

VBM: 1990 was the very beginning of biodegradable plastics. Our question was whether we could measure a plastic’s biodegradability in order to say for sure: “this plastic is truly biodegradable.” And to do so as quickly as possible, so why not use NIRS? But first, we had to define the notion of biodegradability with a laboratory test. For 40 days, the plastics were put in reactors in contact with microorganisms, and we measured their degradation. We were also trying to determine whether this test was representative of biodegradability in real conditions, in the soil. We buried hundreds of samples in different plots of land in various regions and dug them up every six months to compare real biodegradation with biodegradation in the laboratory. We wanted to find out whether the NIRS measurement could achieve the same result, namely estimating the degradation kinetics of a biodegradable plastic, and it worked. Ultimately, this benchmark research on the biodegradability of plastics contributed to the industrial production and deployment of the biodegradable plastics that are now found in supermarkets.

For that research, was your focus still on NIRS?

VBM: The crux of my research at that time was the rapid, non-destructive characterization, physical or chemical, of products. NIRS was a good tool for this. We used it again after that on dehydrated household waste in order to assess its anaerobic digestion potential. With the environmental biotechnology laboratory in Narbonne and IMT Mines Alès, we developed a “flash” method using NIRS to quickly determine the quantity of bio-methane that waste can release. This research was subsequently transferred to the Ondalys company, created by Sylvie Roussel, one of my former PhD students. My colleague Jean-Michel Roger is still working with them to do the same thing with raw waste, which is more difficult.

So you gradually moved from the agri-food industry to environmental issues?

VBM: I did, but it wasn’t just a matter of switching topics; it also involved a higher degree of complexity. In fruit, composition is constrained by genetics: each component can vary within a known range. With waste, that isn’t the case! This made environmental metrology more interesting to me than metrology for the food industry. And my work became even more complex when I started working on soil. I wondered whether it would be possible to easily measure the carbon content of soil. This took me to Australia, to a specialized laboratory at the University of Sydney. To my mind, all this different research is based on the same philosophy: if you want to improve something, you have to measure it!

So you no longer worked with NIRS after that time? 

VBM: A little less, since I moved from sensors to assessment. But even that was a sort of continuation: when sensors were no longer enough, how could we make measurements? We had to develop assessment methods. It’s all very well to measure the biodegradability of a plastic, but is that enough to determine whether that biodegradable plastic has a low environmental impact? No, it isn’t: the entire system must be analyzed. I started working on life-cycle analysis (LCA) in Australia after realizing that LCA methods were not suited to agriculture: they did not account for water use or for land use. Based on this observation, we improved the LCA framework to develop the concept of a regional LCA, which didn’t exist at the time, allowing us to make an environmental assessment of a region and compare scenarios for how the region might evolve. What I found really interesting in this work was determining how to use data from information systems and sensors to build the most reliable and reproducible model possible. I wanted the assessments to be as accurate as possible. This is what led me to my current field of research: digital agriculture.

Read more on I’MTech: The many layers of our environmental impact

In 2017 you founded #DigitAg, an institute dedicated to this topic. What research is carried out there?

VBM: The “Agriculture – Innovation 2025” report submitted to the French government in 2015 expressed the need to structure French research on digital agriculture. We took advantage of the call for Convergence Institutes to found #DigitAg, the Digital Agriculture Convergence Lab. It’s one of ten institutes funded by the Investments in the Future program, all created to carry out interdisciplinary research on a major emerging issue. At #DigitAg, we draw on engineering sciences, digital technology, biology, agronomy, economics, social sciences, humanities, management etc. Our aim is to establish the knowledge bases needed for digital agriculture to develop in a harmonious way. The challenge is to develop technologies, but also to anticipate how they will be used and how such uses will transform agriculture, so as to encourage ethical uses and prevent misuse. To this end, I’ve also set up a living lab, Occitanum (for Occitanie Digital Agroecology), set to start in mid-2020. The lab will bring together stakeholders to assess the use value of different technologies and to understand innovation processes. It’s a different way of carrying out research and innovation, one that incorporates the human dimension.

Fukushima: 8 years on, what has changed in France?

Fukushima was the most devastating nuclear disaster since Chernobyl. The 1986 disaster led to radical changes in international nuclear governance, but has the Japanese catastrophe had the same effect? This is what the AGORAS project has been trying to find out. IMT Atlantique, the IRSN, Mines ParisTech, Orano and SciencesPo are all working on the AGORAS project, which aims to understand the impact of Fukushima on France’s nuclear industry. Stéphanie Tillement, a sociologist at IMT Atlantique, explains the results of the project, which is coming to an end after six years of research.

 

Why do we need to know about the consequences of a Japanese nuclear incident in France?

Stéphanie Tillement: Fukushima was not just a shock for Japan. Of course, the event affected every country where nuclear energy is an important part of energy production, such as in Europe, North America and Russia; but it also affected less nuclearized countries. Fukushima called into question the safety, security and reliability of nuclear power plants. Groups that are strongly involved in the industry, such as nuclear operators, counter-experts, associations and politicians, were all affected by the event. We therefore expected Fukushima to have a strong impact on nuclear governance. There is also another, more historical, reason: both the Chernobyl and Three Mile Island accidents had an impact on the organization of the nuclear industry. So Fukushima could be part of this trend.

How did Chernobyl and Three Mile Island impact the industry?

ST: The consequences of nuclear disasters are generally felt 10 to 20 years after the event itself. In France, Chernobyl contributed to the 2006 French Nuclear Safety and Transparency Act, which marked a major change in the nuclear risk governance system. This law notably led to the creation of the French Nuclear Safety Authority, ASN. A few years earlier, the French Radioprotection and Nuclear Safety Institute, IRSN, had been created. The 2006 law still regulates the French nuclear industry today. The Three Mile Island disaster caused the industry to question people’s role in these complex systems, notably in terms of human error. This led to major changes in human-machine interfaces within nuclear infrastructure, and in the understanding of human error mechanisms.

Has the Fukushima accident led to similar changes?

ST: The disaster was in 2011; it’s not even been 10 years since it happened. However, we can already see that Fukushima will probably not have the same effect in France as the other accidents did. Rather than criticizing the French system, industry analysis of Fukushima has emphasized the benefits of France’s current mode of governance. Although technical aspects have undergone changes, particularly through the Complementary Safety Assessments (CSAs), the relationships between nuclear operators, the ASN and the IRSN have not changed since Fukushima.

Why has this disaster not considerably affected the French mode of governance?

ST: At first, the French nuclear industry thought that a disaster like Fukushima was unlikely to happen in France, as the Japanese power plant was managed in a completely different way. In Japan, several operators share the country’s nuclear power plants. When analyzing the crisis management, the post-accident report showed that the regulator’s independence was not guaranteed, and that there was collusion between the government, the regulators and the operators. In France, the Nuclear Safety and Transparency Act strictly regulates relationships between industry players and guarantees the independence of the regulator. This is a strength of the French governance model that is recognized internationally. In addition, French nuclear power plants are managed by a single operator, EDF, which runs a standardized fleet of 58 reactors. The governance issues in Japan reassured French operators, as they confirmed that legally enforcing the independence of the regulatory authority was the right thing to do.

How did the anti-nuclear movement respond to this lack of change?

ST: During our investigations into Fukushima, we realized that the accident did not create any new anti-nuclear movements or opinions. Opposition already existed. There is no denying that the event gave anti-nuclear organizations, collectives and experts some material, but it did not radically change their way of campaigning or their arguments. This again shows how Fukushima did not cause major changes. The debate surrounding the nuclear industry is still structured in the same way as it was before the disaster.

Does that also mean that there have been no political consequences post-Fukushima?

ST: No, and that’s also one of the findings of the AGORAS project. Recent political decisions on nuclear sector strategy have mainly been made according to processes established before the Fukushima accident. For example, the cancellation of the ASTRID project was not due to a radical political shift on nuclear energy, but rather to economic arguments and a lack of political will to tackle the subject. Clearly, politicians do not want to take on these issues, as the decisions they make only have an impact 10, 20, or even 30 years later. This just doesn’t fit with their terms of office. Political turnover also means that very few politicians know the subject well enough, which raises questions about the government’s ability to get involved in nuclear policy, and therefore in energy policy.

Read on I’MTech: What nuclear risk governance exists in France?

Your work suggests that there has been almost no change in any aspect of nuclear governance

ST: The AGORAS project started by asking the question: did Fukushima cause a change in governance in the same way as the accidents that preceded it? If we look at it from this perspective, our studies say no, for all the reasons that I’ve already mentioned. However, we need to put this into context. Many things have changed, just not in the same radical way as they did after Chernobyl or Three Mile Island. Amongst these changes is the modification of certain technical specifications for infrastructure. For example, one of the reasons why ASN called for EDF to review the welding of its EPR reactor was technical requirements adopted following Fukushima. There have also been changes in crisis management and post-accident management.

How have we changed the way we would manage this type of disaster?

ST: Following Fukushima, a rapid response force for nuclear accidents (FARN) was created in France to manage the emergency phase of an accident. Changes were also made to the measures taken during a crisis, so that civil protection services and prefects can act more quickly. The most notable changes have been in the post-accident phase. Historically, accident preparation measures were mainly focused on the emergency phase, so the different roles are well defined in that phase. However, Fukushima showed that managing the aftermath of the crisis is just as important. What is unique about a nuclear accident is that it has extremely long-term consequences. In Fukushima, once the emergency phase was over, the organization became less defined: no one knew who was responsible for controlling food consumption, soil contamination, or urban planning. The local information commissions (CLIS) have therefore worked with nuclear operators to improve the post-accident phase in particular. But, once again, our research has shown that this work had started before the Fukushima disaster. The accident simply accelerated these processes and raised the profile of the issue.

Fukushima took place less than 10 years ago; do you plan on continuing your work and studying the implications of the disaster after 10 and 20 years have passed?

ST: We would particularly like to address other issues and to develop our results further. We have already carried out field research with ASN, IRSN, local information commissions, politicians, associations, and manufacturers such as Framatome and Orano. However, one of the biggest limitations of our work is that we could not work with EDF, which is a key player in nuclear risk governance. In the future, we want to be able to work with plant operators, so we can study the impact of an accident on their operations. Politicians’ understanding could also be improved: grasping their opinions on nuclear governance, and the nuclear strategy decision-making process, is a real challenge.


What is a lithium-ion battery?

The lithium-ion battery is one of the best-sellers of recent decades in microelectronics. It is present in most of the devices we use in our daily lives, from our mobile phones to electric cars. The 2019 Nobel Prize in Chemistry was awarded to John Goodenough, Stanley Whittingham and Akira Yoshino, in recognition of the early research that led to its development. In this new episode of our “What is…?” series, Thierry Djenizian explains the success of this component. Djenizian is a researcher in microelectronics at Mines Saint-Étienne and is working on the development of new generations of lithium-ion batteries.


Why is the lithium-ion battery so widely used?

Thierry Djenizian: It offers a very good balance between storage capacity and power output. To understand this, imagine two containers: a glass and a large bottle with a narrow neck. The glass contains little water but can be emptied very quickly. The bottle contains a lot of water but will be slower to empty. The electrons in a battery behave like the water in the containers: the glass is like a high-power battery with a low storage capacity, and the bottle like a low-power battery with a high storage capacity. Simply put, the lithium-ion battery is like a bottle with a wide neck.
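
To put rough numbers on the analogy: runtime is simply stored energy divided by the power drawn. Here is a minimal sketch, with purely illustrative values that are not from the interview:

```python
# Back-of-the-envelope illustration of the glass/bottle analogy.
# Runtime at constant power draw = stored energy / power.
# All numbers below are hypothetical, chosen only to show the trade-off.

def runtime_hours(capacity_wh: float, power_w: float) -> float:
    """Hours the battery lasts when drained at a constant power."""
    return capacity_wh / power_w

# "Glass": little stored energy, but it can be delivered very fast.
print(runtime_hours(capacity_wh=2.0, power_w=40.0))   # 0.05 h: empties quickly
# "Bottle": lots of stored energy, delivered slowly.
print(runtime_hours(capacity_wh=50.0, power_w=5.0))   # 10 h: empties slowly
```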

How does a lithium-ion battery work?

TD: The battery consists of two electrodes separated by a liquid called the electrolyte. One of the two electrodes is an alloy containing lithium. When you connect a device to a charged battery, the lithium spontaneously oxidizes and releases electrons – lithium is the chemical element that releases electrons most easily. The electrical current is produced by the electrons flowing between the two electrodes via an electrical circuit, while the lithium ions produced by the oxidation reaction migrate through the electrolyte to the second electrode.

The lithium ions are stored in the second electrode until it has no space left, or until the first electrode has released all its lithium atoms. The battery is then discharged, and you simply apply a current to force the reverse chemical reactions and make the ions migrate in the other direction, back to their original position. This is how lithium-ion technology works: the lithium ions are inserted into and extracted from the electrodes reversibly, depending on whether the battery is charging or discharging.
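
For readers who want the chemistry spelled out, here are the discharge half-reactions for the widespread graphite / lithium-cobalt-oxide pairing, the chemistry Yoshino commercialized; this is one common example, and other electrode materials follow the same pattern:

```latex
% Discharge half-reactions for a graphite / LiCoO2 cell (one common chemistry):
\mathrm{LiC_6 \;\longrightarrow\; C_6 + Li^{+} + e^{-}}
\quad \text{(negative electrode: lithium oxidizes, releasing an electron)}
\mathrm{CoO_2 + Li^{+} + e^{-} \;\longrightarrow\; LiCoO_2}
\quad \text{(positive electrode: the lithium ion is inserted)}
% Charging applies an external current to drive both reactions in reverse.
```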

What were the major milestones in the development of the lithium-ion battery?

TD: Whittingham discovered a high-potential material composed of titanium and sulfur capable of reacting with lithium reversibly, then Goodenough proposed the use of metal oxides. Yoshino marketed the first lithium-ion battery using graphite and a metal oxide as electrodes, which considerably reduced the size of the batteries.

What are the current scientific issues surrounding lithium-ion technology?

TD: One of the main trends is to replace the liquid electrolyte with a solid one. It is best to avoid flammable liquids, which also present risks of leakage, particularly in electronic devices. If the container is pierced, this can have irreversible consequences for the surrounding components. This is especially true for sensors used in medical applications in contact with the skin. Recently, for example, we developed a connected ocular lens with our colleagues from IMT Atlantique. The lithium-ion battery we used included a solid polymer-based electrolyte, because it would be unacceptable for the electrolyte to come into contact with the eye in the event of a problem. Solid electrolytes are not new; what is new is the research work to optimize them and make them compatible with what is expected of lithium-ion batteries today.

Are we already working on replacing the lithium-ion battery?

TD: Another promising approach is to replace the lithium with sodium. The two elements belong to the same family and have very similar properties. The difference is that lithium is extracted from mines at a very high environmental and social cost, and lithium resources are limited. Although lithium-ion batteries can reduce the use of fossil fuels, they become much less attractive if their production causes other environmental disasters. Sodium, by contrast, is naturally present in sea salt. It is therefore a virtually unlimited resource that can be extracted with a considerably lower impact.

Can we already do better than the lithium-ion battery for certain applications?

TD: It’s hard to say. We have to change the way we think about our relationship to energy. We used to solve everything with thermal energy; we cannot apply the same thinking to electric batteries. For example, we currently use lithium-ion button cells for the internal clocks of our computers. At this very low level of energy consumption, a button cell has a life span of several hundred years, while the computer will probably be replaced within ten years. A 1 mm² battery may be sufficient. The size of energy storage devices needs to be adjusted to suit our needs.
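
The order of magnitude is easy to check: lifetime is just capacity divided by current draw. Here is a quick sketch using assumed figures – a CR2032-class cell and an ultra-low-power clock circuit, neither taken from the interview:

```python
# Order-of-magnitude check on button-cell lifetime (all figures assumed,
# not from the interview): lifetime = capacity / current draw.

CAPACITY_MAH = 225.0   # typical CR2032-class button cell (assumption)
DRAW_MA = 0.0001       # ~0.1 uA standby draw of an ultra-low-power clock (assumption)

hours = CAPACITY_MAH / DRAW_MA   # 2,250,000 hours
years = hours / (24 * 365)       # ~257 years, ignoring self-discharge
print(f"Roughly {years:.0f} years of operation")
```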

Read on I’MTech: Towards a new generation of lithium batteries?

We also have to understand the characteristics we need. For some uses, a lithium-ion battery will be the most appropriate. For others, a battery with a greater storage capacity but a much lower output may be more suitable. For still others, it will be the opposite. When you use a drill, for example, it doesn’t take four hours to drill a hole, nor do you need a battery that stays charged for several days: you want a lot of power, but you don’t need much battery life. “Doing better” than the lithium-ion battery may simply mean doing things differently.

What does it mean to you to have a Nobel Prize awarded to a technology that is at the heart of your research?

TD: These are names that we often mention in our scientific publications, because they are the pioneers of the technologies we are working on. But beyond that, it is great to see a Nobel Prize awarded to research that means something to the general public. Everyone uses lithium-ion batteries on a daily basis, and people recognize the importance of this technology. It is nice to know that this Nobel Prize in Chemistry is understood by so many people.

healthcare

When engineering helps improve healthcare

Editorial.


Tomorrow’s medicine will be at least 4P: personalized, preventive, predictive, participative. ‘At least,’ because some would readily add “precise,” “proven” (evidence-based), “pathway-based,” etc. Beyond naming this type of medicine and determining the correct number of Ps, medicine is clearly undergoing a profound change, a transformation supported in part by engineering, which is bringing major innovations to the healthcare industry. New technologies, whether digital or materials-based, have led to advances in many areas of medicine.

Researchers at Mines Saint-Étienne’s Centre for Biomedical and Health Engineering (CIS) work in the heart of the Saint-Étienne hospital campus. They strive to improve health systems and technology, in daily contact with medical professionals. Jérémie Pourchez is developing an artificial lung model to study certain respiratory diseases and the toxicity of inhaled particles. David Marchat is developing a new generation of bone implants to facilitate bone regeneration in grafts. As such, they are directly addressing the needs for knowledge and tools expressed by practitioners.


Meanwhile, at IMT Lille Douai, Caroline Duc is developing an “artificial nose,” an electronic device that can monitor patients by analyzing their breath – an innovation which could eventually be used to help diagnose diseases.

Sometimes innovation comes from unexpected places. This is the case for a new type of liquid xenon scanner developed at IMT Atlantique: the physicists behind it drew the technology directly from their efforts to detect dark matter. In the future, we may be able to detect the smallest diseased cells based on our observations of the most distant galaxies!

To learn more about new healthcare technologies and how they are impacting patients and practitioners, I’MTech suggests a selection of our archives on this topic: