Photograph showing several concrete blocks

Improving the quality of concrete to optimize construction

Since the late 20th century, concrete has become the most widely used manufactured material in the world. Its popularity comes with recurring problems affecting its quality and durability, one of which arises when cement paste, one of the components of concrete, sweats. Mimoune Abadassi, a civil engineering PhD student at IMT Mines Alès, aims to solve this problem.

“When concrete is still fresh, the water inside rises to the surface and forms condensation,” explains Mimoune Abadassi, doctoral student in civil engineering at IMT Mines Alès. This phenomenon is called concrete sweating. “When this process takes place, some of the water does not reach the surface and remains trapped inside the concrete, which can weaken the structure,” adds the researcher, before specifying that “sweating does not only have negative effects on the concrete’s quality, as the water allows the material to be damp cured, preventing it from drying out and developing cracks that would reduce its durability”.

In his thesis, Abadassi studies the sweating of cement paste, one of the components of concrete alongside sand and gravel. In analyzing cement pastes prepared with varying amounts of water, the young researcher has observed that the more water is incorporated in the cement paste, the more it sweats. He has also looked into the effect of superplasticizers, chemical products that, when included in the cement paste, make it more liquid and malleable when fresh and more resilient when hardened. “When we increase the amount of superplasticizer, we observe that the cement paste sweats more as well,” indicates Abadassi. “This is explained by the fact that superplasticizers disperse suspended cement particles and encourage the water contained in clusters formed by these particles to be released,” he points out, before adding that “this phenomenon causes the volume of water in the mixture to increase, which increases the sweating of the cement paste”.

Research at the nanometric, microscopic and macroscopic level

By interfering with sweating, superplasticizers also affect the permeability of cement paste. To study this permeability in the fresh state, Abadassi used an oedometer, a device mainly used in the field of soil mechanics. An oedometer compresses a sample, extracts the water contained inside and measures its volume to determine how permeable the sample is: the larger the volume of water recovered, the more permeable the sample. In the case of cement paste, if it is too permeable, more water will enter, which reduces cohesion between aggregate particles and weakens the material’s structure.
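
For readers who want the quantitative picture: the relation below is the standard soil-mechanics convention for this kind of measurement and is given here purely as an illustration, not taken from the thesis itself. The flow of water through the compressed sample is assumed to follow Darcy's law,

$$ Q = k \, i \, A $$

where Q is the volume of water collected per unit time, A the cross-section of the sample, i the hydraulic gradient imposed by the oedometer's loading, and k the permeability (hydraulic conductivity). For a given gradient, a larger recovered volume of water therefore translates directly into a larger k.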

By varying certain parameters when preparing the cement paste, such as the amount of superplasticizer, Abadassi aims to observe the changes taking place within the paste, invisible to the naked eye. To do so, he uses a Turbiscan. This machine, generally used in the cosmetics industry, makes it possible to analyze particle dispersion and cluster structure in the near-infrared. By observing the sample at scales ranging from the nanometer to the millimeter, it is possible to identify the formation of flocs: groups of particles in suspension which adhere to one another and that, in the presence of superplasticizers, separate and release water into the cement paste mixture.

To understand the consequences of phenomena in cement paste at the microscopic and mesoscopic scale, Abadassi uses a scanning electron microscope. This method makes it possible to observe the paste’s microstructure and interfaces between aggregate particles at a nanometric and microscopic scale. “With this technique, I can visualize internal sweating, shown by the presence of water stuck between aggregate particles and not rising to the surface,” he explains. When concrete has hardened, a scanning microscope can be used to identify fissuring phenomena and cavity formation caused by the sweating paste.

Abadassi has also studied the effects of an essential stage in cement paste production: vibration. This process allows cement particles to be rearranged, leaving the smallest possible gaps between them and therefore making the paste more durable and compact. After vibrating the cement paste at various frequencies, Abadassi concluded that sweating is more likely at higher frequencies. “Vibrating cement particles in suspension will cause them to be rearranged, which will lead to the water contained in flocs being released,” he describes, adding that “the greater the vibration, the more the particles will rearrange and the more water will be released”.

Once these trials are finished, the concrete’s mechanical performance will be analyzed. One way this will be done is by exerting mechanical pressure on a sample of concrete and measuring its resistance to that pressure. The results obtained from this experiment will be connected with the microscope observations, Turbiscan tests and trials varying the parameters of the cement paste formula. All of Abadassi’s results will be used to create a range of formulas that can be utilized by concrete production companies. This will provide them with the optimal quantities of components, such as water and superplasticizers, to include when preparing cement for use in concrete. In this way, the quality and durability of the most widely used manufactured material in the world could be improved.

Rémy Fauvel

Cleaning up polluted tertiary wastewater from the agri-food industry with floating wetlands

In 2018, IMT Atlantique researchers launched the FloWAT project, based on a hydroponic system of floating wetlands. It aims to reduce polluting emissions from treated wastewater into the discharge site.

Claire Gérente, researcher at IMT Atlantique, has been coordinating the FloWAT decontamination project, funded by the French National Research Agency (ANR), since its creation. The main aim of the initiative is to provide complementary treatment for tertiary wastewater from the agri-food industry, using floating wetlands. Tertiary wastewater is effluent that undergoes a final phase in the water treatment process to eliminate residual pollutants. It is then drained into the discharge site, an aquatic ecosystem where treated wastewater is released.

These wetlands act as filters for particle and dissolved pollutants. They can easily be added to existing waste stabilization pond systems in order to further treat this water. One of this project’s objectives is to improve on conventional floating wetlands to increase phosphorus removal, or even collect it for reuse, thereby reducing the pressure on this non-renewable resource.

In this context, research is being conducted around the use of a particular material, cellular concrete, to allow phosphorus to be recovered. “Phosphorus removal is of great environmental interest, particularly as it reduces the eutrophication of natural water sources that are discharge sites for treated effluent,” states Gérente. Eutrophication is a process characterized by an increase in nitrogen and phosphorus concentration in water, leading to ecosystem disruption.

Floating wetlands: a nature-based solution

The floating wetland system involves covering an area of water, typically a pond, with plants placed on a floating bed, specifically sedges. The submerged roots act as filters, retaining the pollutants found in the water via various physical, chemical and biological processes. This mechanism is called phytopurification.

Floating wetlands are part of an approach known as nature-based solutions, whereby natural systems, less costly than conventional technologies, are implemented to respond to ecological challenges. To function efficiently, the most important thing is to “monitor that the plants are growing well, as they are the site of decontamination,” emphasizes Gérente.

In order to meet the project objectives, a pilot study was set up at an industrial abattoir and meat-processing site. After being biologically treated, real agri-food effluent is discharged into four pilot ponds, three of which are covered with floating wetlands of various sizes, and one of which is left uncovered as a control. The experimental site is entirely automated and can be controlled remotely to facilitate supervision.

Performance monitoring is undertaken for the treatment of organic matter, nitrogen, phosphorus and suspended matter. As well as data on the incoming and outgoing water quality, physico-chemical parameters and climate data are constantly monitored. The fate of pollutants in the different components of the treatment system will be determined by sampling and analyzing the plants, sediment and phosphorus-removal material.

These floating wetlands will be the first to be easy to dismantle and recycle, improved for phosphorus removal and even collection, as well as able to treat suspended matter, carbon pollution and nutrients.

Photograph of the experimental system

Improving compliance with regulation

In 1991, the French government established a limit on phosphorus levels to reduce water pollution, in order to preserve biodiversity and prevent algal bloom, which is when one or several algae species grow rapidly in an aquatic system.

The floating wetlands developed by IMT Atlantique researchers could allow these thresholds to be better respected, by improving capacities for water treatment. Furthermore, they are part of a circular economy approach, as beyond collecting phosphorus for reuse, the cellular concrete and polymers used as plant supports are recyclable or reusable.

Further reading on I’MTech: Circular economy, environmental assessment and environmental budgeting

To create these wetlands, you simply have to place the plants on the discharge ponds. This makes the technique cheap and easy to implement. However, while such systems integrate rather well into the landscape, they are not suitable for all environments. The climate in northern countries, for example, may slow down or impair how the plants function. Furthermore, results take longer to obtain with natural methods like floating wetlands than with conventional methods. Nearly 7,000 French agri-food companies have been identified as potential users of these floating wetlands. Nevertheless, the FloWAT coordinator reminds us that “this project is a feasibility study, our role is to evaluate the effectiveness of floating wetlands as a filtering system. We will have to wait until the project finishes in 2023 to find out if this promising treatment system is effective.”

Rémy Fauvel

David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since he completed his studies in 1997, Gesbert has been working on MIMO, a telecommunications system that was created in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).
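
As an illustration of why several antennas at each end help (the formulas are the textbook model, not something taken from the interview), a narrowband MIMO link with N_t transmit and N_r receive antennas is usually written as

$$ \mathbf{y} = \mathbf{H}\,\mathbf{x} + \mathbf{n}, \qquad C = \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\, \mathbf{H}\mathbf{H}^{\mathsf{H}} \right) \ \text{bits/s/Hz}, $$

where x is the vector of signals sent from the transmit antennas, H the matrix of channel gains between every transmit-receive antenna pair, n the noise and ρ the signal-to-noise ratio. In a rich scattering environment the capacity C grows roughly in proportion to the smaller of N_t and N_r, which is exactly the gain that spatial multiplexing exploits.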

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.
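
To make the principle concrete, here is a minimal numpy sketch of a generic OFDM modulator. It is written for illustration only and is not code from any of the systems mentioned; the sub-carrier count and cyclic-prefix length are arbitrary assumptions.

```python
import numpy as np

# Minimal OFDM round trip (illustrative sketch): one block of high-rate symbols
# is spread across N_SUB parallel low-rate sub-carriers with an inverse FFT,
# and a cyclic prefix is prepended to absorb multipath echoes.
N_SUB = 64    # number of sub-carriers (assumed value)
CP_LEN = 16   # cyclic prefix length (assumed value)

def ofdm_modulate(symbols: np.ndarray) -> np.ndarray:
    """Map one block of N_SUB complex symbols onto a time-domain OFDM symbol."""
    assert symbols.size == N_SUB
    time_signal = np.fft.ifft(symbols) * np.sqrt(N_SUB)          # parallel sub-carriers
    return np.concatenate([time_signal[-CP_LEN:], time_signal])  # prepend cyclic prefix

def ofdm_demodulate(rx_block: np.ndarray) -> np.ndarray:
    """Strip the cyclic prefix and recover the sub-carrier symbols with an FFT."""
    return np.fft.fft(rx_block[CP_LEN:]) / np.sqrt(N_SUB)

# Round trip on random QPSK symbols: the receiver recovers the transmitted block.
bits = np.random.randint(0, 2, (N_SUB, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
assert np.allclose(qpsk, ofdm_demodulate(ofdm_modulate(qpsk)))
```

In a real system each sub-carrier is then equalized independently, which is what makes the combination with MIMO so attractive over frequency-selective radio channels.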

In 2001, Gesbert moved to Norway, where he taught for two years as an adjunct professor in the IT department at the University of Oslo. One year later, he published an article showing that complex propagation environments favor the performance of MIMO. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment in which there are many buildings, cars and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.

In 2003, he joined EURECOM, where he became a professor and five years later, head of the Mobile Communications department. There, he has continued his work aiming to improve MIMO. His research has shown him that base stations — also known as relay antennas — could be useful to improve the performance of this mechanism. By using antennas from multiple relay stations far apart from each other, it would be possible to make them work together and produce a giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still being performed at present to make this mechanism usable.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, which takes its name from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to place themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has occurred and the network infrastructure has been destroyed.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes performed by the receivers or base stations that are meant to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct flaws in receivers with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that, to cooperate, agents like radio receivers or relay stations make decisions based on their own sound data, which they sometimes have to set aside to let themselves be guided by decisions from agents with access to better information than them. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient by reducing the number of times that transmissions take place, by restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.

Rémy Fauvel

Zero-click attacks: spying in the smartphone era

Zero-click attacks exploit security breaches in smartphones in order to hack into a target’s device without the target having to do anything. They are now a threat to everyone, from governments to medium-sized companies.

“Zero-click attacks are not a new phenomenon”, says Hervé Debar, a researcher in cybersecurity at Télécom SudParis. “In 1988 the first computer worm, named the “Morris worm” after its creator, infected 6,000 computers in the USA (10% of the internet at the time) without any human intervention, causing damage estimated at several million dollars.” By connecting to mail servers which were open access by necessity, this program exploited weaknesses in server software, infecting it. It could be argued that this was one of the very first zero-click attacks, a type of attack which exploits security breaches in target devices without the victim having to do anything.

There are two reasons why this type of attack is now so easy to carry out on smartphones. Firstly, the protective mechanisms for these devices are not as effective as those on computers. Secondly, displaying videos and images requires more complex processing, meaning that the code enabling such content to be displayed is often more complex than on computers. This makes it easier for attackers to hack in and exploit security breaches in order to spread malware. As Hervé Debar explains, “attackers must, however, know certain information about their target – such as their mobile number or their IP address – in order to identify their phone. This is a targeted type of attack which is difficult to deploy on a larger scale as this would require collecting data on many users.”

Zero-click attacks tend to follow the same pattern: the attacker sends a message to their target containing specific content which is received in an app. This may be a sound file, an image, a video, a gif or a pdf file containing malware. Once the message has been received, the recipient’s phone processes it using apps to display the content without the user having to click on it. While these applications are running, the attacker exploits breaches in their code in order to run programs resulting in spy software being installed on the target device, without the victim knowing.

Zero-days: vulnerabilities with economic and political impact

Breaches exploited in zero-click attacks are known as “zero-days”, vulnerabilities which are unknown to the manufacturer or which have yet to be corrected. There is now a global market for the detection of these vulnerabilities: the zero-day market, which is made up of companies looking for hackers to identify these breaches. Once the breach has been identified, the hacker will produce a document explaining it in detail, with the company that commissioned it often paying several thousand dollars to get their hands on it. In some cases the manufacturer themselves might buy such a document in an attempt to rectify the breach. But it may also be bought by another company looking to sell the breach to their clients – often governments – for espionage purposes. According to Hervé Debar, between 100 and 1,000 vulnerabilities are detected on devices each year.

Zero-click attacks are regularly carried out for theft or espionage purposes. For theft, the aim may be to validate a payment made by the victim in order to divert their money. For espionage, the goal might be to recover sensitive data about a specific individual. The most recent example was the Pegasus affair, which affected around 50,000 potential victims, including politicians and media figures. “These attacks may be a way of uncovering secret information about industrial, economic or political projects. Whoever is responsible is able to conceal themselves and to make it difficult to identify the origin of the attack, which is why they’re so dangerous”, stresses Hervé Debar. But it is not only governments and multinationals who are affected by this sort of attack – small and medium-sized companies are too. They are particularly vulnerable in that, owing to a lack of financial resources, they don’t have IT professionals running their systems, unlike major organisations.

Also read on I’MTech: Cybersecurity: high costs for companies

More secure computer languages

But there are things that can be done to limit the risk of such attacks affecting you. According to Hervé Debar, “the first thing to do is use your common sense. Too many people fall into the trap of opening suspicious messages.” Personal phones should also be kept separate from work phones, as this prevents attackers from gaining access to all of a victim’s data. Another handy tip is to back up your files onto an external hard drive. “By transferring your data onto an external hard drive, it won’t only be available on the network. In the event of an attack, you will safely be able to recover your data, provided you disconnected the disc after backing up.” To protect against attacks, organisations may also choose to set up intrusion detection systems (IDS) or intrusion prevention systems (IPS) in order to monitor flows of data and access to information.

In the fight against cyber-attacks, researchers have developed alternative computing languages. Ada, a programming language which dates back to the 1980s, is now used in the aeronautic industry, in railways and in aviation safety. For the past ten years or so the computing language Rust has been used to solve problems linked to the management of buffer memory which were often encountered with C and C++, languages widely used in the development of operating systems. “These new languages are better controlled than traditional programming languages. They feature automatic protective mechanisms to prevent errors committed by programmers, eliminating certain breaches and certain types of attack.” However, “writing programs takes time, requiring significant financial investment on the part of companies, which they aren’t always willing to provide. This can result in programming errors leading to breaches which can be exploited by malicious individuals or organisations.”

Rémy Fauvel

SONATA: an approach to make data sound better

Telecommunications must transport data at an ever-faster pace to meet the needs of current technologies. But this data can be voluminous and difficult to transport at times. Communication channels are congested and transmission limits are reached quickly. Marios Kountouris, a telecommunications researcher at EURECOM, has recently received ERC funding to launch his SONATA project. It aims to shift the paradigm for processing information to speed up its transmission and make future networks more efficient.

“We are close to the fundamental limit for transmitting data from one point to another,” explains Marios Kountouris, a telecommunications researcher at EURECOM. Most current research in this discipline focuses on how to organize complex networks and on improving the algorithms that optimize these networks. Few projects, however, focus on improving the transfer of data between transmitters and receivers. This is precisely the focus of Marios Kountouris’ SONATA project, funded by a European ERC consolidator grant.

“Telecommunications are generally based on Shannon’s information theory, which was established in the 1950s,” says the researcher. In this theory, a transmitter simply sends information through a transmission channel, which modifies it and transfers it to a receiver that then reconstructs it. The main obstacle to get around is the noise accompanying the signal when it passes through the transmission channel. This constraint can be overcome by algorithm-based signal processing and by increasing throughput. “This usually takes place in the same way, regardless of the message being transmitted. Back in the early days, and until recently, this was the right approach,” says the researcher.

Read more on I’MTech: Claude Shannon, a legacy transcending digital technology

Transmission speed for real-time communication

Today, there is an increasing amount of communication between machines that reason in milliseconds. “Certain messages must be transmitted quickly or they’re useless,” says Marios Kountouris. For example, in the development of autonomous cars, if the message collected relates to the detection of a pedestrian on the road so as to make the vehicle brake, it is only useful for a very short period of time. “This is what we call the age, or freshness of information, which is a very important parameter in some cases,” explains Marios Kountouris.
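
The research literature usually formalizes this freshness with the “age of information” metric; the definition below is the standard one and is given here as background rather than quoted from Kountouris:

$$ \Delta(t) = t - u(t), $$

where u(t) is the generation time of the most recently received update. The age grows linearly while no new message arrives and drops back to the transmission delay of a message the instant it is delivered, so keeping Δ(t) small is a different goal from simply maximizing throughput.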

Yet most transmission and reconstruction is slowed down by surplus information accompanying the message. In the previous example, if the system for detecting pedestrians is a camera that captures images with details about all the surrounding objects, a great deal of the information in the transmission and processing will not contribute to the system’s purpose. For the researcher, “the sampling, transmission and reconstruction of the message must no longer be carried out independently of one another. If excess, redundant or useless data accompanies this process, there can be communication bottlenecks and security problems.”

The semantics of messages

For real-time communication, the semantics of the message — its meaning and usefulness — take on particular importance. Semantics make it possible to take into account the attributes of the message and adjust the format of its transmission depending on its purpose. For example, if a temperature sensor is meant to activate the heating system automatically when the room temperature is below 18°C, the attribute of the transmitted message is simply a binary breakdown of temperature: above or below 18°C.
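
A minimal sketch of that thermostat example is given below, for illustration only: the threshold and the interface are assumptions, not details of the SONATA project. Instead of streaming raw temperature samples, the sensor transmits the single attribute the receiver acts on, and only when that attribute changes.

```python
# Semantic filtering sketch: transmit "above or below 18 °C", and only on a change.
THRESHOLD_C = 18.0  # assumed threshold from the example above

def semantic_filter(samples):
    """Yield (sample_index, heating_needed) only when the binary state flips."""
    last_state = None
    for i, temperature in enumerate(samples):
        state = temperature < THRESHOLD_C   # the only attribute the receiver needs
        if state != last_state:
            yield i, state                  # one bit of genuinely useful information
            last_state = state

readings = [19.2, 18.6, 17.9, 17.5, 18.3, 18.9]
for index, heating_needed in semantic_filter(readings):
    print(f"sample {index}: turn heating {'on' if heating_needed else 'off'}")
```

Only three of the six samples trigger a message, which is the kind of reduction in transported data that a semantic approach aims for.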

Through the SONATA project, Marios Kountouris seeks to develop a new communication paradigm that takes the semantic value of information into account. This would make it possible to synchronize different types of information collected at the same time through various samples and make more optimal decisions. It would also significantly reduce the volume of transported data as well as the associated energy and resources required.

“The success of this project depends on establishing semantic metrics that are concrete, informative and traceable,” explains the researcher. Establishing the semantics of a message means preprocessing sampling by the transmitter depending on how it is used by the receiver. The aim is therefore to identify the most important, meaningful or useful information in order to determine the qualifying attributes of the message. “Various semantic attributes can be taken into account to obtain a conformal representation of the information, but they must be determined in advance, and we have to be careful not to implement too many attributes at once,” he says.

The goal, then, is to build communication networks with key stages for processing the semantics associated with information. First, semantic filters must be used to avoid unnecessary redundancy when collecting information. Then, semantic preprocessing must be carried out in order to associate the data with its purposes. Signal reconstruction by the receiver would also be adapted to its purposes. All this would be semantically controlled, making it possible to orchestrate the information collected in an agile way and reuse it efficiently, which is especially important when networks become more complex.

This is a new approach from a structural perspective and would help create links between communication theory, sampling and optimal decision-making. ERC consolidator grants fund high-risk, high-reward projects that aim to revolutionize a field, which is why SONATA has received this funding. “The sonata was the most sophisticated form of classical music and was pivotal to its development. I hope that SONATA will be a major step forward in telecommunications optimization,” concludes Marios Kountouris.

By Antonin Counillon

Facial recognition: what legal protection exists?

Over the past decade, the use of facial recognition has developed rapidly for both security and convenience purposes. This biometrics-based technology is used for everything from video surveillance to border controls and unlocking digital devices. This type of data is highly sensitive and is subject to a specific legal framework. Claire Levallois-Barth, a legal researcher at Télécom Paris and coordinator of the Values and Policies of Personal Information Chair at IMT, provides the context for protecting this data.

What laws govern the use of biometric data?

Claire Levallois-Barth: Biometric data “for the purpose of uniquely identifying a natural person” is part of a specific category defined by two texts adopted by the 27 Member States of the European Union in April 2016: the General Data Protection Regulation (GDPR) and the Directive for Police and Criminal Justice Authorities. This category of data is considered highly sensitive.

The GDPR applies to all processing of personal data in both private and public sectors.

The Directive for Police and Criminal Justice Authorities pertains to processing carried out for purposes of prevention, detection, investigation, and prosecution of criminal offences or the execution of criminal penalties by competent authorities (judicial authorities, police, other law enforcement authorities). It specifies that biometric data must only be used in cases of absolute necessity and must be subject to provision of appropriate guarantees for the rights and freedoms of the data subject. This type of processing may only be carried out in three cases: when authorized by Union law or Member State law, when related to data manifestly made public by the data subject, or to protect the vital interests of the data subject or another person.

What principles has the GDPR established?

CLB: The basic principle is that collecting and processing biometric data is prohibited due to significant risks of violating basic rights and freedoms, including the freedom to come and go anonymously. There are, however, a series of exceptions. The processing must fall under one of these exceptions (express consent from the data subject, protection of his or her vital interests, processing carried out for reasons of substantial public interest) and respect all of the obligations established by the GDPR. The key principle is that the use of biometric data must be strictly necessary and proportionate to the objective pursued. In certain cases, it is therefore necessary to obtain the individual’s consent, even when the facial recognition system is being used on an experimental basis. There is also the minimization principle, which systematically asks, “is there any less intrusive way of achieving the same goal?” In any case, organizations must carry out an impact assessment on people’s rights and freedoms.

What do the principles of proportionality and minimization look like in practice?

CLB: One example is when the Provence-Alpes-Côte d’Azur region wanted to experiment with facial recognition at two high schools in Nice and Marseille. The CNIL ruled that the system involving students, most of whom were minors, for the sole purpose of streamlining and securing access, was not proportionate to these purposes. Hiring more guards or implementing a badge system would offer a sufficient solution in this case.

Which uses of facial recognition have the greatest legal constraints?

CLB: Facial recognition can be used for various purposes. The purpose of authentication is to verify whether someone is who he or she claims to be. It is implemented in technology for airport security and used to unlock your smartphone. These types of applications do not pose many legal problems. The user is generally aware of the data processing that occurs, and the data is usually processed locally, by a card for example.

On the other hand, identification—which is used to identify one person within a group—requires the creation of a database that catalogs individuals. The size of this database depends on the specific purposes. However, there is a general tendency towards increasing the number of individuals. For example, identification can be used to find wanted or missing persons, or to recognize friends on a social network. It requires increased vigilance due to the danger of becoming extremely intrusive.

Lastly, facial recognition provides a means of individualizing a person. There is no need to identify the individual: the goal is “simply” to follow people’s movements through the store to assess their customer journey or analyze their emotions in response to an advertisement or while waiting at the checkout. The main argument advertisers use to justify this practice is that the data is quickly anonymized, and no record is kept of the person’s face. Here, as in the case of identification, facial recognition usually occurs without the person’s knowledge.

How can we make sure that data is also protected internationally?

CLB: The GDPR applies in the 27 Member States of the European Union which have agreed on common rules. Data can, however, be collected by non-European companies. This is the case for photos of European citizens collected from social networks and news sites. This is one of the typical activities of the company Clearview AI, which has already established a private database of 3 billion photos.

The GDPR lays down a specific rule for personal data leaving European Union territory: it may only be exported to a country ensuring a level of protection deemed comparable to that of the European Union. Yet few countries meet this condition. A first option is therefore for the data importer and exporter to enter into a contract or adopt binding corporate rules. The other option, for data stored on servers on U.S. territory, was to build on the Privacy Shield agreement concluded between the Federal Trade Commission (FTC) and the European Commission. However, this agreement was invalidated by the Court of Justice of the European Union in the summer of 2020. We are currently in the midst of a legal and political battle. And the battle is complicated since data becomes much more difficult to control once it is exported. This explains why certain stakeholders, such as Thierry Breton (the current European Commissioner for Internal Market), have emphasized the importance of fighting to ensure European data is stored and processed in Europe, on Europe’s own terms.

Despite the risks and ethical issues involved, is facial recognition sometimes seen as a solution for security problems?

CLB: It can in fact be a great help when implemented in a way that respects our fundamental values. It depends on the specific terms. For example, if law enforcement officers know that a protest will be held, potentially involving armed individuals, at a specific time and place, facial recognition can prove very useful at that specific time and place. However, it is a completely different scenario if it is used constantly for an entire region and entire population in order to prevent shoplifting.

This summer, the London Court of Appeal ruled that an automatic facial recognition system used by Welsh police was unlawful. The ruling emphasized a lack of clear guidance on who could be monitored and accused law enforcement officers of failing to sufficiently verify whether the software used had any racist or sexist bias. Technological solutionism, a school of thought emphasizing new technology’s capacity to solve the world’s major problems, has its limitations.

Is there a real risk of this technology being misused in our society?

CLB: A key question we should ask is whether there is a gradual shift underway, caused by an accumulation of technology deployed at every turn. We know that video-surveillance cameras are installed on public roads, yet we do not know about additional features that are gradually added, such as facial recognition or behavioral recognition. The European Convention on Human Rights, the GDPR, the Directive for Police and Criminal Justice Authorities, and the CNIL provide safeguards in this area.

However, they provide a legal response to an essentially political problem. We must prevent the accumulation of several types of intrusive technologies that are introduced without prior reflection on the overall result, without taking a step back to consider the consequences. What kind of society do we want to build together, especially in the context of a health and economic crisis? The debate on our society remains open, as do the means of implementation.

Interview by Antonin Counillon

Étienne Perret, IMT-Académie des sciences Young Scientist Prize

What if barcodes disappeared from our supermarket items? Étienne Perret, a researcher in radio-frequency electronics at Grenoble INP, works on identification technologies. His work over recent years has focused on the development of RFID without electronic components, commonly known as chipless RFID. The technology aims to offer some of the advantages of classical RFID but at a similar cost to barcodes, which are more commonly used in the identification of objects. This research is very promising for use in product traceability and has earned Étienne Perret the 2020 IMT-Académie des sciences Young Scientist Prize.

Your work focuses on identification technologies: what is it exactly?

Étienne Perret: The identification technology most commonly known to the general public is the barcode. It is on every item we buy. When we go to the checkout, we know that the barcode is used to identify objects. Studies estimate that 70% of products manufactured across the world have a barcode, making it the most widely used identification technique. However, it is not the only one, there are other technologies such as RFID (radio frequency identification). It is what is used on contactless bus tickets, ski passes, entry badges for certain buildings, etc. It is a little more mysterious, it’s harder to see what’s behind it all. That said, the idea behind it is the same, regardless of the technology. The aim is to identify an item at short or medium range.

What are the current challenges surrounding these identification technologies?

EP: In lots of big companies, especially Amazon, object traceability is essential. They often need to be able to track a product from the different stages of manufacturing right through to its recycling. Each product therefore has to be able to be identified quickly. However, both of the current technologies I mentioned have limitations as well as advantages. Barcodes are inexpensive, can be printed easily but store very little information and often require human input between the scanner and the code to make sure it is read correctly. What is more, barcodes have to be visible in order to be read, which has an effect on the integrity of the product to be traced.

RFID, on the other hand, uses radio waves that pass through the material, allowing us to identify an object already packaged in a box from several meters away. However, this technology is costly. Although an RFID label only costs a few cents, it is much more expensive than a barcode. For a company that has to label millions of products a year, the difference is huge, in particular when it comes to labeling products that are worth no more than a few cents themselves.

What is the goal of your research in this context?

EP: My aim is to propose a solution in between these two technologies. At the heart of an RFID tag there is a chip that stores information, like a microprocessor. The idea I’m pursuing with my colleagues at Grenoble INP is to get rid of this chip, for economic and environmental reasons. The other advantage that we want to keep is the barcode’s ease of printing. To do so, we base our work on an unusual approach combining conductive ink and geometric labels.

How does this approach work?  

EP: The idea is that each label has a unique geometric form printed in conductive ink. Its shape means that the label reflects radio frequency waves in a unique way. After that, it is a bit like a radar approach: a transmitter emits a wave, which is reflected by its environment, and the label returns the signal with a unique signature indicating its presence. Thanks to a post-processing stage, we can then recover this signature containing the information on the object.
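
As a rough illustration of what the post-processing stage might look like, the sketch below matches a measured backscatter spectrum against a small codebook of reference signatures. The frequency band, the peak shapes and the label IDs are hypothetical assumptions; the actual reading chain developed at Grenoble INP is necessarily more sophisticated.

```python
import numpy as np

# Toy chipless-RFID decoder: each label is assumed to reflect energy around a
# distinct set of resonance frequencies, and the reader keeps the codebook
# entry whose reference signature best correlates with the measurement.
frequencies_ghz = np.linspace(3.0, 8.0, 501)   # assumed reading band

def signature(resonances_ghz, width=0.15):
    """Toy spectral signature: one Lorentzian-like peak per resonance."""
    f = frequencies_ghz[:, None]
    return np.sum(1.0 / (1.0 + ((f - np.array(resonances_ghz)) / width) ** 2), axis=1)

codebook = {                                    # hypothetical label IDs
    "ID-001": signature([3.4, 5.1, 7.2]),
    "ID-002": signature([3.9, 5.8, 6.6]),
    "ID-003": signature([4.5, 5.5, 7.7]),
}

def identify(measured_spectrum):
    """Return the label whose reference signature correlates best with the measurement."""
    scores = {label: np.corrcoef(measured_spectrum, ref)[0, 1]
              for label, ref in codebook.items()}
    return max(scores, key=scores.get)

noisy = codebook["ID-002"] + 0.05 * np.random.randn(frequencies_ghz.size)
print(identify(noisy))   # expected: ID-002
```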

Why is this chipless RFID technology so promising?

EP: Economically speaking, the solution would be much more advantageous than an RFID chip and could even rival the cost of a barcode. Compared to the latter, however, there are two major advantages. First of all, this technology can read through materials, like RFID. Secondly, it requires a simpler process to read the label. When you go through the supermarket checkout, the product has to be at a certain angle so that the code is facing the laser scanner. That is another problem with barcodes: a human operator is often required to carry out the identification and while it is possible to do without, it requires very expensive automated systems. Chipless RFID technology is not perfect, however, and certain limitations must be accepted, such as the reading distance, which is not the same as for conventional RFID which can reach several meters using ultra high frequency waves.

One of the other advantages of RFID is the ability to reprogram it: the information contained in an RFID tag can be changed. Is this possible with the chipless RFID technology you are developing?

EP: That is indeed one of the current research projects. In the framework of the ERC ScattererID project, we are seeking to develop the concept of rewritable chipless labels. The difficulty is obviously that we can’t use electronic components in the label. Instead, we’re basing our approach on CBRAM (conductive-bridging RAM) which is used for specific types of memories. It works by stacking three layers: metal-dielectric material-metal. Imagine a label printed locally with this type of stack. By applying a voltage to the printed pattern we can modify its properties and thus change the information contained in the label.

Does this research on chipless RFID technology have other applications than product traceability and identification?

EP: Another line of research we are looking into is using these chipless labels as sensors. We have shown that we can collect and report information on physical quantities such as temperature and humidity. For temperature, the principle is based on the ability to measure the thermal expansion of the materials that make up the label. The material “expands” by a few tens of microns. The label’s radiofrequency signature changes, and we are able to detect these very subtle variations. In another field, this level of precision, obtained using radio waves, which are wireless, allows the label to be located and its movements detected. Based on this principle, we are currently also studying gestural recognition to allow us to communicate with the reader through the label’s movements.

The transfer of this technology to industry seems inevitable: where do you stand on this point?

EP: A recent project with an industrial actor led to the creation of the start-up Idyllic Technology, which aims to market chipless RFID technology to industrial firms. We expect to start presenting our innovations to companies during the course of next year. At present, it is still difficult for us to say where this technology will be used. There’s a whole economic dimension which comes into play, which will be decisive in its adoption. What I can say, though, is that I could easily see this solution being used in places where the barcode isn’t used due to its limitations, but where RFID is too expensive. There’s a place between the two, but it’s still too early to say exactly where.

Gaël Richard, IMT-Académie des sciences Grand Prix

Speech synthesis, sound separation, automatic recognition of instruments or voices… Gaël Richard’s research at Télécom Paris has always focused on audio signal processing. The researcher has created numerous acoustic signal analysis methods, thanks to which he has made important contributions to his discipline. These methods are currently used in various applications for the automotive and music industries. His contributions to the academic community and technology transfer have earned him the 2020 IMT-Académie des sciences Grand Prix.

Your early research work in the 1990s focused on speech synthesis: why did you choose this discipline?

Gaël Richard: I didn’t initially intend to become a researcher; I wanted to be a professional musician. After my baccalaureate I focused on classical music before finally returning to scientific study. I then oriented my studies toward applied mathematics, particularly audio signal processing. During my Master’s internship and then my PhD, I began to work on speech and singing voice synthesis. In the early 1990s, the first perfectly intelligible text-to-speech systems had just been developed. The aim at the time was to achieve a better sound quality and naturalness and to produce synthetic voices with more character and greater variability.

What research have you done on speech synthesis?

GR: To start with, I worked on synthesis based on signal processing approaches. The voice is considered as being produced by a source – the vocal cords – which passes through a filter – the throat and the nose. The aim is to represent the vocal signal using the parameters of this model to either modify a recorded signal or generate a new one by synthesis. I also explored physical modeling synthesis for a short while. This approach consists in representing voice production through a physical model: vocal cords are springs that the air pressure acts on. We then use fluid mechanics principles to model the air flow through the vocal tract to the lips.
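
In signal-processing terms, this source-filter description is classically written as a convolution; the formula below is standard textbook material rather than anything specific to Gaël Richard's work:

$$ s(t) = (e * h)(t) \quad\Longleftrightarrow\quad S(f) = E(f)\,H(f), $$

where e(t) is the excitation produced by the vocal cords (the source), h(t) the impulse response of the vocal tract (the filter) and s(t) the resulting speech signal. Synthesis then amounts to choosing the parameters of E and H, and voice modification to altering them.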

What challenges are you working on in speech synthesis research today?

GR: I have gradually extended the scope of my research to include subjects other than speech synthesis, although I continue to do some work on it. For example, I am currently supervising a PhD student who is trying to understand how to adapt a voice to make it more intelligible in a noisy environment. We are naturally able to adjust our voice in order to be better understood when surrounded by noise. The aim of his thesis, which he is carrying out with the PSA Group, is to modify the voice of a radio, navigation assistant (GPS) or telephone, initially recorded in a quiet environment, so that it is more intelligible in a moving car, but without amplifying it.

As part of your work on audio signal analysis, you developed different approaches to signal decomposition, in particular those based on “non-negative matrix factorization”. It was one of the greatest achievements of your research career. Could you tell us what’s behind this complex term?

GR: The additive approach, which consists in gradually adding the elementary components of the audio signal, is a time-honored method. In the case of speech synthesis, it means adding simple waveforms – sinusoids – to create complex or rich signals. To decompose a signal that we want to study, such as a natural singing voice, we can logically proceed the opposite way, by taking the starting signal and describing it as a sum of elementary components. We then have to say which component is activated and at what moment to recreate the signal in time.

The method of non-negative matrix factorization allows us to obtain such a decomposition in the form of the multiplication of two matrices: one matrix represents a dictionary of the elementary components of the signal, and the other matrix represents the activation of the dictionary elements over time. When combined, these two matrices make it possible to describe the audio signal in mathematical form. “Non-negative” simply means that each element in these matrices is positive, or that each source or component contributes positively to the signal.
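
Written out with standard notation (the symbols here are generic, not taken from a specific paper), the factorization reads:

$$ V \approx W H, \qquad V \in \mathbb{R}_{+}^{F \times T},\quad W \in \mathbb{R}_{+}^{F \times K},\quad H \in \mathbb{R}_{+}^{K \times T}, $$

where V is typically a magnitude spectrogram with F frequency bins and T time frames, the K columns of W are the elementary spectral components (the dictionary) and the rows of H indicate when and how strongly each component is activated. W and H are estimated by minimizing a divergence between V and the product WH while keeping all their entries non-negative.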

Why is this signal decomposition approach so interesting?

GR: This decomposition is very efficient for introducing initial knowledge into the decomposition. For example, if we know that there is a violin, we can introduce this knowledge into the dictionary by specifying that some of the elementary atoms of the signal will be characteristic of the violin. This makes it possible to refine the description of the rest of the signal. It is a clever description because it is simple in its approach and handling as well as being useful for working efficiently on the decomposed signal.

This non-negative matrix factorization method has led you to subjects other than speech synthesis. What are its applications?

GR: One of the major applications of this technique is source separation. One of our first approaches was to extract the singing voice from polyphonic music recordings. The principle consists in saying that, for a given source, all the elementary components are activated at the same time, such as all the harmonics of a note played by an instrument, for example. To simplify, we can say that non-negative matrix factorization allows us to isolate each note played by a given instrument by representing them as a sum of elementary components (certain columns of the “dictionary” matrix) which are activated over time (certain lines of the “activation” matrix). At the end of the process, we obtain a mathematical description in which each source has its own dictionary of elementary sound atoms. We can then replay only the sequence of notes played by a specific instrument by reconstructing the signal by multiplying the non-negative matrices and setting to zero all note activations that do not correspond to the instrument we want to isolate.
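
The toy Python sketch below illustrates that recipe on a synthetic mixture. It relies on off-the-shelf tools (scipy and scikit-learn) and a crude component-selection heuristic, so it should be read as a cartoon of the principle rather than as the separation systems actually developed at Télécom Paris.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

# NMF-based separation sketch: factor a magnitude spectrogram as V ~ W @ H,
# pick the component belonging to the target source, rebuild it with a soft mask.
# The two synthetic "sources" are a pulsed 440 Hz tone and a steady 1320 Hz tone.
sr = 8000
t = np.arange(0, 2.0, 1 / sr)
pulsed = np.sin(2 * np.pi * 440 * t) * (np.sin(2 * np.pi * 2 * t) > 0)
steady = 0.5 * np.sin(2 * np.pi * 1320 * t)
freqs, times, X = stft(pulsed + steady, fs=sr, nperseg=512)

V = np.abs(X)                                   # non-negative magnitude spectrogram
model = NMF(n_components=2, init="nndsvd", max_iter=500)
W = model.fit_transform(V)                      # dictionary: spectral atoms, shape (F, K)
H = model.components_                           # activations over time,   shape (K, T)

# Heuristic: the pulsed tone is the component whose activation fluctuates the most.
k = int(np.argmax(H.std(axis=1) / (H.mean(axis=1) + 1e-9)))
mask = np.outer(W[:, k], H[k]) / (W @ H + 1e-9)  # soft (Wiener-like) mask for that source
_, separated = istft(mask * X, fs=sr, nperseg=512)
print("estimated pulsed-tone samples:", np.round(separated[:5], 3))
```

On real recordings, components are attributed to sources using learned or user-supplied dictionaries rather than such a heuristic, which is exactly where the prior knowledge mentioned above comes into play.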

What new prospects can be considered thanks to the precision of this description?

GR: Today, we are working on “informed” source separation, which incorporates additional prior knowledge about the sources in the separation process. I currently co-supervise a PhD student who is using knowledge of the lyrics to help separate the singing voice. There are multiple applications: from automatic karaoke generation by removing the detected voice, to remastering or transforming music and movie soundtracks. I have another PhD student whose thesis is on isolating a singing voice using a simultaneously recorded electroencephalogram (EEG) signal. The idea is to ask a person to wear an EEG cap and focus their attention on one of the sound sources. We can then obtain information via the recorded brain activity and use it to improve the source separation.

Your work allows you to identify specific sound sources through audio signal processing… to the point of automatic recognition?

GR: We have indeed worked on automatic sound classification, first of all through tests on recognizing emotion, particularly fear or panic. The project was carried out with Thales to anticipate crowd movements. Besides detecting emotion, we wanted to measure the rise or fall in panic. However, there are very few sound datasets on this subject, which turned out to be a real challenge for this work. On another subject, we are currently working with Deezer on the automatic detection of content that is offensive or unsuitable for children, in order to propose a sort of parental filter service, for example. In another project on advertising videos with Creaminal, we are detecting key or culminating elements in terms of emotion in videos in order to automatically propose the most appropriate music at the right time.

On the subject of music, is your work used for automatic song detection, like the Shazam application?

GR: Shazam uses an algorithm based on a fingerprinting principle. When you activate it, the app records the audio fingerprint over a certain time. It then compares this fingerprint with the content of its database. Although very efficient, the system is limited to recognizing completely identical recordings. Our aim is to go further, by recognizing different versions of a song, such as live recordings or covers by other singers, when only the studio version is saved in the memory. We have filed a patent on a technology that allows us to go beyond the initial fingerprint algorithm, which is too limited for this kind of application. In particular, we are using a stage of automatic estimation of the harmonic content, or more precisely the sequences of musical chords. This patent is at the center of a start-up project.
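
As a toy illustration of the difference with exact fingerprinting, the sketch below compares songs through their chord sequences using a generic string-similarity measure. The chord labels are invented, and the patented method is of course far more elaborate than this.

```python
from difflib import SequenceMatcher

# Cover-version matching sketch: compare estimated chord sequences rather than
# raw audio fingerprints, so a live or re-recorded version can still match.
studio     = ["Am", "F", "C", "G", "Am", "F", "C", "G"]
live_cover = ["Am", "F", "C", "C", "G", "Am", "F", "G"]      # slightly different performance
other_song = ["E", "B", "C#m", "A", "E", "B", "C#m", "A"]

def chord_similarity(seq_a, seq_b):
    """Order-aware similarity in [0, 1] between two chord sequences."""
    return SequenceMatcher(None, seq_a, seq_b).ratio()

print(chord_similarity(studio, live_cover))   # high score: likely the same song
print(chord_similarity(studio, other_song))   # low score: a different song
```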

Your research is closely linked to the industrial sector and has led to multiple technology transfers. But you have also made several open-source contributions for the wider community.

GR: One of the team’s biggest contributions in this field is the audio feature extraction software YAAFE. It’s one of my most cited articles and a tool that is regularly downloaded, despite the fact that it dates from 2010. In general, I am in favor of the reproducibility of research and I publish the algorithms of our work as often as possible. Reproducibility is in any case a major topic in AI and data science, fields that are clearly accompanying the rise of this discipline. We also make a point of publishing the databases created by our work. That is essential too, and it’s always satisfying to see that our databases have an important impact on the community.

Managing electronic waste: a global problem

Responsibilities surrounding digital waste are multi-faceted. On one side, it is governments’ responsibility to establish tighter border controls to better manage the flow of waste and make sure that it is not transferred to developing countries. On the other side, electronic device manufacturers must take accountability for their position by facilitating end-of-life management of their products. And consumers must be aware of the “invisible” consequences of their usage, which are outsourced to other countries.

To understand how waste electrical and electronic equipment (WEEE) is managed, we must look to the Basel Convention of 1989. This multilateral treaty was initially intended to manage the cross-border movement of hazardous waste, to which WEEE was later added. “The Basel Convention resulted in regional agreements and national legislation in a great number of countries, some of which prohibit the export or import of WEEE,” says Stéphanie Reiche de Vigan, a research professor in sustainable development law and new technologies at Mines ParisTech. “This is the case for the EU regulation on transfer of waste, which prohibits the export of WEEE to third countries.” Nevertheless, in 2015 the EFFACE European research project, devoted to combating environmental crime, estimated that approximately 2 million items of WEEE leave Europe illegally every year. How can so much electronic waste cross borders clandestinely? “A lack of international cooperation hinders efforts to detect, investigate and prosecute environmental crimes related to electronic waste trafficking,” says the researcher. And even if an international agreement on WEEE were to be introduced, it would have little impact without real determination on the part of the waste-producing countries to limit the transfer of this waste.

This is compounded by the fact that electronic waste trafficking is caught between two government objectives: punishing environmental crimes and promoting international trade in order to recover market share in international shipping. To increase competitiveness, the London Convention of 1965, aimed at facilitating international shipping, allowed for the smoother movement of vessels, merchandise and passengers through ports. “The results were a simplification of customs procedures to encourage more competitive transit through ports, and distortions of competition between the ports of developed countries through minimal enforcement of the regulations on cross-border transfers of electronic waste, in particular controls by customs and port authorities,” says Stéphanie Reiche-de Vigan. The European Union has observed that companies that export and import WEEE tend to use the ports where the law is least enforced, and therefore least effective.

So how can this chain of international trafficking be broken? “The International Maritime Organization must address this issue in order to encourage the sharing of best practices and harmonize control procedures,” responds the research professor. It is the responsibility of governments to tighten controls at their ports to limit these crimes. And technology could play a major role in helping them do so. “Making it compulsory to install X-ray scanners in ports and use them to visualize the contents of containers could help reduce the problem,” says Stéphanie Reiche-de Vigan. At present, only 2% of all ocean containers worldwide are physically inspected by customs authorities.

What are the responsibilities of technology companies?

The digital technology chain is divided into separate links: mining, manufacturing, marketing and recycling. The various stages in the lifetime of an electronic device are therefore isolated and disconnected from one another. As such, producers are merely encouraged to collaborate with the recycling industry. “As long as the producers of electric and electronic equipment have no obligation to limit their production, cover recycling costs or improve the recyclability of their products, electronic waste flows cannot be managed,” she says. Solving this problem would involve reconnecting the various parts of the chain through a life cycle analysis of electric and electronic equipment and redefining corporate responsibilities.

Rethinking corporate responsibility would mean putting pressure on tech giants, but developed countries seem to be incapable of doing so. Yet, it is the governments that bear the cost of sorting and recycling. So far, awareness of this issue has not been enough to implement concrete measures that are anything more than guidelines. National Digital Councils in Germany and France have established roadmaps for designing responsible digital technology. They propose areas for future regulation such as extending the lifetime of devices. But there is no easy solution since a device that lasts twice as long means half as much production for manufacturers. “Investing in a few more companies that are responsible for reconditioning devices and extending their lifetime is not enough. We’re still a long way from viable proposals for the environment and the economy,” says Fabrice Flipo, a philosopher of science at Institut Mines-Télécom Business School.

Moreover, countries are not the only ones to come up against the power of big tech companies. “At Orange, starting in 2017, we tried to put a system in place to display environmental information in order to encourage customers to buy the phones with the least impact,” says Samuli Vaija, an expert responsible for issues related to product life cycle analysis at Orange. Further upstream in the chain, this measure encouraged manufacturers to incorporate environmental sustainability into their product ranges. When it was presented to the International Telecommunication Union, Orange’s plan was quickly shut down by American opponents (Apple, Intel), who did not wish to display carbon footprint information on their devices.

Still, civil society, and NGOs in particular, could build political will. The main obstacle: people living in developed countries have little or no awareness of the environmental impacts of their excessive consumption of digital tools, since they are not directly affected by them. “Too often, we forget that there are also violations of human rights behind the digital tools our Western societies rely on, from the extraction of the resources required to manufacture equipment, to the transfer of the waste they produce after just a few years. From the first link to the last, it is primarily people living in developing countries who suffer the impacts of the consumption of those in developed countries. The health impacts are not visible in Europe, since they are outsourced,” says Stéphanie Reiche-de Vigan. In rich countries, is digital technology effectively enclosed in an information bubble containing only its beneficial aspects? The importance attributed to digital technology must be weighed against its negative impacts.

As such, “it is also the responsibility of universities, engineering schools and business schools to teach students about environmental issues starting at the undergraduate level, while incorporating life cycle analysis and concern for environmental and human impacts into their programs,” says Stéphanie Reiche-de Vigan. Educating students about these issues means bringing these profiles into the companies that will develop the tools of tomorrow and into the agencies meant to oversee them.

Guillaume Balarac, turbulence simulator

Turbulence is a mysterious phenomenon in fluid mechanics. Although it has been observed and studied for centuries, it still holds secrets that physicists and mathematicians strive to unlock. Guillaume Balarac is part of this research community. A researcher at Grenoble INP (at the LEGI Geophysical and Industrial Flows Laboratory), he uses and improves simulations to understand turbulent flows better. His research has given rise to innovations in the energy sector. The researcher, who has recently received the 2019 IMT-Académie des Sciences Young Scientist Award, discusses the scientific and industrial challenges involved in his field of research.

 

How would you define turbulent flows, which are your research specialty?

Guillaume Balarac: They are flows with an unpredictable nature. The weather is a good example. We can’t predict the weather more than five days out, because the slightest disturbance at one moment can radically alter what occurs in the following hours or days. It’s the butterfly effect. Fluid flows in the atmosphere undergo significant fluctuations that limit our ability to predict them. This is typical of turbulent flows, unlike laminar flows, which are not subject to such fluctuations and whose state can be predicted more easily.
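
The sensitivity he describes can be illustrated numerically. The sketch below is an editorial illustration, not material from the interview: it integrates the classic Lorenz convection model twice, starting from initial conditions that differ by one part in a million, and measures how far apart the trajectories end up.

```python
import numpy as np
from scipy.integrate import solve_ivp


def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz's simplified atmospheric convection model."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]


t_eval = np.linspace(0.0, 30.0, 3000)
reference = solve_ivp(lorenz, (0.0, 30.0), [1.0, 1.0, 1.0], t_eval=t_eval)
perturbed = solve_ivp(lorenz, (0.0, 30.0), [1.0 + 1e-6, 1.0, 1.0], t_eval=t_eval)

# The gap starts at one millionth and grows to the size of the attractor itself.
gap = np.linalg.norm(reference.y - perturbed.y, axis=0)
print(f"initial gap: {gap[0]:.1e}, final gap: {gap[-1]:.1e}")
```

After a few tens of time units the two "forecasts" no longer have anything in common, which is precisely the loss of predictability described above.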

Apart from air mass movements in the atmosphere, where can turbulent flows be found?

GB: Most of the flows we encounter in nature are actually turbulent. The movement of the oceans is described by turbulent flows, as is that of rivers. The movement of molten masses in the Sun generates a turbulent flow. This is also the case for certain biological flows in our bodies, such as blood flow near the heart. Outside nature, these flows are found in rocket propulsion, in the motion of wind turbines, and in hydraulic or gas turbines, for example.

Why do you seek to better understand these flows?

GB: First of all, because we aren’t able to! It’s still a major scientific challenge. Turbulence is a rather striking case: it has been observed for centuries. We’ve all seen a river or felt the wind. But the mathematical description of these phenomena still eludes us. The equations that govern turbulent flows have been known for two centuries, and the underlying mechanics have been understood since ancient times. And yet, we aren’t able to solve these equations, and we remain ill-equipped to model and understand these phenomena.

You say that researchers can’t solve the equations that govern turbulent flows. Yet, some weather forecasts for several days out are accurate…

GB: The iconic equation governing turbulent flows is the Navier-Stokes equation. That’s the one that has been known since the 19th century. No one is able to solve it with pencil and paper. Proving that it admits well-behaved solutions is even one of the seven Millennium Prize Problems established by the Clay Mathematics Institute, and whoever solves it will be awarded $1 million. That gives you an idea of the magnitude of the challenge. To get around our inability to find this solution, we either try to approximate it using computers, as is the case for weather forecasts (with varying degrees of accuracy), or we try to observe it. And finding a link between observation and equation is no easy task either!
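
For reference, the equation he mentions, written here in its standard incompressible textbook form rather than in any formulation specific to his work, reads:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
= -\frac{1}{\rho}\nabla p + \nu \nabla^{2} \mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0,
\]

where \(\mathbf{u}\) is the velocity field, \(p\) the pressure, \(\rho\) the density and \(\nu\) the kinematic viscosity. The nonlinear term \((\mathbf{u} \cdot \nabla)\,\mathbf{u}\) is what couples eddies of all sizes and makes the equation so hard to solve.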

Beyond this challenge, what can a better understanding of turbulent flows help accomplish?

GB: There is a wide range of applications that require an understanding of these flows and the equations that govern them. Our ability to produce energy relies in part on fluid mechanics, for example. Nuclear power plants operate with water and steam systems. Hydroelectric turbines work with water flows, as do water current turbines. For wind turbines, it’s air flows. And those examples are only from the energy sector.

You use high-resolution simulation to understand what happens at the fundamental level in a turbulent flow. How does that work?

GB: One of the characteristics of turbulent flows is the presence of eddies. The more turbulent the flow, the more eddies of varying sizes it contains. The principle of high-resolution simulation is to define billions of points in the space in which the flow occurs and to calculate the fluid velocity at each of these points. This set of points is called a mesh, and it must be fine enough to describe the smallest eddy in the flow. These simulations use the most powerful supercomputers in France and Europe. And even with all that computing power, we can’t simulate realistic situations, only academic flows in idealized conditions. These high-resolution simulations allow us to observe and better understand the dynamics of turbulence in canonical configurations.
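
To give an order of magnitude, using a standard scaling argument rather than figures from the interview: the smallest (Kolmogorov) eddies shrink relative to the large scales as the Reynolds number \(\mathrm{Re}\) grows,

\[
\frac{\eta}{L} \sim \mathrm{Re}^{-3/4}
\quad\Longrightarrow\quad
N \sim \left(\frac{L}{\eta}\right)^{3} \sim \mathrm{Re}^{9/4},
\]

so a flow at \(\mathrm{Re} \approx 10^{6}\), typical of industrial devices, would already require on the order of \(10^{13}\) to \(10^{14}\) grid points, which is why only idealized configurations can be simulated at full resolution.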

Simulation of turbulent flows on a marine turbine.

Along with using these simulation tools, you work on improving them. Are the two related?

GB: They are two complementary approaches. The idea behind that part of my research is to accept that we don’t have the computing power to simulate the Navier-Stokes equation in realistic configurations. So the question I ask myself is: how can this equation be modified so that it becomes possible to solve with our current computers, while ensuring that the prediction remains reliable? The approach is to solve only the big eddies. Since we don’t have the power to build a mesh fine enough for the small eddies, we look for physical terms, mathematical expressions, that replace the influence of the small eddies on the big ones. The small eddies are therefore absent from this modeling, but their overall contribution to the flow dynamics is taken into account. This helps us improve simulation tools by making them capable of handling flows in realistic conditions.
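
Formally, this large-eddy-simulation approach filters the Navier-Stokes equations: only the large-scale field \(\bar{u}_i\) is computed, and the unresolved scales appear through a subgrid-scale stress tensor \(\tau_{ij}\) that must be modeled. A classic closure, given here purely as a textbook example since the interview does not say which models the researcher actually uses, is the Smagorinsky eddy-viscosity model:

\[
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \nabla^{2} \bar{u}_i
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j,
\]

\[
\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij}
\approx -2\,(C_s \Delta)^{2}\,|\bar{S}|\,\bar{S}_{ij},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right),
\quad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\]

where \(\Delta\) is the mesh size and \(C_s\) a model constant. The "physical term" replacing the small eddies is here an extra viscosity that drains energy from the resolved scales, just as the real small eddies would.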

Are these digital tools you’re developing used solely by researchers?

GB: I seek to carry out research that is both fundamental and application-oriented. For example, we worked with Hydroquest on the performance of water current turbines used to generate electricity. The simulations we carried out made it possible to assess the performance loss due to the support structures, which do not contribute to capturing energy from the flow. Our research led to patents for new designs, with a 50% increase in yield.

More generally, do energy industry players realize how important it is to understand turbulent flows in order to make their infrastructures more efficient?

GB: Of course, and we have a number of partners who illustrate industrial interest in our research. For example, we’ve adopted the same approach to improve the design of floating wind turbines. We’re also working with General Electric on hydroelectric dam turbines. These hydraulic turbines are increasingly required to operate far from their optimal operating point, in order to compensate for the intermittency of renewable solar or wind energy. In these conditions, hydrodynamic instabilities develop, which significantly affect the machines’ performance. So we’re trying to optimize the operation of these turbines to limit the loss of yield.

What scientific challenges do you currently face as you continue your efforts to improve simulations and our understanding of turbulent flows?

GB: At the technical level, we’re trying to improve our simulation codes to take full advantage of advances in supercomputers. We’re also trying to improve our numerical methods and models to increase our predictive capacity. For example, we’re now trying to integrate learning tools to avoid simulating the small eddies and save computing time. I’ve started working with Ronan Fablet, a researcher at IMT Atlantique, on precisely this topic. Then there’s the huge challenge of ensuring the reliability of the simulations carried out. As it stands now, if you give a simulation code to three engineers, you’ll end up with different models. This is because the tools aren’t objective; a lot depends on the individuals using them. So we’re working on objective mesh and simulation criteria. This should eventually make it possible for industry players and researchers to work from the same foundations and better understand one another when discussing turbulent flows.
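
Purely to illustrate what “integrating learning tools” can mean in this context, and not as a description of the researcher’s actual method, a subgrid-scale model can be cast as a regression problem: a small network learns to predict the unresolved stresses from quantities available on the coarse mesh. The sketch below uses PyTorch and random placeholder data where filtered simulation data would normally go.

```python
import torch
import torch.nn as nn

# Map the 9 components of the resolved velocity gradient at a grid point
# to the 6 independent components of the subgrid-scale stress tensor.
model = nn.Sequential(
    nn.Linear(9, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 6),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder data: in practice these pairs would be extracted by filtering
# high-resolution simulations of canonical flows.
resolved_gradients = torch.randn(4096, 9)
subgrid_stresses = torch.randn(4096, 6)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(resolved_gradients), subgrid_stresses)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```

Once trained on real filtered data, such a network would be evaluated at every point of the coarse mesh in place of an analytical subgrid model, trading some computing time during the simulation for an offline training cost.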