
Antoine Fécant, winner of the 2021 IMT-Académie des Sciences Young Scientist Prize

Antoine Fécant, a new energy materials researcher at IFP Energies Nouvelles, has worked on many projects relating to solar and biosourced fuel production and petroleum refining. His work has relevance for the energy transition and was recognized this year with the IMT-Académie des Sciences Young Scientist Prize.

“Energy is a central part of our lifestyles,” affirms Antoine Fécant, new energy materials researcher at IFP Energies Nouvelles. “When I was younger, I wanted to work in this area and my interest in chemistry convinced me to pursue this field. I have always been attracted by the beauty of science, and I find even greater satisfaction in directing my work so that it is concretely useful for our society.” His research since 2004 has mainly focused on materials that speed up chemical processes, known as catalysts.

Antoine Fécant’s initial research was based on a class of catalysts called zeolites. Zeolites are materials mainly made of silicon, aluminum and oxygen. They are found naturally, but it is also possible and often preferable to synthesize them. These minerals contain networks of porosity that can be used to limit the quantity of by-products generated. Zeolites are useful for optimizing the yield of chemical reactions and energy consumption, and thus limiting the CO2 and waste produced.

The main idea of Antoine Fécant’s thesis, undertaken between 2004 and 2007, was to develop a unique methodology to generate new zeolites. For this, he used a multidisciplinary approach and chose to pair combinatorial chemistry with molecular modeling to “identify ways to synthesize zeolites depending on the kind of porous structure desired,” he describes. “This methodology allowed us to define streamlining criteria and therefore very significantly speed up research and development work in this area,” Antoine Fécant continues.

Fifteen years ago, this approach was completely innovative and won him the “Yves Chauvin” thesis prize in 2008. Now, however, it is widespread in the fields of chemistry, biochemistry and genomics, showing the trailblazing nature of the researcher’s approach.

Improving solar energy production and recycling CO2

After completing his PhD, Antoine Fécant took a post as a research engineer at IFP Energies Nouvelles. Continuing to pursue his goal of offering technical solutions to contain greenhouse gas emissions, in 2011 the researcher began a project aiming to develop materials and processes to recycle CO2 using solar energy. This work won him the 2012 Young Researcher Award from the City of Lyon. The initiative stems from the intermittent nature of solar power: a step that directly converts and stores this energy flow as an easily usable energy source would allow it to be better exploited.

Further reading on I’MTech: What is renewable energy storage?

“To get around this disadvantage, we wanted to find a way to store solar energy as a fuel,” states Antoine Fécant. “This would make it possible to create energy reserves in a form that is already known and usable in various common applications, such as heating, vehicles or in the industrial and transport sectors,” he adds. To achieve this goal, the researcher based his research work on the principle of natural photosynthesis: capturing light energy to convert CO2 and water into more complex carbon molecules that can be used as energy.

In order to artificially transform solar energy into chemical energy, Antoine Fécant and his team, in collaboration with academic partners, developed several families of specific materials. Known as photocatalysts, these materials have been optimized by the researchers in terms of their characteristics and structures at the nanometric scale. One of the compounds developed is a family of monolithic materials made from silicon and titanium dioxide, allowing for better use of incident photons through a “nano-mirror” effect. Other families of materials with composite architecture are able to reproduce the energetic processes of the multiple complex phases of natural photosynthesis. Lastly, entirely new crystalline structures give greater mobility to the electrical charges needed to convert CO2.

According to Antoine Fécant, “these materials are interesting, but at present, they only allow us to overcome a single obstacle at a time, out of many. Now, we have to work on creating synergy between these new catalyst systems to efficiently perform CO2 photoconversion and reach an energy yield threshold of at least 10% for this means of energy production to be considered viable.” The researcher believes it will still be several decades before this process can be deployed on an industrial scale.

Catalyzing the production of biosourced and fossil fuels

Antoine Fécant has also undertaken research to reduce the environmental impact of the use of conventional fuels and their manufacturing processes. For this, he designed higher-performing catalysts that help to improve the energy efficiency of processes and thereby limit related CO2 emissions. The researcher has also participated in discovering catalysts that increase yields in the Fischer-Tropsch process, a key phase in transforming lignocellulosic biomass to produce advanced biofuels. Furthermore, these fuels could contribute to limiting the aviation sector’s carbon footprint.

By winning the IMT-Académie des Sciences Young Scientist Prize, Antoine Fécant hopes to shine a light on research into solar fuels and to see “this area will be more highly valued”. Such fuels could truly represent a promising avenue for making better use of solar energy by managing its intermittent nature. “Research into these topics needs to be supported in the long term in order to contribute to the paradigm shifts needed for our energy consumption,” concludes the prizewinner.

Rémy Fauvel


From energy to tires

During his career, Antoine Fécant has also participated in a collaborative project on the production of biosourced compounds. The aim of this project was to design a process to manufacture butadiene, a key molecule in the composition of tires, using non-food plant resources. It is commonly produced using fossil fuels, but researchers have found a way to generate it using lignocellulosic compounds. Project teams have managed to refine a process and associated catalysts, making it possible to transform ethanol into butadiene using condensation. This 10-year-old project is now in its final phases.


David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since completing his studies in 1997, Gesbert has been working on MIMO, a telecommunications system created in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a clear break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).
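
For readers who want to see the principle in action, here is a minimal numerical sketch of spatial multiplexing, not Gesbert’s actual system: two symbol streams are sent at once over a simulated 2×2 channel and untangled with a zero-forcing receiver. Every parameter is invented for illustration.

```python
# Hypothetical sketch, not Gesbert's system: two QPSK streams sent at once
# over a simulated 2x2 MIMO channel, recovered with a zero-forcing receiver.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_symbols = 2, 4

# One QPSK stream per transmit antenna.
bits = rng.integers(0, 4, size=(n_tx, n_symbols))
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Rayleigh-fading channel: entry (i, j) couples transmit antenna j to receive antenna i.
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

noise = 0.01 * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))
y = H @ x + noise                        # both streams overlap at the receiver

x_hat = np.linalg.pinv(H) @ y            # zero-forcing: invert the spatial mixing
print(np.allclose(x, x_hat, atol=0.1))   # True when the channel is well conditioned
```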

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.
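
The mechanism can be sketched in a few lines: a block of symbols is spread over many subcarriers with an inverse FFT, a cyclic prefix guards against echoes, and an FFT at the receiver recovers the symbols. This is a toy illustration of the general OFDM idea, with assumed parameters, not the MIMO-OFDM modem described above.

```python
# Toy OFDM illustration with assumed parameters: split one fast stream over
# 64 slow subcarriers using an inverse FFT, then demodulate with an FFT.
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers, cp_len = 64, 16

# One QPSK symbol per subcarrier: many low-rate streams instead of one fast one.
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_subcarriers)))

time_signal = np.fft.ifft(symbols)                         # modulate all subcarriers at once
tx = np.concatenate([time_signal[-cp_len:], time_signal])  # cyclic prefix absorbs echoes

rx = tx[cp_len:]                        # receiver drops the prefix...
recovered = np.fft.fft(rx)              # ...and separates the subcarriers again
print(np.allclose(symbols, recovered))  # True over this ideal, distortion-free channel
```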

In 2001, Gesbert moved to Norway, where he taught for two years as an adjunct professor in the IT department at the University of Oslo. One year later, he published an article showing that complex propagation environments favor MIMO performance. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment in which there are many buildings, cars, and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.
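
A toy computation illustrates this claim under standard assumptions: a pure line-of-sight channel matrix is close to rank one, while a rich-scattering channel is usually full rank and therefore supports several parallel streams. The numbers below are invented; only the qualitative comparison matters.

```python
# Invented numbers, qualitative point only: rich scattering yields a
# full-rank channel and higher capacity than pure line of sight.
import numpy as np

rng = np.random.default_rng(2)
snr = 100.0  # linear scale, i.e. 20 dB

def capacity(H, snr):
    """Equal-power MIMO capacity in bits/s/Hz for channel matrix H."""
    n_rx, n_tx = H.shape
    gram = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
    return float(np.log2(np.linalg.det(gram).real))

# Line of sight: all antennas see nearly the same path -> rank ~ 1.
a = np.ones((4, 1), dtype=complex)
H_los = a @ a.conj().T / 2

# Urban-style scattering: independent reflected paths -> full rank.
H_scatter = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)

print(capacity(H_los, snr), capacity(H_scatter, snr))  # scattering supports far more
```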

In 2003, he joined EURECOM, where he became a professor and, five years later, head of the Mobile Communications department. There, he has continued his work on improving MIMO. His research showed that base stations, also known as relay antennas, could be used to improve the performance of this mechanism. By using antennas from multiple relay stations located far apart from each other, it would be possible to make them work together as a single giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still under way to make this mechanism usable in practice.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, whose name comes from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to position themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has destroyed the network infrastructure.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes of the receivers or base stations that are meant to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct the flaws of receivers with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that, to cooperate, agents like radio receivers or relay stations make decisions based on their own data, which sometimes has to be set aside so they can be guided by the decisions of agents with access to better information. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient: reducing the number of transmissions, restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.

Rémy Fauvel

Étienne Perret, winner of the 2020 IMT-Académie des Sciences Young Scientist Prize

What if barcodes disappeared from our supermarket items? Étienne Perret, a researcher in radio-frequency electronics at Grenoble INP, works on identification technologies. His work over recent years has focused on the development of RFID without electronic components, commonly known as chipless RFID. The technology aims to offer some of the advantages of classical RFID but at a cost similar to that of barcodes, which are more commonly used in the identification of objects. This research is very promising for use in product traceability and has earned Étienne Perret the 2020 IMT-Académie des Sciences Young Scientist Prize.

Your work focuses on identification technologies: what is it exactly?

Étienne Perret: The identification technology most commonly known to the general public is the barcode. It is on every item we buy. When we go to the checkout, we know that the barcode is used to identify objects. Studies estimate that 70% of products manufactured across the world have a barcode, making it the most widely used identification technique. However, it is not the only one; there are other technologies, such as RFID (radio-frequency identification). It is what is used on contactless bus tickets, ski passes, entry badges for certain buildings, etc. It is a little more mysterious; it’s harder to see what’s behind it all. That said, the idea is the same regardless of the technology: the aim is to identify an item at short or medium range.

What are the current challenges surrounding these identification technologies?

EP: In lots of big companies, especially Amazon, object traceability is essential. They often need to be able to track a product from the different stages of manufacturing right through to its recycling. Each product therefore has to be identifiable quickly. However, both of the current technologies I mentioned have limitations as well as advantages. Barcodes are inexpensive and can be printed easily, but they store very little information and often require a human operator to align the scanner and the code to make sure it is read correctly. What is more, barcodes have to be visible in order to be read, which has an effect on the integrity of the product to be traced.

RFID, on the other hand, uses radio waves that pass through the material, allowing us to identify an object already packaged in a box from several meters away. However, this technology is costly. Although an RFID label only costs a few cents, it is much more expensive than a barcode. For a company that has to label millions of products a year, the difference is huge, in particular when it comes to labeling products that are worth no more than a few cents themselves.

What is the goal of your research in this context?

EP: My aim is to propose a solution in between these two technologies. At the heart of an RFID tag there is a chip that stores information, like a microprocessor. The idea I’m pursuing with my colleagues at Grenoble INP is to get rid of this chip, for economic and environmental reasons. The other advantage that we want to keep is the barcode’s ease of printing. To do so, we base our work on an unusual approach combining conductive ink and geometric labels.

How does this approach work?  

EP: The idea is that each label has a unique geometric form printed in conductive ink. Its shape means that the label reflects radio frequency waves in a unique way. After that, it is a bit like a radar approach: a transmitter emits a wave, which is reflected by its environment, and the label returns the signal with a unique signature indicating its presence. Thanks to a post-processing stage, we can then recover this signature containing the information on the object.
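
As a hypothetical illustration of this radar-like reading, the sketch below builds synthetic frequency signatures (one resonance dip per conductive pattern) and identifies a noisy measurement by nearest match against a small codebook. The frequencies, label IDs and matching rule are all invented; Perret’s actual post-processing is more sophisticated.

```python
# Invented signatures and labels: identify a chipless tag by comparing its
# backscattered spectrum against a codebook of known geometric signatures.
import numpy as np

freqs = np.linspace(3e9, 8e9, 500)  # interrogation band in Hz (hypothetical)

def signature(resonances, q=80.0):
    """Synthetic magnitude response: one Lorentzian dip per resonant scatterer."""
    s = np.ones_like(freqs)
    for f0 in resonances:
        s -= 0.5 / (1 + ((freqs - f0) / (f0 / (2 * q))) ** 2)
    return s

# Each label geometry encodes its identity in which resonances are present.
codebook = {
    "label_A": signature([3.5e9, 5.0e9, 7.2e9]),
    "label_B": signature([4.1e9, 5.8e9, 6.6e9]),
}

rng = np.random.default_rng(3)
measured = codebook["label_B"] + 0.05 * rng.standard_normal(freqs.size)

# Nearest-signature decision as a stand-in for the real post-processing stage.
best = min(codebook, key=lambda name: np.linalg.norm(measured - codebook[name]))
print(best)  # label_B
```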

Why is this chipless RFID technology so promising?

EP: Economically speaking, the solution would be much more advantageous than an RFID chip and could even rival the cost of a barcode. Compared to the latter, however, there are two major advantages. First of all, this technology can be read through materials, like RFID. Secondly, it requires a simpler process to read the label. When you go through the supermarket checkout, the product has to be at a certain angle so that the code is facing the laser scanner. That is another problem with barcodes: a human operator is often required to carry out the identification, and while it is possible to do without one, it requires very expensive automated systems. Chipless RFID technology is not perfect, however, and certain limitations must be accepted, such as the reading distance, which is shorter than that of conventional RFID, which can reach several meters using ultra-high-frequency waves.

One of the other advantages of RFID is the ability to reprogram it: the information contained in an RFID tag can be changed. Is this possible with the chipless RFID technology you are developing?

EP: That is indeed one of the current research projects. As part of the ERC ScattererID project, we are seeking to develop the concept of rewritable chipless labels. The difficulty is obviously that we can’t use electronic components in the label. Instead, we’re basing our approach on CBRAM (conductive-bridging RAM), a technology used in certain types of memory. It works by stacking three layers: metal, dielectric material, metal. Imagine a label printed locally with this type of stack. By applying a voltage to the printed pattern, we can modify its properties and thus change the information contained in the label.

Does this research on chipless RFID technology have other applications than product traceability and identification?

EP: Another line of research we are looking into is using these chipless labels as sensors. We have shown that we can collect and report information on physical quantities such as temperature and humidity. For temperature, the principle is based on the ability to measure the thermal expansion of the materials that make up the label. The material “expands” by a few tens of microns. The label’s radio-frequency signature changes, and we are able to detect these very subtle variations. In another field, this level of precision, obtained wirelessly using radio waves, allows the label to be located and its movements detected. Based on this principle, we are currently also studying gestural recognition to allow us to communicate with the reader through the label’s movements.
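
The temperature-sensing principle can be mimicked numerically: thermal expansion shifts a resonance slightly, and reading the dip position back out yields the temperature. The sensitivity below is an arbitrary assumption, not a measured value from this research.

```python
# Arbitrary sensitivity, toy geometry: thermal expansion shifts a resonance,
# and the dip position read back out gives the temperature.
import numpy as np

freqs = np.linspace(5.95e9, 6.05e9, 2001)   # narrow band around one resonance (Hz)

def dip(f0):
    """Synthetic reflection magnitude with a resonance dip at f0."""
    return 1 - 0.5 / (1 + ((freqs - f0) / 2e5) ** 2)

f_ref = 6.0e9              # resonance at the reference temperature (assumed)
k_hz_per_degc = -60e3      # hypothetical resonance shift per degree Celsius

measured = dip(f_ref + k_hz_per_degc * 25)  # label actually 25 degrees warmer
f_meas = freqs[np.argmin(measured)]
print((f_meas - f_ref) / k_hz_per_degc)     # ~25.0 degrees recovered
```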

The transfer of this technology to industry seems inevitable: where do you stand on this point?

EP: A recent project with an industrial partner led to the creation of the start-up Idyllic Technology, which aims to market chipless RFID technology to industrial firms. We expect to start presenting our innovations to companies during the course of next year. At present, it is still difficult for us to say where this technology will be used. There’s a whole economic dimension which comes into play, and it will be decisive in its adoption. What I can say, though, is that I could easily see this solution being used in places where the barcode isn’t used due to its limitations, but where RFID is too expensive. There’s a place between the two, but it’s still too early to say exactly where.


Gaël Richard, winner of the 2020 IMT-Académie des Sciences Grand Prix

Speech synthesis, sound separation, automatic recognition of instruments or voices… Gaël Richard’s research at Télécom Paris has always focused on audio signal processing. The researcher has created numerous acoustic signal analysis methods, thanks to which he has made important contributions to his discipline. These methods are currently used in various applications for the automotive and music industries. His contributions to the academic community and technology transfer have earned him the 2020 IMT-Académie des Sciences Grand Prix.

Your early research work in the 1990s focused on speech synthesis: why did you choose this discipline?

Gaël Richard: I didn’t initially intend to become a researcher; I wanted to be a professional musician. After my baccalaureate I focused on classical music before finally returning to scientific study. I then oriented my studies toward applied mathematics, particularly audio signal processing. During my Master’s internship and then my PhD, I began to work on speech and singing voice synthesis. In the early 1990s, the first perfectly intelligible text-to-speech systems had just been developed. The aim at the time was to achieve a better sound quality and naturalness and to produce synthetic voices with more character and greater variability.

What research have you done on speech synthesis?

GR: To start with, I worked on synthesis based on signal processing approaches. The voice is considered as being produced by a source – the vocal cords – which passes through a filter – the throat and the nose. The aim is to represent the vocal signal using the parameters of this model to either modify a recorded signal or generate a new one by synthesis. I also explored physical modeling synthesis for a short while. This approach consists in representing voice production through a physical model: vocal cords are springs that the air pressure acts on. We then use fluid mechanics principles to model the air flow through the vocal tract to the lips.
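
The source-filter model he describes lends itself to a compact sketch: an impulse train standing in for the vocal cords is passed through a cascade of resonators standing in for the vocal tract. The formant frequencies and bandwidths below are rough textbook values, not parameters from Richard’s work.

```python
# Textbook-style source-filter sketch, not Richard's synthesizer: a glottal
# impulse train shaped by resonators approximating vocal-tract formants.
import numpy as np
from scipy.signal import lfilter

fs, f0, duration = 16000, 120, 0.5     # sample rate (Hz), pitch (Hz), seconds

# Source: impulse train at the pitch period, mimicking vocal-cord pulses.
n = int(fs * duration)
source = np.zeros(n)
source[:: fs // f0] = 1.0

# Filter: cascade of second-order resonators at rough vowel formants.
signal = source
for formant, bandwidth in [(700, 100), (1200, 120), (2600, 160)]:
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * formant / fs
    signal = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r * r], signal)

voice = signal / np.abs(signal).max()  # normalized vowel-like waveform
```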

What challenges are you working on in speech synthesis research today?

GR: I have gradually extended the scope of my research to include subjects other than speech synthesis, although I continue to do some work on it. For example, I am currently supervising a PhD student who is trying to understand how to adapt a voice to make it more intelligible in a noisy environment. We are naturally able to adjust our voice in order to be better understood when surrounded by noise. The aim of his thesis, which he is carrying out with the PSA Group, is to modify the voice of a radio, navigation assistant (GPS) or telephone, originally recorded in a quiet environment, so that it is more intelligible in a moving car, but without amplifying it.

As part of your work on audio signal analysis, you developed different approaches to signal decomposition, in particular those based on “non-negative matrix factorization”. It was one of the greatest achievements of your research career. Could you tell us what’s behind this complex term?

GR: The additive approach, which consists in gradually adding the elementary components of the audio signal, is a time-honored method. In the case of speech synthesis, it means adding simple waveforms – sinusoids – to create complex or rich signals. To decompose a signal that we want to study, such as a natural singing voice, we can logically proceed the opposite way, by taking the starting signal and describing it as a sum of elementary components. We then have to say which component is activated and at what moment to recreate the signal in time.

The method of non-negative matrix factorization allows us to obtain such a decomposition in the form of the multiplication of two matrices: one matrix represents a dictionary of the elementary components of the signal, and the other matrix represents the activation of the dictionary elements over time. When combined, these two matrices make it possible to describe the audio signal in mathematical form. “Non-negative” simply means that each element in these matrices is positive, or that each source or component contributes positively to the signal.
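
To make this concrete, the toy example below factorizes a synthetic magnitude spectrogram V into a dictionary W and activations H with an off-the-shelf NMF implementation. The two “instruments” and their spectra are invented for illustration.

```python
# Invented toy spectrogram: factorize V (frequency x time) into a dictionary
# W and activations H, so that V is approximately W @ H.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(4)

# Two "instruments" with fixed spectra, activated randomly over 100 frames.
w_true = np.zeros((64, 2))
w_true[[5, 10], 0] = 1.0        # instrument 1: partials at frequency bins 5 and 10
w_true[[8, 16], 1] = 1.0        # instrument 2: partials at frequency bins 8 and 16
h_true = rng.random((2, 100))
V = w_true @ h_true             # the observed magnitude spectrogram

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)      # learned dictionary of elementary spectra
H = model.components_           # when each dictionary element is active

print(np.linalg.norm(V - W @ H))  # small residual: the product reconstructs V
```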

Why is this signal decomposition approach so interesting?

GR: This factorization is very efficient for introducing prior knowledge into the decomposition. For example, if we know that there is a violin, we can introduce this knowledge into the dictionary by specifying that some of the elementary atoms of the signal will be characteristic of the violin. This makes it possible to refine the description of the rest of the signal. It is a clever description because it is simple in its approach and handling, as well as being useful for working efficiently on the decomposed signal.

This non-negative matrix factorization method has led you to subjects other than speech synthesis. What are its applications?

GR: One of the major applications of this technique is source separation. One of our first approaches was to extract the singing voice from polyphonic music recordings. The principle consists in saying that, for a given source, all the elementary components are activated at the same time, such as all the harmonics of a note played by an instrument, for example. To simplify, we can say that non-negative matrix factorization allows us to isolate each note played by a given instrument by representing it as a sum of elementary components (certain columns of the “dictionary” matrix) which are activated over time (certain rows of the “activation” matrix). At the end of the process, we obtain a mathematical description in which each source has its own dictionary of elementary sound atoms. We can then replay only the sequence of notes played by a specific instrument: the signal is reconstructed by multiplying the non-negative matrices after setting to zero all note activations that do not correspond to the instrument we want to isolate.
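
At the matrix level, the muting step he describes can be sketched with invented values: zero the activation row of the unwanted source and re-multiply. A real system would apply the result as a mask on the complex spectrogram rather than using W @ H directly.

```python
# Matrix-level sketch with invented values: mute one source's activations,
# then re-multiply to keep only the other source.
import numpy as np

W = np.array([[1.0, 0.0],      # dictionary: column 0 = instrument A's spectrum,
              [0.0, 1.0],      #             column 1 = instrument B's spectrum
              [0.5, 0.5]])     # a frequency bin that both instruments share
H = np.array([[1.0, 0.0, 1.0, 0.0],    # frames where instrument A plays
              [0.0, 1.0, 1.0, 1.0]])   # frames where instrument B plays

H_only_A = H.copy()
H_only_A[1, :] = 0.0           # set instrument B's activations to zero

V_mix = W @ H                  # magnitude spectrogram of the full mixture
V_only_A = W @ H_only_A        # reconstruction with instrument A alone
print(V_only_A)
```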

What new prospects can be considered thanks to the precision of this description?

GR: Today, we are working on “informed” source separation, which incorporates additional prior knowledge about the sources into the separation process. I currently co-supervise a PhD student who is using knowledge of the lyrics to help isolate singing voices. There are multiple applications: from automatic karaoke generation by removing the detected voice, to the remastering or transformation of music and movie soundtracks. I have another PhD student whose thesis is on isolating a singing voice using a simultaneously recorded electroencephalogram (EEG) signal. The idea is to ask a person to wear an EEG cap and focus their attention on one of the sound sources. We can then obtain information via the recorded brain activity and use it to improve the source separation.

Your work allows you to identify specific sound sources through audio signal processing… to the point of automatic recognition?

GR: We have indeed worked on automatic sound classification, first of all through tests on recognizing emotion, particularly fear or panic. The project was carried out with Thales to anticipate crowd movements. Besides detecting emotion, we wanted to measure the rise or fall in panic. However, there are very few sound datasets on this subject, which turned out to be a real challenge for this work. On another subject, we are currently working with Deezer on the automatic detection of content that is offensive or unsuitable for children, in order to propose a sort of parental filter service, for example. In another project, on advertising videos with Creaminal, we detect the emotionally key or climactic moments in videos in order to automatically propose the most appropriate music at the right time.

On the subject of music, is your work used for automatic song detection, like the Shazam application?

GR: Shazam uses an algorithm based on a fingerprinting principle. When you activate it, the app records the audio fingerprint over a certain time. It then compares this fingerprint with the content of its database. Although very efficient, the system is limited to recognizing completely identical recordings. Our aim is to go further, by recognizing different versions of a song, such as live recordings or covers by other singers, when only the studio version is stored in the database. We have filed a patent on a technology that goes beyond the initial fingerprint algorithm, which is too limited for this kind of application. In particular, we use a stage of automatic estimation of the harmonic content, or more precisely the sequences of musical chords. This patent is at the center of a start-up project.
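
One simple way to picture matching on chord sequences, rather than raw fingerprints, is shown below: two versions of a song are compared under all twelve transpositions, so a cover played in a different key still matches. The chord data and scoring rule are invented and do not describe the patented method.

```python
# Invented chord data and scoring, not the patented method: compare two
# songs by chord roots under all twelve transpositions, so a cover sung in
# a different key still matches the studio version.
import numpy as np

def best_match_score(seq_a, seq_b):
    """Fraction of matching chords under the best of the 12 transpositions."""
    a, b = np.asarray(seq_a), np.asarray(seq_b)
    n = min(len(a), len(b))
    return max(np.mean((a[:n] + t) % 12 == b[:n]) for t in range(12))

studio = [0, 5, 7, 0, 5, 7, 9, 7]    # chord roots as pitch classes (C = 0, ...)
live   = [2, 7, 9, 2, 7, 9, 11, 9]   # the same song performed a tone higher
other  = [0, 3, 8, 10, 0, 3, 8, 10]  # an unrelated song

print(best_match_score(studio, live))   # 1.0 -> recognized as the same song
print(best_match_score(studio, other))  # low -> rejected
```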

Your research is closely linked to the industrial sector and has led to multiple technology transfers. But you have also made several free software contributions for the wider community.

GR: One of the team’s biggest contributions in this field is the audio feature extraction software YAAFE. It’s one of my most cited articles and a tool that is regularly downloaded, despite the fact that it dates from 2010. In general, I am in favor of the reproducibility of research, and I publish the algorithms from our work as often as possible. In any case, reproducibility is a major topic in AI and data science, fields whose rapid rise has made it all the more important. We also make a point of publishing the databases created by our work. That is essential too, and it’s always satisfying to see that our databases have an important impact on the community.