Facial recognition: what legal protection exists?

Over the past decade, the use of facial recognition has developed rapidly for both security and convenience purposes. This biometrics-based technology is used for everything from video surveillance to border controls and unlocking digital devices. Such data is highly sensitive and is subject to a specific legal framework. Claire Levallois-Barth, a legal researcher at Télécom Paris and coordinator of the Values and Policies of Personal Information Chair at IMT, provides the context for protecting this data.

What laws govern the use of biometric data?

Claire Levallois-Barth: Biometric data “for the purpose of uniquely identifying a natural person” is part of a specific category defined by two texts adopted by the Member States of the European Union in April 2016: the General Data Protection Regulation (GDPR) and the Directive for Police and Criminal Justice Authorities. This category of data is considered highly sensitive.

The GDPR applies to all processing of personal data in both private and public sectors.

The Directive for Police and Criminal Justice Authorities pertains to processing carried out for purposes of prevention, detection, investigation, and prosecution of criminal offences or the execution of criminal penalties by competent authorities (judicial authorities, police, other law enforcement authorities). It specifies that biometric data must only be used in cases of absolute necessity and must be subject to provision of appropriate guarantees for the rights and freedoms of the data subject. This type of processing may only be carried out in three cases: when authorized by Union law or Member State law, when related to data manifestly made public by the data subject, or to protect the vital interests of the data subject or another person.

What principles has the GDPR established?

CLB: The basic principle is that collecting and processing biometric data is prohibited due to significant risks of violating basic rights and freedoms, including the freedom to come and go anonymously. There are, however, a series of exceptions. The processing must fall under one of these exceptions (express consent from the data subject, protection of his or her vital interests, conducted for reasons of substantial public interest) and respect all of the obligations established by the GDPR. The key principle is that the use of biometric data must be strictly necessary and proportionate to the objective pursued. In certain cases, it is therefore necessary to obtain the individual’s consent, even when the facial recognition system is being used on an experimental basis. There is also the minimization principle, which systematically asks, “is there any less intrusive way of achieving the same goal?” In any case, organizations must carry out an impact assessment on people’s rights and freedoms.

What do the principles of proportionality and minimization look like in practice?

CLB: One example is when the Provence-Alpes-Côte d’Azur region wanted to experiment with facial recognition at two high schools in Nice and Marseille. The CNIL ruled that the system involving students, most of whom were minors, for the sole purpose of streamlining and securing access, was not proportionate to these purposes. Hiring more guards or implementing a badge system would offer a sufficient solution in this case.

Which uses of facial recognition have the greatest legal constraints?

CLB: Facial recognition can be used for various purposes. The purpose of authentication is to verify whether someone is who he or she claims to be. It is implemented in technology for airport security and used to unlock your smartphone. These types of applications do not pose many legal problems. The user is generally aware of the data processing that occurs, and the data is usually processed locally, on a card for example.

On the other hand, identification, which is used to pick out one person within a group, requires the creation of a database that catalogs individuals. The size of this database depends on the specific purposes, but there is a general tendency for such databases to grow. For example, identification can be used to find wanted or missing persons, or to recognize friends on a social network. It requires increased vigilance due to the danger of becoming extremely intrusive.

Finally, facial recognition provides a means of individualizing a person. There is no need to identify the individual; the goal is “simply” to follow people’s movements through a store to assess their customer journey, or to analyze their emotions in response to an advertisement or while waiting at the checkout. The main argument advertisers use to justify this practice is that the data is quickly anonymized, and no record is kept of the person’s face. Here, as in the case of identification, facial recognition usually occurs without the person’s knowledge.

How can we make sure that data is also protected internationally?

CLB: The GDPR applies in the 27 Member States of the European Union which have agreed on common rules. Data can, however, be collected by non-European companies. This is the case for photos of European citizens collected from social networks and news sites. This is one of the typical activities of the company Clearview AI, which has already established a private database of 3 billion photos.

The GDPR lays down a specific rule for personal data leaving European Union territory: it may only be exported to a country ensuring a level of protection deemed comparable to that of the European Union. Yet few countries meet this condition. A first option is therefore for the data importer and exporter to enter into a contract or adopt binding corporate rules. The other option, for data stored on servers on U.S. territory, was to rely on the Privacy Shield agreement concluded between the U.S. Department of Commerce and the European Commission. However, this agreement was invalidated by the Court of Justice of the European Union in the summer of 2020. We are currently in the midst of a legal and political battle. And the battle is complicated, since data becomes much more difficult to control once it is exported. This explains why certain stakeholders, such as Thierry Breton (the current European Commissioner for the Internal Market), have emphasized the importance of fighting to ensure European data is stored and processed in Europe, on Europe’s own terms.

Despite the risks and ethical issues involved, is facial recognition sometimes seen as a solution for security problems?

CLB: It can in fact be a great help when implemented in a way that respects our fundamental values. It depends on the specific terms. For example, if law enforcement officers know that a protest will be held, potentially involving armed individuals, at a specific time and place, facial recognition can prove very useful at that specific time and place. However, it is a completely different scenario if it is used constantly for an entire region and entire population in order to prevent shoplifting.

This summer, the London Court of Appeal ruled that an automatic facial recognition system used by Welsh police was unlawful. The ruling emphasized a lack of clear guidance on who could be monitored and accused law enforcement officers of failing to sufficiently verify whether the software used had any racist or sexist bias. Technological solutionism, a school of thought emphasizing new technology’s capacity to solve the world’s major problems, has its limitations.

Is there a real risk of this technology being misused in our society?

CLB: A key question we should ask is whether there is a gradual shift underway, caused by an accumulation of technology deployed at every turn. We know that video-surveillance cameras are installed on public roads, yet we do not know about additional features that are gradually added, such as facial recognition or behavioral recognition. The European Convention on Human Rights, the GDPR, the Directive for Police and Criminal Justice Authorities, and the CNIL provide safeguards in this area.

However, they provide a legal response to an essentially political problem. We must prevent the accumulation of several types of intrusive technologies deployed without prior reflection on the overall result, without taking a step back to consider the consequences. What kind of society do we want to build together, especially in the context of a health and economic crisis? The debate on our society remains open, as do the means of implementation.

Interview by Antonin Counillon

Decontaminating and treating waste from the steel industry

The manufacture of steel produces mineral residues called steel slags, which are stored in large quantities in slag dumps. These present a dual challenge. On the one hand, they are potentially harmful for the environment and health, and on the other hand they are a useful resource for certain industries. The HYPASS project at Mines Saint-Étienne aims to address both of these issues. Launched in 2018, it offers new solutions for extracting heavy metals and managing pollution from steel slag dumps.

During the steel manufacturing process, iron ore is heated to high temperatures. A lighter residue phase forms on the surface, like whey. When it has cooled, this artificial rock, the slag, is poured into slag dumps which can spread over several hectares.

In France, some 30 million tons of steel slag have accumulated in dumps. This residue contains heavy metals that pose a large-scale danger to health and the environment, because polluting particles can spread through erosion. It is therefore important to limit the diffusion of these particles.

However, this slag can be used! There is a wide range of fields of application for steel slag today, including the production of concrete and cement, the glass industry, ceramics and even agriculture. Unfortunately, the presence of heavy metals in the slag can be a stumbling block because they can have a negative impact on the spaces where they are used. These metals, such as chromium, molybdenum or tungsten, each have different industrial uses. By efficiently and optimally extracting the heavy metals from the steel slag, these slag dumps could be decontaminated and new ways of reusing the materials could be developed.

To address these challenges, the HYPASS (HYdrometallurgy and Phyto Management Approaches for Steel Slag management) project, financed by the French National Research Agency and certified by AXELERA, a competitiveness cluster for the chemical and environmental sectors, was launched in 2018. It includes Mines Saint-Étienne.1 The HYPASS methodology has been implemented at the slag dump in Châteauneuf, in the Loire, which is listed as a member of the SAFIR (French Innovation and Research Sites) network. The project aims to develop an innovative technological approach to allow recovery of strategic metals from slag and, at the same time, a more environmentally-friendly management of steel slag dumps.

Extracting heavy metals

The first part of the project consists in extracting the heavy metals from the slag using hydrometallurgy. This technique extracts minerals using a solution during a process called leaching. “Hydrometallurgy dates from the early 20th century and was originally used on ores with a high metal content, such as gold extraction by cyanidation,” explains Fernando Pereira, a researcher on the HYPASS project at Mines Saint-Étienne. “However, over the last 30 years or so, hydrometallurgy has also been increasingly used for the treatment of waste which could be considered as low metal content ores.”

Although the first stages of the technique can sometimes differ, it generally involves physical pretreatment of the mineral matrix, dissolution of the metals in acid or alkaline reagents, high-temperature roasting and then purification and refining.

As part of the HYPASS project, the researchers from Mines Saint-Étienne, in association with the French Geological Survey (BRGM), have made several important adjustments in the laboratory to the stages in the hydrometallurgical extraction process. These adjustments had to take account of the efficiency of the techniques as well as their financial aspects, because some processes can prove to be particularly expensive when used at an industrial scale.

“The most expensive aspect isn’t the fact of using the different reagents during the leaching stage, but the preliminary grinding and roasting stages [heating to make the metal oxides more soluble], because they require large amounts of energy,” says Fernando Pereira. An important modification to the temperature adjustment during the roasting stage was therefore necessary to optimize efficiency in relation to cost. Another original adjustment was made in the choice of reagents used to extract the metals. Usual methodologies are based on the use of acids to dissolve the ores, but this implies a non-selective extraction, meaning the different compounds are mixed in the extraction solution. The HYPASS methodology has developed the use of alkaline reagents that allow the specific extraction of strategic metals, thus facilitating their subsequent use as well as that of the mineral matrix.

Finalization of the phytostabilization method 

The second part of the project consists in stabilizing the slag dump pollution by covering it with plants. However, it is very difficult to grow plants on slag dumps for a number of reasons: the soil of these sites is extremely alkaline, it has no organic matter and few essential elements for growth such as nitrogen and phosphorus. In addition, this soil has poor rainwater retention. It is also highly toxic, due in particular to the presence of a specific form of chromium, known as chromium VI, which is a known carcinogen.

Read more on I’MTech: When plants help us fight pollution

It is therefore a real challenge to grow plants in an environment as hostile as a slag dump. Mathieu Scattolin, in charge of the phytostabilization part of the HYPASS project, has made important adjustments to the growing conditions of plants in these environments. “Experiments have shown that pH is a key factor for the success of implementing phytostabilization on slag heaps,” says Pereira.

When the soil is too alkaline, certain chemical elements that are important for plant growth (such as manganese, zinc and phosphorus) become less phytoavailable, meaning that less is transferred from the soil to the plant. On the other hand, toxic chemical elements, such as chromium VI, tend to be assimilated more.

To resolve these difficulties, the plants were inoculated with a species of fungus called Rhizophagus irregularis. “Wherever we were in the slag dump, the inoculation of Rhizophagus irregularis led to relatively fast colonization of the root systems,” says the researcher. This symbiosis notably makes the soil less alkaline, thus increasing the phytoavailability of important elements and reducing that of chromium VI. The presence of this fungus in the soil also supplies organic matter and increases the water retention of the soil.

The optimized growth conditions were tested in the laboratory and then on experimental sections of the Châteauneuf slag dump, leading to rapid colonization of the root systems. This success also led to the launch of a new component in the HYPASS project.

A decision support tool is being developed to compare different scenarios for managing steel slag dumps. Fernando Pereira explains that “this tool will allow us to compare and choose management scenarios for steel slag based on criteria concerning environmental impact, financial costs and support for the ecosystem.” The design work is based on the principles of life cycle analysis (LCA) to allow the tool to provide an estimation of the global environmental impacts for each possible scenario.

In terms of hydrometallurgy, there are still a few phases of development in the laboratory. Kinetic monitoring is envisaged in order to minimize the leaching time of the metals. Metal oxide capture tests and solutions using microwave technology – to see if it is possible to get rid of the roasting stage, which is particularly energy-intensive – are also being developed. The phytostabilization part, on the other hand, seems to be finalized.

A European project is envisaged as a follow-up, including a scale-up and the development of larger-scale laboratory trials. “This would be part of a Horizon Europe-type project,” says the researcher, to give the project a broader perspective.

By Antonin Counillon.

1 Mines Saint-Étienne is part of the HYPASS project through the Environment, City, Society mixed research unit.


Carbon-free hydrogen: how to go from gray to green?

The industrial roll-out of hydrogen production only makes sense if it emits little or no carbon dioxide. Researchers at IMT schools are working on various alternatives to the use of fossil fuels, such as electrolysis and photocatalysis of water, plasma pyrolysis of methane, and pyrolysis and gasification of biomass.

Currently, the production of one ton of hydrogen results in 12 tons of CO2 emissions, and 95% of the world’s hydrogen is produced from fossil resources. This is what we call gray hydrogen, a situation that is incompatible with the long-term roll-out of the hydrogen industry, especially since, even if the CO2 emitted by current processes can be captured in a controlled environment, fossil resources will not be able to meet the government’s ambitions for this energy. It is therefore essential to develop other modes of “carbon-free hydrogen” production. Within the Carnot H2Mines network, researchers from the different IMT schools are working on processes that could turn the color palette of today’s hydrogen to green.

From blue to green

One process in line with the French government’s plan published last September is water electrolysis. This consists in separating an H2O molecule into hydrogen and oxygen using an electricity supply. This is a carbon-free solution, provided the electricity comes from a renewable source. But why turn an already clean energy into gas? “Hydrogen enables the storage of large amounts of energy over the long term, which batteries cannot do on a large scale to power an entire network,” explains Christian Beauger, a researcher in materials science at Mines ParisTech. Gas therefore partly responds to the problem of intermittent renewable energies.
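As a back-of-envelope check, the standard stoichiometry of water electrolysis (2 H2O → 2 H2 + O2) fixes how much hydrogen a given mass of water can yield. The sketch below is an illustration using textbook molar masses, not a description of any particular electrolyzer:

```python
# Stoichiometry of water electrolysis: 2 H2O -> 2 H2 + O2.
# Molar masses in g/mol (standard values).
M_H = 1.008
M_O = 15.999
M_H2O = 2 * M_H + M_O   # ~18.02 g/mol
M_H2 = 2 * M_H          # ~2.02 g/mol

# Each mole of water yields one mole of H2, so the hydrogen mass
# fraction of the feed water is simply M_H2 / M_H2O.
h2_per_kg_water = M_H2 / M_H2O  # kg of H2 per kg of water

print(f"{h2_per_kg_water * 1000:.0f} g of hydrogen per kg of water")
# -> roughly 112 g of H2 per kg of water, the rest leaving as oxygen
```

In other words, about 9 liters of water are needed per kilogram of hydrogen, before any efficiency losses are taken into account.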

Researchers therefore want to improve the performance of electrolyzers in order to make them more competitive on the market. The goal is to find the best possible balance between yield, lifespan and reduced costs. Electrolyzers are made up of several electrochemical cells containing two electrodes and an electrolyte, as in the case of fuel cells. There are three main families: alkaline solutions with liquid electrolyte, polymer membrane technologies (PEM) and high-temperature systems based on ceramic solid oxide (SOC). Each presents its own problems.

At Mines ParisTech, Christian Beauger’s team is seeking to increase the lifespan of PEM electrolyzers by focusing on the materials used at the anode. “We are developing new catalyst supports in the form of metal oxide aerogels which must be electronically conductive and capable of resisting corrosion in a humid environment, at a temperature of 80°C and subjected to potentials often higher than 2 volts,” says the researcher. Another major problem also affects the materials: the cost of an electrolyzer. The catalyst present on PEM electrodes is iridium oxide, a compound that is too expensive to encourage widespread use of future high-power electrolyzers. For this reason, researchers are working on catalysts based on iridium oxide nanoparticles. This reduces the amount of material and thus the potential cost of the system.

Shedding light on photocatalysis

In the laboratory, an alternative using solar energy to break water molecules into hydrogen and oxygen is also being considered. This is photocatalysis. The semiconductors used can be immersed in water in powder form. Under the effect of the sun’s rays, the electron-hole pairs created provide the energy needed to dissociate the water molecules. However, the energy levels of these charge carriers must be controlled very precisely to be useful.

“We form defects in materials that introduce energy levels whose position must be compatible with the energy required for the process,” explains Christian Beauger. This ultra-precise work is delicate to carry out and determines the efficiency of photocatalysis. There is still a long way to go for photocatalysts, the most stable of which hardly exceed 1% in efficiency. But this method of hydrogen production should not be dismissed too quickly, as it is cheaper and easier to set up than a system combining a renewable energy source and an electrolyzer.

Turquoise hydrogen using methane pyrolysis

At Mines ParisTech, Laurent Fulcheri’s team, which specializes in plasma processes, is working on the production of hydrogen not from water, but from the pyrolysis of methane. This technique is still little known in France, but has been widely explored by our German and Russian neighbors. “This process requires electricity, as does the electrolysis of water, but its main advantage is that it requires about seven times less electricity than water electrolysis. It can therefore produce more hydrogen from the same amount of electricity,” he says.

In practice, researchers crack molecules of methane (formula CH4) at high temperature. “To do this, we use a gas in the plasma state to provide thermal energy to the system. It is the only alternative to provide energy at a temperature above 1,500°C without CO2 emissions and on an industrial scale,” says Laurent Fulcheri. The reaction thus generates two valuable products: hydrogen (25% by mass) and solid carbon black (75% by mass). The latter is not to be confused with CO2 and is notably used in tire rubber, batteries, cables and pigments. The carbon is thus stored in the materials and can theoretically be recycled ad infinitum. “The production of one ton of carbon black by this method avoids the emission of 3 tons of CO2 compared to current methods,” adds the researcher.
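The 25%/75% split quoted above follows directly from the stoichiometry of methane cracking (CH4 → C + 2 H2). A quick sketch, using standard molar masses, verifies the mass balance:

```python
# Methane pyrolysis: CH4 -> C(solid) + 2 H2.
# Standard molar masses in g/mol.
M_C = 12.011
M_H = 1.008
M_CH4 = M_C + 4 * M_H   # ~16.04 g/mol

frac_h2 = (4 * M_H) / M_CH4   # all four hydrogen atoms leave as H2
frac_carbon = M_C / M_CH4     # the carbon leaves as solid carbon black

print(f"hydrogen: {frac_h2:.0%}, carbon black: {frac_carbon:.0%}")
# -> hydrogen: 25%, carbon black: 75%
```

Every kilogram of methane cracked therefore yields about 250 g of hydrogen and 750 g of solid carbon, with no CO2 produced by the reaction itself.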

This process has already proven itself across the Atlantic. Since 2012, researchers at Mines ParisTech have been collaborating with the American start-up Monolith Materials, which has developed a technology directly inspired by their work. Its location in Nebraska is not insignificant, as it gives it direct access to wind energy in the heart of the corn belt, a major agricultural area in the United States. The hydrogen produced is then transformed into ammonia to fertilize the surrounding corn farms.

Although the machine is working, the research of Laurent Fulcheri’s team, a major player in the start-up’s R&D, is far from over. “Hydrogen production is the simplest task, because the gas purification processes are fairly mature. On the other hand, the carbon black produced can have drastically different market values depending on its nano-structure. The objective is now to optimize our process in order to be able to generate the different qualities of carbon black that meet the demands of consumer industries,” says the researcher. Indeed, the future of this technology lies in the short-term valorization capacities of the two co-products.

Biomass processing: a local alternative

At IMT Mines Albi, Javier Escudero‘s team is working on thermochemical processes for the transformation of biomass by pyrolysis and gasification. Organic waste is heated to high temperatures in a reactor and converted into small molecules of synthesis gas. The hydrogen, carbon monoxide, methane and CO2 thus produced are captured and then recombined or separated. For example, the CO2 and hydrogen can be used to form synthetic methane for use in natural gas networks.

However, a scientific issue has yet to be solved: “The synthesis gas produced is always accompanied by inorganic molecules and large organic molecules called tars. Although their concentration is low, we still require an additional gas purification stage,” explains Javier Escudero. The result is an increase in processing costs that makes it more difficult to implement this solution on a small scale. The researcher is therefore working on several solutions, such as exploring different catalyst materials that could accelerate certain reactions to separate molecules from the waste while eliminating tars.

This approach could be envisaged as a form of local energy recovery from waste. Indeed, these technologies would enable a small and medium-scale territorial network with reactor sizes adapted to those of the collection centers for green waste, non-recovered agricultural residues, etc. However, there is also a need to clarify the regulations governing this type of facility. “For the moment, the law is not clear on the environmental constraints imposed on such structures, which slows down their development and discourages some manufacturers from really investing in the method,” says the researcher.

There is no shortage of solutions for the production of carbon-free hydrogen. Nevertheless, the economic reality is that, in order to be truly competitive, these processes will have to produce hydrogen more cheaply than hydrogen from fossil fuels.

By Anaïs Culot

Power-to-gas, when hydrogen electrifies research

Hydrogen is presented as an energy vector of the future. In a power-to-gas system, it serves as an intermediary for the transformation of electricity into synthetic methane. The success of this energy solution depends heavily on its production cost, so IMT researchers are working on optimizing the different processes for a more competitive power-to-gas solution.

Increasing production of renewable energy and reducing greenhouse gas emissions. What if the solution to these two ambitions were to come from a single technology: power-to-gas? In other words, the conversion of electricity into gas. But why? This method allows the storage of surplus electricity produced by intermittent renewable sources that cannot be injected into the grid. The energy is then used to produce hydrogen by electrolysis of water. The gas can then be consumed on site, stored, or used to power hydrogen vehicles. But these applications are still limited. This is why researchers are looking at transforming it into other useful products such as methane (CH4) to supply natural gas networks. What is the potential of this technology?

Costly hydrogen?

“The main issue with the development of power-to-gas today is its cost,” says Rodrigo Rivera Tinoco, a researcher in energy systems modeling at Mines ParisTech. “If we take into account the cost of producing hydrogen using a low-temperature electrolyzer (PEM), the technology envisaged in power-to-gas installations, a 1 GW hydrogen reactor (almost the power equivalent of a nuclear reactor) would today cost €3 billion.” In September, the French government allocated a budget of €7 billion in aid for the development of the national hydrogen industry. A reduction in the production cost of this gas is therefore necessary, all the more so since power-to-gas technologies are destined to compete with other energy sources on the market.

France wants to reach a cost of €50 per megawatt-hour in 2030. However, a low-cost but short-lasting technology would not be suitable. “To be cost-effective, systems must have a minimum 60,000 to 90,000 hour operating guarantee,” adds Rodrigo Rivera Tinoco. Currently, low-temperature electrolyzers (PEMs) have an operating life of between 30,000 and 40,000 hours. This is where research comes in. The objective is to optimize the energy efficiency of low-cost technology.
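The figures quoted here allow a rough capital-cost check: amortizing a €3 billion, 1 GW electrolyzer over its operating life shows why the 60,000 to 90,000-hour guarantee matters against the €50/MWh target. The sketch below is a simplified illustration using only the article's numbers; it ignores electricity prices, conversion losses, maintenance and financing:

```python
# Illustrative amortization of electrolyzer capital cost, using the
# figures quoted in the article (a back-of-envelope check only).
capex_eur = 3e9    # €3 billion for a 1 GW PEM installation
power_mw = 1000    # 1 GW expressed in MW

for lifetime_h in (30_000, 60_000, 90_000):
    energy_mwh = power_mw * lifetime_h       # MWh processed over the lifetime
    capex_per_mwh = capex_eur / energy_mwh   # capital cost spread per MWh
    print(f"{lifetime_h:>6} h -> {capex_per_mwh:.0f} €/MWh of capital cost")
```

At today's 30,000 to 40,000-hour lifespans, capital alone approaches €100/MWh; only at 60,000 hours does it fall to €50/MWh, before a single euro of electricity is counted.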

Which technology for which use?

Digital modeling enables the identification of the strengths and weaknesses of technologies prior to their installation. “We carry out technical and economic studies on the various water electrolysis processes in order to increase their efficiency and reduce their cost,” says Chakib Bouallou, an expert in digital modeling and energy storage at Mines ParisTech. Several technologies exist, but which one is the most suitable for storing renewable energy? On an industrial scale, low-temperature electrolyzers are mature and seem to respond to the intermittent nature of these energy sources.

However, in the evaluation phase, no technology is being ruled out. As part of the ANR MCEC project and in collaboration with Chimie ParisTech, Chakib Bouallou’s team is currently working on a solution based on molten carbonates that relies on the co-electrolysis of water and CO2. “Using performance curves of materials depending on the current, we estimate the efficiency of the systems under different usage scenarios. The overall analysis of this technology will then be compared to other existing techniques”, says the researcher. Indeed, the adaptability of a system will depend above all on the intended use. To complete these studies, however, experiments are essential.

Minerve: a demonstrator for research purposes

In order to gain the knowledge needed to make the transition to power-to-gas, the Minerve demonstrator was installed in 2018 on the Chantrerie campus north of Nantes. “The platform is first and foremost a research tool that meets the needs of experimentation and data collection. The results are intended to help develop simulation models for power-to-gas technologies,” explains Pascaline Pré, a process engineering researcher at IMT Atlantique. Equipped with solar panels and a wind turbine, Minerve also has an electrolyzer dedicated to the production of hydrogen, which is converted into methane using CO2 from cylinders. This is then redistributed to a fuel distribution station for natural gas vehicles (CNG) and used for mobility. The next step is to integrate CO2 capture technology from the combustion fumes of the site’s heating network boilers to replace the cylinders.

Carbon dioxide is very stable in the air. Turning it into useful products is therefore difficult. Pascaline Pré’s team is developing a new process to capture this gas by absorption using a solvent. The gas collected is purified, dried, compressed and sent to the methane plant. However, some hurdles need to be overcome in order to optimize this approach: “Solvent regeneration consumes a lot of heat. It would be possible to improve the energy efficiency of the device by developing an electrified microwave heating system,” explains the researcher. This concept would also reduce the size of the facilities needed for this process for a future industrial installation.

In the long term, Minerve should also serve as a basis for the study of many issues in the Carnot HyTrend project, which brings together part of the French scientific community to look at hydrogen. Within three years, initial recommendations on the different technologies (electrolysis, methanation, CO2 capture, etc.) will be published to improve the existing situation, as well as studies on the risks and environmental impacts of power-to-gas.

What about power-to-gas-to-power?

It is possible to go beyond current power-to-gas techniques by adding an oxycombustion step. As part of the ANR project FluidStory, Chakib Bouallou’s team focused on modeling a device based on three advanced technologies: low-temperature PEM electrolysis, methanation (allowing the storage of electricity in the form of gas) and oxycombustion power plants for the destocking stages. The first two steps are therefore the same as in a classical power-to-gas infrastructure as mentioned above. The difference here is that oxygen and CH4, obtained respectively by electrolysis of water and methanation, are stored in underground caverns for an indefinite period of time. Thus, when the price of electricity rises, the oxy-fuel combustion process reuses these gases to produce electricity. The CO2 also emitted during this reaction will be reused by the methanation process in the next cycle.
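The closed-cycle claim can be sanity-checked with the textbook stoichiometry of the three steps: electrolysis (4 H2O → 4 H2 + 2 O2), methanation (CO2 + 4 H2 → CH4 + 2 H2O) and oxycombustion (CH4 + 2 O2 → CO2 + 2 H2O). The small sketch below tracks one cycle in moles; it is an illustrative balance check under these standard reactions, not a model of the FluidStory plant:

```python
# Track one power-to-gas-to-power cycle in moles, to verify that the
# loop closes on its own reagents.
#   Electrolysis:   4 H2O -> 4 H2 + 2 O2
#   Methanation:    CO2 + 4 H2 -> CH4 + 2 H2O
#   Oxycombustion:  CH4 + 2 O2 -> CO2 + 2 H2O
stock = {"H2O": 4, "CO2": 1, "H2": 0, "O2": 0, "CH4": 0}

def react(stock, consumed, produced):
    """Apply one reaction step, failing if a reagent runs short."""
    for species, n in consumed.items():
        assert stock[species] >= n, f"short of {species}"
        stock[species] -= n
    for species, n in produced.items():
        stock[species] += n

react(stock, {"H2O": 4}, {"H2": 4, "O2": 2})             # electrolysis
react(stock, {"CO2": 1, "H2": 4}, {"CH4": 1, "H2O": 2})  # methanation
react(stock, {"CH4": 1, "O2": 2}, {"CO2": 1, "H2O": 2})  # oxycombustion

print(stock)  # back to the starting inventory: the cycle is closed
```

The oxygen produced by electrolysis is exactly what oxycombustion consumes, and the CO2 and water return to their starting amounts, which is what allows the autonomous operation described above.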

This closed-cycle design therefore allows autonomous operation with regard to the required reagents, which is not possible in conventional power-to-gas setups. However, analyses aimed at better understanding its mechanics and the nature of the interactions between its components have yet to be conducted.
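The material balance of this closed cycle can be checked with a short sketch (standard textbook reaction equations; the code and quantities are illustrative, not taken from the FluidStory models):

```python
# Illustrative sketch: verifying that the oxycombustion power-to-gas cycle
# described above is materially closed. Species are tracked as mole counts.
from collections import Counter

def react(stock, reactants, products):
    """Consume `reactants` from `stock` and add `products` (moles)."""
    for species, n in reactants.items():
        assert stock[species] >= n, f"not enough {species}"
        stock[species] -= n
    for species, n in products.items():
        stock[species] += n

# Start one cycle with 4 mol of water and 1 mol of (recycled) CO2.
stock = Counter({"H2O": 4, "CO2": 1})

# Electrolysis: 2 H2O -> 2 H2 + O2 (applied to all 4 mol of water)
react(stock, {"H2O": 4}, {"H2": 4, "O2": 2})
# Methanation (Sabatier): CO2 + 4 H2 -> CH4 + 2 H2O
react(stock, {"CO2": 1, "H2": 4}, {"CH4": 1, "H2O": 2})
# Oxycombustion (destocking): CH4 + 2 O2 -> CO2 + 2 H2O
react(stock, {"CH4": 1, "O2": 2}, {"CO2": 1, "H2O": 2})

print(stock)  # back to 4 H2O and 1 CO2: the cycle is materially closed
```

Each pass consumes only electricity: the water, oxygen and CO2 are all regenerated, which is what makes the autonomous operation possible.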

Looking towards power-to-X

The methanation at the heart of the processes mentioned so far is only one example of the transformation of hydrogen in contact with CO2. Indeed, these reactions, called hydrogenation reactions, are used to synthesize many chemicals usually obtained from fossil resources. At IMT Mines Albi, Doan Pham Minh’s team is working on the optimization of these processes. As well as methane production, researchers are targeting the synthesis of liquid biofuels, methanol, ethanol and other carbon-based chemicals. All these “X” compounds are therefore obtained from hydrogen and CO2. Two factors determine the nature of the result: the operating conditions (temperature, pressure, residence time, etc.) and the catalyst used. “This is what drives the reaction to a target product. Thus, by developing active, selective and stable catalytic materials, we will improve yields in synthesizing the desired product,” the researcher explains.
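As a rough illustration of how each “X” product fixes its own hydrogen demand, here is an element-balance sketch for two common hydrogenation routes (textbook stoichiometries; the code is illustrative, not drawn from the team’s work):

```python
# Illustrative CO2 hydrogenation routes to "X" products, with an element
# balance check (textbook stoichiometries).
REACTIONS = {
    "methane":  ({"CO2": 1, "H2": 4}, {"CH4": 1, "H2O": 2}),
    "methanol": ({"CO2": 1, "H2": 3}, {"CH3OH": 1, "H2O": 1}),
}

ATOMS = {  # element counts per molecule
    "CO2": {"C": 1, "O": 2}, "H2": {"H": 2}, "H2O": {"H": 2, "O": 1},
    "CH4": {"C": 1, "H": 4}, "CH3OH": {"C": 1, "H": 4, "O": 1},
}

def element_count(side):
    """Total atoms of each element on one side of a reaction."""
    total = {}
    for mol, n in side.items():
        for el, k in ATOMS[mol].items():
            total[el] = total.get(el, 0) + n * k
    return total

for product, (lhs, rhs) in REACTIONS.items():
    assert element_count(lhs) == element_count(rhs), product
    print(product, "route is balanced")
```

The methanol route, for instance, consumes three moles of hydrogen per mole of CO2 instead of four for methane, which is one way the target product shapes the process.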

Methanol is of particular interest to industries. Indeed, this compound is everywhere around us and is used in particular for the surface materials of furniture, in paints, plastics for cars, etc. The same is true for ethanol, biofuels or chemical intermediates of renewable origin. Beyond the role of hydrogen in the national energy mix, the researcher therefore insists on its use by other high-consumption sectors: “It is widely used by the chemical industry and we must be ready to develop competitive and high-performance processes by anticipating future uses of hydrogen and power-to-X.”

By Anaïs Culot

Hydrogen: transport and storage difficulties

Does hydrogen hold the key to the great energy transition to come? France and other countries believe this to be the case, and have chosen to invest heavily in the sector. Such spending will be needed to solve the many issues raised by this energy carrier. One such issue is containers, since hydrogen tends to damage metallic materials. At Mines Saint-Étienne, Frédéric Christien and his teams are trying to answer these questions.

In early September, the French government announced a €7 billion plan to support the hydrogen sector through 2030. With this investment, France has joined a growing list of countries that are betting on this strategy: Japan, South Korea and the Netherlands, among others.

Nevertheless, harnessing this element poses major challenges across the supply chain. Researchers have long known that hydrogen can damage certain materials, starting with metals. “Over a century ago, scientists noticed that when metal is plunged into hydrochloric acid [formed from chlorine and hydrogen], not only is there a corrosive effect, but the material is embrittled,” explains Frédéric Christien, a researcher at Mines Saint-Étienne [1]. “This gave rise to numerous studies on the impact of hydrogen on materials. Today, there are standards for the use of metallic materials in the presence of hydrogen. However, new issues are constantly arising, since materials evolve on a regular basis.”

Recovering excess electricity produced but not consumed

For the last three years, the Mines Saint-Étienne researcher has been working on “power-to-gas” research. The goal of this new technology: recover excess electricity rather than losing it, by converting it to gaseous hydrogen through the process of water electrolysis.

Read more on I’MTech: What is hydrogen energy?

“Power-to-gas technology involves injecting the resulting hydrogen into the natural gas grid, in a small proportion, so that it can be used as fuel,” explains Frédéric Christien. For individuals, this does not change anything: they may continue to use their gas equipment as usual. But when it comes to transporting gas, such a change has significant repercussions. Hence the question posed to specialists about the durability of materials: what impact may hydrogen have on the steel that makes up the majority of the natural gas transmission network?
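To see why the injected proportion matters for grid operators, a back-of-the-envelope sketch (the heating values below are approximate reference figures, not from the article) compares the energy carried per cubic metre of blended gas:

```python
# Illustrative sketch: energy content of a hydrogen/natural-gas blend.
# Approximate lower heating values at normal conditions (reference figures).
LHV_H2  = 10.8   # MJ/Nm3, hydrogen (approximate)
LHV_CH4 = 35.8   # MJ/Nm3, methane, used here as a proxy for natural gas

def blend_energy(h2_vol_fraction):
    """Volumetric LHV of an H2/natural-gas blend, in MJ/Nm3."""
    return h2_vol_fraction * LHV_H2 + (1 - h2_vol_fraction) * LHV_CH4

for frac in (0.0, 0.06, 0.20):
    drop = 1 - blend_energy(frac) / LHV_CH4
    print(f"{frac:>4.0%} H2: {blend_energy(frac):5.1f} MJ/Nm3 "
          f"({drop:.1%} less energy per volume)")
```

Because hydrogen carries roughly three times less energy per unit volume than methane, even a modest blend measurably lowers the energy delivered per cubic metre, on top of the material questions studied here.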

Localized deformation

In collaboration with CEA Grenoble (Atomic Energy Commission), the Mines Saint-Étienne researchers have spent three years working on a pipe sample, made of a type of steel used in the natural gas grid, in order to study the effect of the gas on the material.

The researchers observed a damage mechanism, through the “localization of plastic deformation.” In concrete terms, they stretched the sample so as to replicate the mechanical stress that occurs in the field, due in particular to changes in pressure and temperature. Typically, such an operation results in lengthening the material in a diffuse and homogeneous way, up to a certain point. Here, however, under the effect of hydrogen, all the deformation is concentrated in one place, gradually embrittling the material in the same area, until it cracks. Under normal circumstances, a native oxide layer of the material prevents the hydrogen from penetrating inside the structure. But under the action of mechanical stress, the gas takes advantage of the crack to cause localized damage to the structure.

But it must be kept in mind that these findings correspond to laboratory tests. “We’re a long way from industrial situations, which remain complex,” says Frédéric Christien. “It’s obviously not the same scale. And, depending on where it’s located, the steel is not always the same – some have lining while others don’t and it’s the same thing for heat treatments.” Additional studies will therefore be needed to better understand the effect of hydrogen on the entire natural gas transport system.

The production conundrum

Academic research thus provides insights into the effects of hydrogen on metals under certain conditions. But can it go so far as to create a material that is completely insensitive to these effects? “At this point, finding such a dream material seems unrealistic,” says the Mines Saint-Étienne researcher. “But by tinkering with the microstructures or surface treatments, we can hope to significantly increase the durability of the metals used.”

While the hydrogen sector has big ambitions, it must first resolve a number of issues. Transport and storage safety is one such example, along with ongoing issues with optimizing production processes to make them more competitive. Without a robust and safe network, it will be difficult for hydrogen to emerge as the energy carrier of the future it hopes to be.

By Bastien Contreras.

[1] Frédéric Christien is a researcher at the Georges Friedel Laboratory, a joint research unit between CNRS and Mines Saint-Étienne.

Fuel cells in the hydrogen age

Hydrogen-powered fuel cells are recognized as a clean technology because they do not emit carbon dioxide. As part of the energy transition aimed at transforming our modes of energy consumption and production, the fuel cell therefore has a role to play. This article will look at the technologies, applications and perspectives with Christian Beauger, a researcher specialized in material sciences at Mines ParisTech.

How do fuel cells work?

Christian Beauger: Fuel cells produce electricity and heat from hydrogen and oxygen reacting at the heart of the system in electrochemical cells. At the anode, hydrogen is oxidized into protons and electrons (the source of the electric current), while at the cathode, oxygen reacts with the protons and electrons to form water. These two electrodes are separated by an electrolyte that is gas-tight and electronically insulating. As an ion conductor, it transfers the protons from the anode to the cathode. To build a fuel cell, several cells must be assembled into a stack. The nominal voltage depends on their number; their size determines the maximum value of the current produced. The dimensions of the stack (size and number of cells) therefore depend on what the device is to be used for.

Within the stack, the cells are separated by bipolar plates. Their role is to supply each cell with gas, to conduct electrons from one electrode to the other and to cool the system. A fuel cell produces as much electricity as heat, and the temperature of use is limited by the materials used, which vary according to the type of cell.

Finally, the system must be supplied with gas. Since hydrogen barely exists in its pure form in nature, it must be produced and stored in pressurized tanks. The oxygen used comes from the air, supplied to the fuel cell by means of a compressor.
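The sizing rule described above (voltage set by the number of cells, current by their area) can be sketched as follows; the cell voltage and current density are assumed round figures for illustration, not values from the interview:

```python
# Illustrative stack sizing sketch: stack voltage scales with the number
# of cells, current with the active area of each cell.
CELL_VOLTAGE = 0.7       # V per cell under load (assumed typical figure)
CURRENT_DENSITY = 1.0    # A/cm2 (assumed typical PEMFC figure)

def stack_power(n_cells, cell_area_cm2):
    """Electrical power of a stack, in watts."""
    voltage = n_cells * CELL_VOLTAGE            # set by the number of cells
    current = cell_area_cm2 * CURRENT_DENSITY   # set by the cell size
    return voltage * current

# e.g. a 330-cell stack with 520 cm2 cells gives roughly the ~120 kW
# automotive fuel cells mentioned later in the interview
print(f"{stack_power(330, 520) / 1000:.0f} kW")
```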

What is the difference between a fuel cell and a battery?

CB: The main difference between the two is in their design. A battery is an all-in-one technology whose size depends both on the power and the desired autonomy (stored energy). On the contrary, in a fuel cell, the power and energy aspects are separated. The available energy depends on the amount of hydrogen on board, often stored in a tank. A fuel cell’s autonomy therefore varies greatly with the size of its tanks, while its power is linked to the size of the stack. Recharging times are also very different: a hydrogen-powered vehicle can be refueled in minutes, whereas a battery usually takes several hours to charge.
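A quick sketch of this separation between power and energy, using an assumed heating value and an assumed system efficiency (illustrative figures only):

```python
# Illustrative sketch: in a fuel cell system, stored energy (autonomy)
# depends on the tank, not on the stack.
LHV_H2 = 120.0      # MJ/kg, lower heating value of hydrogen (approximate)
EFFICIENCY = 0.5    # assumed net fuel-cell system efficiency

def usable_energy_kwh(h2_mass_kg):
    """Electric energy available from a hydrogen tank, in kWh."""
    return h2_mass_kg * LHV_H2 * EFFICIENCY / 3.6  # 3.6 MJ per kWh

# Doubling the tank doubles the autonomy without touching the stack:
print(usable_energy_kwh(5))   # ~83 kWh from an assumed 5 kg tank
print(usable_energy_kwh(10))  # ~167 kWh
```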

What are the different types of fuel cells?

CB: There are five major types that differ according to the nature of their electrolytes. Alkaline fuel cells use a liquid electrolyte and have an operating temperature of around 70°C. With my team, we are working on low-temperature fuel cells (80°C) whose electrolyte is a polymer membrane; these are called PEMFCs. PAFCs use phosphoric acid and operate between 150°C and 200°C. MCFCs have an electrolyte based on molten carbonates (600-700°C). Finally, those with the highest temperature, up to 1000 °C, use a solid oxide (SOFC), i.e. a ceramic electrolyte.

Their operating principle is the same, but they do not present the same problems. Temperature influences the choice of materials for each technology, but also the context in which they are used. For example, SOFCs take a long time to reach their operating temperature and therefore do not perform optimally at start-up. If an application requires a fast response, low-temperature technologies should be preferred. Overall, PEMFCs are the most developed.

What are the technical challenges facing fuel cell research?

CB: The objective is always to improve performance, i.e. conversion efficiency and lifespan, while reducing costs.

For the PEMFCs we are working on, the amount of platinum required for redox reactions should be reduced. Limiting the degradation of the catalyst support is another challenge. For this purpose, we are developing new supports based on carbon aerogels or doped metal oxides, with better corrosion resistance under operating conditions. They also provide a better supply of gas (hydrogen and especially air) to the electrodes. We have also recently initiated research on platinum-free catalysts in order to completely move away from this expensive material.

Another challenge is cooling. One option to make more efficient use of the heat produced or to reduce cooling constraints in the mobility sector is to be able to increase the operating temperature of PEMFCs. The Achilles heel here is the membrane. With this in mind, we are working on the development of new composite membranes.

The path for SOFCs is reversed. With a much higher operating temperature, there are fewer kinetic losses at the electrodes and therefore no need for expensive catalysts. On the other hand, the heavy constraints of thermomechanical compatibility limit the choice of SOFC constituent materials. The objective of the research is therefore to lower the operating temperature of SOFCs.

Where do we find these fuel cells today?

CB: PEMFCs are the most widespread and marketed, primarily in the mobility sector. The fuel cell vehicles offered by Hyundai or Toyota, for example, carry a fuel cell of about 120 kW. Electricity is generated on board by the fuel cell, hybridized with a battery. The battery preserves the fuel cell during strong accelerations: although the fuel cell is capable of rapidly supplying the required energy, this driving phase accelerates the degradation of its core materials. Fuel cells can also be used as range extenders, as originally developed by SYMBIO for Renault electric vehicles. In this case, hydrogen takes over when the battery weakens. The fuel cell can then recharge the battery or power the electric motor.

Another example of commercialization is micro-cogeneration, which makes it possible to use the electricity and heat produced by the fuel cell. In Japan, the Ene Farm program, launched in 2009, has enabled tens of thousands of residential cogeneration systems to be marketed, built using PEMFC or SOFC stacks with a power output of around 700 W.

You mentioned the deterioration of materials and the preservation of fuel cells in use: what about their lifespan?

CB: Lifespan is mainly impacted by the stability of the materials, especially those found in the electrodes or that make up the membrane. The highly oxidizing environment of the cathode can lead to the degradation of the electrodes and, indirectly, of the membranes. The carbon base of PEMFC electrodes has a particular tendency to oxidize at the cathode. The platinum on the surface can then come away, agglomerate, or migrate towards the membrane to the point of degrading it. Ultimately, the target for vehicles is 5,000 hours of operation, and 50,000 hours for stationary applications. We are currently at about two-thirds of that goal.

Read more on I’MTech: Hydrogen: transport and storage difficulties

What are the prospects for fuel cells now that hydrogen is receiving investment support?

CB: Applications for mobility are still at the heart of the issue. Interest is shifting towards heavy vehicles (buses, trains, light aircraft, ships) for which batteries are insufficient. Alstom’s iLint hydrogen train is being tested in Germany. The aeronautics sector is also conducting tests on small aircraft, but hydrogen-powered wide-body aircraft are not for the immediate future. PEMFCs have the advantage of offering a wide range of power to meet the needs of the various uses from mobile applications (computer, telephone, etc.) to industry usage.

Finally, it is difficult to talk about fuel cells without talking about hydrogen production. It is also often talked about as a means of storing renewable energy. To do this, the reverse process to that used in the fuel cell is required: electrolysis. The water is dissociated into hydrogen and oxygen by applying a voltage between the two electrodes.

Overall, it should be remembered that fuel cell deployment only makes sense if the method of hydrogen production has a low carbon footprint. This is one of the major challenges facing the industry today.

Interview by Anaïs Culot.



Hacked in mid-flight: detecting attacks on UAVs

A UAV (or drone) in flight can fall victim to different types of attacks. At Télécom SudParis, Alexandre Vervisch-Picois is working on a method for detecting attacks that spoof drones concerning their position. This research could be used for both military and civilian applications.

It set out to deliver your package one morning, but the package never arrived. Don’t worry, nothing has happened to your mailman. This is a story about an autonomous drone. These small flying vehicles are capable of following a flight path without a pilot, and are now ahead of the competition in the race for the fastest delivery.

While drone deliveries are technically possible, for now they remain the stuff of science fiction in France. This is due to both legal reasons and certain vulnerabilities in these systems. At Télécom SudParis, Alexandre Vervisch-Picois, a researcher specialized in global navigation satellite systems (GNSS), and his team are working with Thales to detect what are referred to as “spoofing” attacks. In order to prevent these attacks, researchers are studying how they work, with the goal of establishing a protocol to help detect them.

How do you spoof a drone?

In order to move around independently, a drone must know its position and the direction in which it is moving. It therefore receives continuous signals from a satellite constellation which enables it to calculate the coordinates of its position. These can then be used to follow a predefined flight path by moving through a succession of waypoints until it reaches its destination. However, the drone’s heavy reliance on satellite geolocation to find its way makes it vulnerable to cyber attacks. “If we can succeed in getting the drone to believe it is somewhere other than its actual position, then we can indirectly control its flight path,” Alexandre Vervisch-Picois explains. This flaw is all the more critical given that the drones’ GPS receivers can be easily deceived by false signals transmitted at the same frequency as those of the satellites.

This is what the researchers call a spoofing attack. This type of cyber attack is not new. It was used in 2011 by the Iranian army to capture an American stealth drone that flew over its border. The technique involves transmitting a sufficiently powerful false radio frequency to replace the satellite signal picked up by the drone. This spoofing technique doesn’t cancel the drone’s geolocation capacities as a scrambler would. Instead, it forces the GPS receiver to calculate an incorrect position, causing it to deviate from its flight path. “For example, an attacker who succeeds in identifying the next waypoint can then determine a wrong position to be sent in order to lead the drone right to a location where it can be captured,” the researcher explains.

Resetting the clocks

Several techniques can be used to identify these attacks, but they often require additional costs, both in terms of hardware and energy. Through the DIGUE project (French acronym for GNSS Interference Detection for Autonomous UAV)[1] conducted with Thales Six, Alexandre Vervisch-Picois and his team have developed a method for detecting spoofing attempts. “Our approach uses the GPS receivers present in the drones, which makes this solution less expensive,” says the researcher. This is referred to as the “clock bias” method. Time is a key parameter in satellite position calculations. The satellites have their time base and so does the GPS receiver. Therefore, once the GPS receiver has calculated its position, it measures the “bias”, which is the difference between these two time bases. However, when a spoofing attack occurs, the researchers observed variations in this calculation in the form of a jump. The underlying reason for this jump is that the spoofer has its own time base, which is different from that of the satellites. “In practice, it is impossible for the spoofer to use the same clock as a satellite. All it can do is move closer to the time base, but we always notice a jump,” Alexandre Vervisch-Picois explains. To put it simply, the satellites and the spoofer are not set to the same time.
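The jump-detection idea can be sketched in a few lines (hypothetical threshold and data; the actual detection algorithm developed in the DIGUE project is not public):

```python
# Illustrative sketch of clock-bias jump detection. The receiver's clock
# bias normally drifts smoothly; a spoofer imposes its own time base,
# which shows up as a discontinuity between consecutive epochs.
def detect_spoofing(clock_bias_s, jump_threshold_s=1e-6):
    """Return the epoch index at which a clock-bias jump occurs, or None."""
    for i in range(1, len(clock_bias_s)):
        if abs(clock_bias_s[i] - clock_bias_s[i - 1]) > jump_threshold_s:
            return i
    return None

# Smooth drift of ~10 ns per epoch, then a 5 µs jump when a hypothetical
# spoofer takes over at epoch 5:
bias = [1e-8 * i for i in range(5)] + [5e-6 + 1e-8 * i for i in range(5, 10)]
print(detect_spoofing(bias))  # -> 5
```

Since the clock bias is already computed by the receiver at every position fix, this monitoring adds essentially no hardware or processing cost, which is the advantage described below.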

One advantage of this method is that it does not require any additional components or computing power to retrieve the data, since they are already present in the drone. Nor does it require expensive signal processing analyses to study the information received by the drone, which is another defense method used to determine whether or not a signal originated from a satellite.

But couldn’t the attacker work around this problem by synchronizing with the satellites’ time setting? “It is very rare but still possible in the case of a very sophisticated spoofer. This is a classic example of measures and countermeasures, exemplified in interactions between a sword and a shield. In response to an attack, we set up defense systems and the threats become more sophisticated to bypass them,” the researcher explains. This is one reason why research in this area has so much to offer.

After obtaining successful results in the laboratory, the researchers are now planning to develop an algorithm based on time bias monitoring. This could be implemented on a flying drone for a test with real conditions.

What happens after an attack is detected?

Once the attack has been detected, the researchers try to locate the source of the false signal in order to find the attacker. To do so, they propose using a fleet of connected drones. The idea is to program movements within the fleet in order to determine the angle of arrival for the false signal. One of the drones would then send a message to the relevant authorities in order to stop the spoofer. This method is still in its infancy and is expected to be further developed with Thales in a military context with battlefield situations in which the spoofer must be eliminated. But in the context of a parcel delivery, what could be used to defend a single drone? “There could be a protocol involving rising to a higher altitude to move out of the spoofer’s range, which can reach up to several kilometers. But it would certainly not be as easy to escape its influence,” the researcher says. Another alternative could be to use signal processing methods, but these solutions would increase the costs associated with the device. “If too much of the drone’s energy is required for its protection, we need to ask whether this mode of transport is helpful and perhaps consider other more conventional methods, which are less burdensome to implement,” says Alexandre Vervisch-Picois.
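The angle-of-arrival idea can be illustrated with a simple two-drone geometry (a hypothetical sketch, not the method being developed with Thales): each drone reports a bearing towards the false signal, and the emitter lies where the two bearing lines intersect.

```python
# Illustrative triangulation sketch: intersect two bearing rays measured
# by two drones at known positions (angles in radians, positions in metres).
import math

def locate_emitter(p1, bearing1, p2, bearing2):
    """Return the intersection of two bearing rays, or None if parallel."""
    # Each ray: p + t * (cos b, sin b). Solve the 2x2 system for t.
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-12:
        return None  # parallel bearings: no fix
    rhs = (p2[0] - p1[0], p2[1] - p1[1])
    t = (rhs[0] * (-d2[1]) - rhs[1] * (-d2[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two drones 100 m apart both "hear" a spoofer located at (50, 50):
print(locate_emitter((0, 0), math.atan2(50, 50),
                     (100, 0), math.atan2(50, -50)))  # ~ (50.0, 50.0)
```

In practice the fleet would maneuver to take bearings from several baselines, since a single pair of nearly parallel bearings gives a poor fix.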

[1] Victor Truong’s thesis research

Anaïs Culot

The Alicem app: a controversial digital authentication system

Laura Draetta, Télécom Paris – Institut Mines-Télécom and Valérie Fernandez, Télécom Paris – Institut Mines-Télécom

Some digital innovations, although considered to be of general interest, are met with distrust. A responsible innovation approach could anticipate and prevent such confidence issues.

“Alicem” is a case in point. Alicem is a smartphone app developed by the State to offer the French people a national identity solution for online administrative procedures. It uses face recognition as a technological solution to activate a user account and allow the person to prove their digital identity in a secure way.

After its authorization by decree of May 13, 2019 and the launch of the experimentation of a prototype among a group of selected users a few months later, Alicem was due to be released for the general public by the end of 2019.

However, in July of the same year, La Quadrature du Net, an association for the defense of rights and freedoms on the Internet, filed an appeal before the Council of State to have the decree authorizing the system annulled. In October 2019, the information was relayed in the general press and the app was brought to the attention of the general public. Since then, Alicem has been at the center of a public controversy surrounding its technological qualities, potential misuses and regulation, leading to it being put on hold to dispel the uncertainties.

At the start of the summer of 2020, the State announced the release of Alicem for the end of the autumn, more than a year later than planned in the initial roadmap. Citing the controversy on the use of facial recognition in the app, certain media actors argued that it was still not ready: it was undergoing further ergonomic and IT security improvements and a call to tender was to be launched to build “a more universal and inclusive offer” incorporating, among other things, alternative activation mechanisms to facial recognition.

Controversy as a form of “informal” technology assessment

The case of Alicem is similar to that of other controversial technological innovations pushed by the State such as the Linky meters, 5G and the StopCovid app, and leads us to consider controversy as a form of informal technology assessment that defies the formal techno-scientific assessments that public decisions are based on. This also raises the issue of a responsible innovation approach.

Several methods have been developed to evaluate technological innovations and their potential effects. In France, the Technology Assessment – a form of political research that examines the short- and long-term consequences of innovation – is commonly used by public actors when it comes to technological decisions.

In this assessment method, the evaluation is entrusted to scientific experts and disseminated among the general public at the launch of the technology. The biggest challenge with this method is supporting the development of public policies while managing the uncertainties associated with any technological innovation through evidence-based rationality. It must also “educate” the public, whose mistrust of certain innovations may be linked to a lack of information.

The approach is perfectly viable for informing decision-making when there is no controversy or little mobilization of opponents. It is less pertinent, however, when the technology is controversial. A technological assessment focused exclusively on scholarly expertise runs the risk of failing to take account of all the social, ethical and political concerns surrounding the innovation, and thus not being able to “rationalize” the public debate.

Participation as a pillar of responsible innovation

Citizen participation in technology assessment – whether to generate knowledge, express opinions or contribute to the creation and governance of a project – is a key element of responsible innovation.

Participation may be seen as a strategic tool for “taming” opponents or skeptics by getting them on board or as a technical democracy tool that gives voice to ordinary citizens in expert debates, but it is more fundamentally a means of identifying social needs and challenges upstream in order to proactively take them into account in the development phase of innovations.

In all cases, it relies on work carried out beforehand to identify the relevant audiences (users, consumers, affected citizens etc.) and choose their spokespersons. The definition of the problem, and therefore the framework of the response, depends on this identification. The case of Linky meters is an emblematic example: anti-radiation associations were not included in the discussions prior to deployment because they were not deemed legitimate to represent consumers; consequently, the figure of the “affected citizen” was nowhere to be seen during the discussions on institutional validation but is now at the center of the controversy.

Experimentation in the field to define problems more effectively

Responsible innovation can also be characterized by a culture of experimentation. During experimentation in the field, innovations are confronted with a variety of users and undesired effects are revealed for the first time.

However, the question of experimentation is too often limited to testing technical aspects. In a responsible innovation approach, experimentation is the place where different frameworks are defined, through questions from users and non-users, and where tensions between technical efficiency and social legitimacy emerge.

If we consider the Alicem case through the prism of this paradigm, we are reminded that technological innovation processes carried out in a confined manner – first through the creation of devices within the ecosystem of paying clients and designers, then through the experimentation of the use of artifacts already considered stable – inevitably lead to acceptability problems. Launching a technological innovation without user participation in its development undoubtedly makes the process faster, but may come at the cost of its legitimization and even lead to a loss of confidence in its promoters.

In the case of Alicem, the experiments carried out among “friends and family”, with the aim of optimizing the user experience, could be a case in point. This experimentation was focused more on improving the technical qualities of the app than on taking account of its socio-political dimensions (the risk of infringing upon individual freedoms, the loss of anonymity, etc.). As a result, when the matter was reported in the media, it was presented through an amalgamation of face recognition use cases and anxiety-provoking arguments (“surveillance”, “freedom-killing technology”, “China”, “social credit”, etc.), without, however, presenting the reality of more common uses of facial recognition that carry the same risks as those being questioned.

These problems of acceptability encountered by Alicem are not circumstantial ones unique to a specific technological innovation, but must be understood as structural markers of contemporary social functioning. For, although the “unacceptability” of this emerging technology is a threat to its promoters and a hindrance to its adoption and diffusion, it is above all indicative of a lack of confidence in the State that outweighs the intrinsic qualities of the innovation itself.

This text presents the opinions stated by the researchers Laura Draetta and Valérie Fernandez during their presentation at the Information Mission on Digital Identity of the National Assembly in December 2019. It is based on the case of the biometric authentication app Alicem, which sparked controversy in the public media sphere from the first experiments.

Laura Draetta, a Lecturer in Sociology, joint holder of the Responsibility for Digital Identity Chair, Research Fellow Center for Science, Technology, Medicine & Society, University of California, Berkeley, Télécom Paris – Institut Mines-Télécom and Valérie Fernandez, Professor of Economics, Holder of the Responsibility for Digital Identity chair, Télécom Paris – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here.

 


Turning exhaust gases into electricity, an innovative prototype

Jean-Paul Viricelle, a researcher in process engineering at Mines Saint-Étienne, has created a small high-temperature fuel cell composed of a single chamber. Placed at the exhaust outlet of the combustion process, this prototype could be used to convert unburned gas into energy.

Following the government’s recent announcements about the hydrogen industry, fuel cells are in the spotlight in the energy sector. Their promise is that they could help decarbonize industry and transportation. While hydrogen fuel cells are the stars of the moment, research on technologies of the future is also playing an important role. At Mines Saint-Étienne[1], Jean-Paul Viricelle has developed a new high-temperature fuel cell – over 600°C – called a mono-chamber. It is unique in that it can be fueled not only by hydrogen, but by a mixture of more complex gases, representative of the real mixtures at the exhaust outlet of a combustion process. “The idea is not to compete with a conventional hydrogen fuel cell, since we’ll never reach the same yield, but to recover energy from the mixtures of unburned gas at the outlet of any combustion process,” explains the researcher, who specializes in developing chemical sensors.

This would help reduce the amount of gaseous waste resulting from combustion. These compounds also contribute to air pollution. For example, unburned hydrocarbons could be recovered, cleaned and oxidized to generate electricity. Why hydrocarbons? Because they are composed of carbon and hydrogen atoms, the fuel of choice for conventional fuel cells. One of the most advanced studies on the concept of mono-chamber cells was published in 2007 by Japanese researchers who recovered a gaseous mixture at the exhaust outlet of a scooter engine. Even though it was not very powerful, the experiment proved the feasibility of such a system. Jean-Paul Viricelle has created a prototype that seeks to improve this concept. It uses a synthetic gaseous mixture, which is closer to the real composition of exhaust gases. It also optimizes the fuel cell’s architecture and materials to enhance its performance.

The inner workings of the cell

A fuel cell consists of three components: two electrodes hermetically separated by an electrolyte. It is fueled by a gas (hydrogen) and air. Once inside the cell, an electrochemical reaction occurs at each electrode, resulting in an exchange of electrons that generates the electricity supplied by the cell. The architecture of conventional cells is often constrained, which prevents them from being reduced in size. To overcome these obstacles, Jean-Paul Viricelle has opted for a mono-chamber cell, composed of a single compartment. In this concept, hydrogen cannot be used directly as a fuel, since it is too reactive when it comes into contact with air and could blow up the device! That is why the researcher fuels it with a gaseous mixture of hydrocarbons and air. What does this new structure change compared to a conventional cell? “The electrolyte no longer acts like a seal, as it does in a conventional cell, and serves only as an ionic conductor. But the cathode and the anode come into contact with all the reactants. So they must be perfectly selective so that they only react with one of the gases,” explains Jean-Paul Viricelle.
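The link between the electron exchange described above and the voltage a cell can deliver can be illustrated with a textbook back-of-the-envelope calculation. The sketch below is not from the article: it uses the standard hydrogen reaction (H2 + ½O2 → H2O) and the relation E = -ΔG/(nF) purely to show where a fuel cell’s roughly one-volt output comes from.

```python
# Illustrative sketch: ideal (reversible) cell voltage from the Gibbs free
# energy of the reaction, E = -dG / (n * F). Textbook values for
# H2 + 1/2 O2 -> H2O(l) at 25 °C; not specific to the mono-chamber cell.

F = 96485.0      # Faraday constant, C per mol of electrons
n = 2            # electrons exchanged per molecule of H2
dG = -237.1e3    # Gibbs free energy of reaction, J/mol (25 °C, liquid water)

E_rev = -dG / (n * F)   # ideal open-circuit voltage, in volts
print(f"Ideal cell voltage: {E_rev:.2f} V")  # ≈ 1.23 V
```

In a real cell, and especially at the high operating temperatures and complex gas mixtures discussed here, the usable voltage is lower than this ideal figure.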

In practice, a synthetic exhaust gas is sent over the cell. The electrochemical reaction that follows is standard: an oxidation at the anode and a reduction at the cathode generate electricity. This cell works at a high temperature (over 600°C), a condition that is essential for the electrolyte to transport the ions. And, since the gas mixture that fuels the cell gives off heat, a small enough cell could be thermally self-sustaining: once initiated by an external heat source, it would become self-sufficient in terms of heat. Moreover, laboratory tests have shown a power density equivalent to that of conventional cells. However, significant gas flows must be sent over this 4 cm² demonstrator, which results in a low energy conversion rate. Stacking mono-chamber cells, rather than using a single cell as in this prototype, could help resolve this problem.

A wide range of applications

For now, markets are more conducive to low-temperature fuel cells in order to prevent the devices from overheating. Nevertheless, the concept developed by Jean-Paul Viricelle presents a number of benefits. It opens the door to new geometries for placing two electrodes on the same surface. Such design flexibility facilitates a move towards miniaturization. Small high-temperature fuel cells could, for example, fuel industrial microreactors. The cells could also be integrated at the exhaust outlet of an engine to convert unburned hydrocarbons into electricity, as in the Japanese experiment. In this case, the energy recovered would power electronic devices and other sensors within the vehicle. More broadly, this energy conversion device could help respond to efficiency issues for any combustion system, including power plants. Despite all this, mono-chamber fuel cells remain concepts that have not made their way out of research laboratories. Why is this? Up to now, there has been a greater focus on hydrogen production than on energy recovery.

By Anaïs Culot.

[1] Jean-Paul Viricelle is the director of the Georges Friedel Laboratory, a joint research unit between Mines Saint-Étienne and CNRS.

Étienne Perret, IMT-Académie des sciences Young Scientist prize

What if barcodes disappeared from our supermarket items? Étienne Perret, a researcher in radio-frequency electronics at Grenoble INP, works on identification technologies. His work over recent years has focused on the development of RFID without electronic components, commonly known as chipless RFID. The technology aims to offer some of the advantages of classical RFID but at a similar cost to barcodes, which are more commonly used in the identification of objects. This research is very promising for use in product traceability and has earned Étienne Perret the 2020 IMT-Académie des sciences Young Scientist Prize.

Your work focuses on identification technologies: what is it exactly?

Étienne Perret: The identification technology most commonly known to the general public is the barcode. It is on every item we buy. When we go to the checkout, we know that the barcode is used to identify objects. Studies estimate that 70% of products manufactured across the world have a barcode, making it the most widely used identification technique. However, it is not the only one: there are other technologies, such as RFID (radio frequency identification). It is what is used on contactless bus tickets, ski passes, entry badges for certain buildings, etc. It is a little more mysterious; it’s harder to see what’s behind it all. That said, the idea is the same regardless of the technology: to identify an item at short or medium range.

What are the current challenges surrounding these identification technologies?

EP: In lots of big companies, Amazon in particular, object traceability is essential. They often need to be able to track a product from the different stages of manufacturing right through to its recycling. Each product must therefore be quickly identifiable. However, both of the current technologies I mentioned have limitations as well as advantages. Barcodes are inexpensive and can be printed easily, but they store very little information and often require a human operator to align the scanner with the code to make sure it is read correctly. What is more, barcodes have to be visible in order to be read, which has an effect on the integrity of the product to be traced.

RFID, on the other hand, uses radio waves that pass through the material, allowing us to identify an object already packaged in a box from several meters away. However, this technology is costly. Although an RFID label only costs a few cents, it is much more expensive than a barcode. For a company that has to label millions of products a year, the difference is huge, in particular when it comes to labeling products that are worth no more than a few cents themselves.

What is the goal of your research in this context?

EP: My aim is to propose a solution in between these two technologies. At the heart of an RFID tag there is a chip that stores information, like a microprocessor. The idea I’m pursuing with my colleagues at Grenoble INP is to get rid of this chip, for economic and environmental reasons. The other advantage that we want to keep is the barcode’s ease of printing. To do so, we base our work on an unusual approach combining conductive ink and geometric labels.

How does this approach work?  

EP: The idea is that each label has a unique geometric form printed in conductive ink. Its shape means that the label reflects radio frequency waves in a unique way. After that, it is a bit like a radar approach: a transmitter emits a wave, which is reflected by its environment, and the label returns the signal with a unique signature indicating its presence. Thanks to a post-processing stage, we can then recover this signature containing the information on the object.
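The post-processing stage described above can be pictured as a matching problem: compare the measured signature against a database of known label signatures and keep the best match. The sketch below is a hypothetical illustration with made-up amplitude vectors, using normalized correlation as the similarity measure; it is not the actual algorithm used by the Grenoble INP team.

```python
import numpy as np

# Hypothetical sketch of signature matching: each chipless label is assumed
# to have a stored "signature" (reflection amplitudes sampled at a set of
# frequencies); a noisy measurement is matched by normalized correlation.

def identify(measured, database):
    """Return the label ID whose stored signature best matches `measured`."""
    best_id, best_score = None, -1.0
    m = (measured - measured.mean()) / measured.std()
    for label_id, sig in database.items():
        s = (sig - sig.mean()) / sig.std()
        score = float(np.dot(m, s)) / len(m)   # normalized correlation
        if score > best_score:
            best_id, best_score = label_id, score
    return best_id

rng = np.random.default_rng(0)
database = {f"label-{i}": rng.normal(size=64) for i in range(5)}
measured = database["label-3"] + 0.1 * rng.normal(size=64)  # noisy read
print(identify(measured, database))
```

In practice the difficulty lies upstream of this step: extracting a clean signature from a weak backscattered signal in a cluttered radio environment.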

Why is this chipless RFID technology so promising?

EP: Economically speaking, the solution would be much more advantageous than an RFID chip and could even rival the cost of a barcode. Compared to the latter, however, it offers two major advantages. First of all, this technology can be read through materials, like RFID. Secondly, it requires a simpler process to read the label. When you go through the supermarket checkout, the product has to be held at a certain angle so that the code faces the laser scanner. That is another problem with barcodes: a human operator is often required to carry out the identification, and while it is possible to do without one, that requires very expensive automated systems. Chipless RFID technology is not perfect, however, and certain limitations must be accepted, such as the reading distance, which is not the same as for conventional RFID, which can reach several meters using ultra-high-frequency waves.

One of the other advantages of RFID is the ability to reprogram it: the information contained in an RFID tag can be changed. Is this possible with the chipless RFID technology you are developing?

EP: That is indeed one of the current research projects. In the framework of the ERC ScattererID project, we are seeking to develop the concept of rewritable chipless labels. The difficulty is obviously that we can’t use electronic components in the label. Instead, we’re basing our approach on CBRAM (conductive-bridging RAM), which is used for specific types of memories. It works by stacking three layers: metal, dielectric material, metal. Imagine a label printed locally with this type of stack. By applying a voltage to the printed pattern, we can modify its properties and thus change the information contained in the label.

Does this research on chipless RFID technology have other applications than product traceability and identification?

EP: Another line of research we are looking into is using these chipless labels as sensors. We have shown that we can collect and report information on physical quantities such as temperature and humidity. For temperature, the principle is based on measuring the thermal expansion of the materials that make up the label. The material “expands” by a few tens of microns, the label’s radiofrequency signature changes, and we are able to detect these very subtle variations. In another area, this level of precision, obtained wirelessly using radio waves, allows the label to be located and its movements detected. Based on this principle, we are currently also studying gesture recognition, to allow us to communicate with the reader through the label’s movements.
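A rough order-of-magnitude sketch shows why micron-scale expansion is detectable at radio frequencies. The numbers below are illustrative assumptions, not values from the interview: a 25 mm copper scatterer treated as a simple half-wavelength resonator in air, heated by 50 K.

```python
# Back-of-the-envelope sketch of temperature sensing via thermal expansion:
# a resonant scatterer of length L resonates near f = c / (2 * L) in air
# (half-wavelength model); heating stretches L by alpha * L * dT, which
# shifts the resonance down by about the same relative amount.
# All numbers are illustrative assumptions, not from the article.

c = 3.0e8        # speed of light, m/s
L = 0.025        # assumed scatterer length: 25 mm
alpha = 17e-6    # thermal expansion coefficient of copper, 1/K
dT = 50.0        # assumed temperature rise, K

f0 = c / (2 * L)                      # initial resonance: 6 GHz
f1 = c / (2 * L * (1 + alpha * dT))   # resonance after expansion
print(f"resonance shift: {(f0 - f1) / 1e6:.2f} MHz")  # a few MHz
```

A shift of a few megahertz on a 6 GHz resonance is a relative change below 0.1%, which gives a sense of the measurement precision the reader must achieve.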

The transfer of this technology to industry seems inevitable: where do you stand on this point?

EP: A recent project with an industrial actor led to the creation of the start-up Idyllic Technology, which aims to market chipless RFID technology to industrial firms. We expect to start presenting our innovations to companies during the course of next year. At present, it is still difficult for us to say where this technology will be used. There’s a whole economic dimension which comes into play, which will be decisive in its adoption. What I can say, though, is that I could easily see this solution being used in places where the barcode isn’t used due to its limitations, but where RFID is too expensive. There’s a place between the two, but it’s still too early to say exactly where.