
What is a lithium-ion battery?

The lithium-ion battery is one of the best-sellers of recent decades in microelectronics. It is present in most of the devices we use in our daily lives, from our mobile phones to electric cars. The 2019 Nobel Prize in Chemistry was awarded to John Goodenough, Stanley Whittingham, and Akira Yoshino, in recognition of their initial research that led to its development. In this new episode of our “What’s?” series, Thierry Djenizian explains the success of this component. Djenizian is a researcher in microelectronics at Mines Saint-Étienne and is working on the development of new generations of lithium-ion batteries.

 

Why is the lithium-ion battery so widely used?

Thierry Djenizian: It offers a very good balance between storage capacity and power output. To understand this, imagine two containers: a glass and a large bottle with a small neck. The glass contains little water but can be emptied very quickly. The bottle contains a lot of water but will be slower to empty. The electrons in a battery behave like the water in the containers. The glass is like a high-power battery with a low storage capacity, and the bottle a low-power battery with a high storage capacity. Simply put, the lithium-ion battery is like a bottle but with a wide neck.
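To put the analogy in slightly more quantitative terms (a textbook simplification that ignores losses and voltage effects), the time a battery can sustain a given power draw is simply its stored energy divided by that power:

```latex
% Idealized discharge time: stored energy E divided by power drawn P (losses neglected)
t \approx \frac{E}{P}
\qquad \text{e.g. } E = 10\ \text{Wh},\ P = 5\ \text{W} \;\Rightarrow\; t \approx 2\ \text{h}
```

A “glass-like” battery has a small E but tolerates a large P; a “bottle-like” battery is the opposite; the lithium-ion battery scores well on both counts.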

How does a lithium-ion battery work?

TD: The battery consists of two electrodes separated by a liquid called the electrolyte. One of the two electrodes is an alloy containing lithium. When you connect a device to a charged battery, the lithium will spontaneously oxidize and release electrons – lithium is the chemical element that releases electrons most easily. The electrical current is produced by the electrons flowing between the two electrodes via an electrical circuit, while the lithium ions from the oxidation reaction migrate through the electrolyte into the second electrode.

The lithium ions are thus stored in the second electrode until there is no more available space, or until the first electrode has released all its lithium atoms. The battery is then discharged, and you simply apply a current to force the reverse chemical reactions and make the ions migrate in the other direction, back to their original position. This is how lithium-ion technology works: the lithium ions are inserted into and extracted from the electrodes reversibly, depending on whether the battery is charging or discharging.
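As a concrete illustration, the discharge of a classic graphite/cobalt-oxide cell (the textbook chemistry closest to the design discussed below, not necessarily the exact alloy electrode described above) can be written as two half-reactions:

```latex
% Discharge of a graphite / lithium cobalt oxide cell (illustrative textbook example)
\text{negative electrode:}\quad \mathrm{LiC_6 \;\longrightarrow\; C_6 + Li^+ + e^-}
\text{positive electrode:}\quad \mathrm{CoO_2 + Li^+ + e^- \;\longrightarrow\; LiCoO_2}
```

Charging simply drives both reactions in reverse, which is the reversible insertion and extraction described above.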

What were the major milestones in the development of the lithium-ion battery?

TD: Whittingham discovered a high-potential material composed of titanium and sulfur capable of reacting with lithium reversibly, then Goodenough proposed the use of metal oxides. Yoshino marketed the first lithium-ion battery using graphite and a metal oxide as electrodes, which considerably reduced the size of the batteries.

What are the current scientific issues surrounding lithium-ion technology?

TD: One of the main trends is to replace the liquid electrolyte with a solid electrolyte. It is best to avoid the presence of flammable liquids, which also present risks of leakage, particularly in electronic devices. If the container is pierced, this can have irreversible consequences on the surrounding components. This is particularly true for sensors used in medical applications in contact with the skin. Recently, for example, we developed a connected ocular lens with our colleagues from IMT Atlantique. The lithium-ion battery we used included a solid polymer-based electrolyte because it would be unacceptable for the electrolyte to come into contact with the eye in the event of a problem. Solid electrolytes are not new. What is new is the research work to optimize them and make them compatible with what is expected of lithium-ion batteries today.

Are we already working on replacing the lithium-ion battery?

TD: Another promising trend is to replace the lithium with sodium. The two elements belong to the same family and have very similar properties. The difference is that lithium is extracted from mines at a very high environmental and social cost, and lithium resources are limited. Although lithium-ion batteries can reduce the use of fossil fuels, their appeal is undermined if extracting the lithium leads to other environmental disasters. Sodium, by contrast, is naturally present in sea salt. It is therefore a virtually unlimited resource that can be extracted with a considerably lower impact.

Can we already do better than the lithium-ion battery for certain applications?

TD: It’s hard to say. We have to change the way we think about our relationship to energy. We used to solve everything with thermal energy. We cannot use the same thinking for electric batteries. For example, we currently use lithium-ion button cell batteries for the internal clocks of our computers. For this very low energy consumption, a button cell has a life span of several hundred years, while the computer will probably be replaced in ten years. A 1mm² battery may be sufficient. The size of energy storage devices needs to be adjusted to suit our needs.

Read on I’MTech: Towards a new generation of lithium batteries?

We also have to understand the characteristics we need. For some uses, a lithium-ion battery will be the most appropriate. For others, a battery with a greater storage capacity but a much lower output may be more suitable. For still others, it will be the opposite. When you use a drill, for example, it doesn’t take four hours to drill a hole, nor do you need a battery that will remain charged for several days. You want a lot of power, but you don’t need a lot of autonomy. “Doing better” than the lithium-ion battery perhaps simply means doing things differently.

What does it mean to you to have a Nobel Prize awarded to a technology that is at the heart of your research?

TD:  They are names that we often mention in our scientific publications, because they are the pioneers of the technologies we are working on. But beyond that, it is great to see a Nobel Prize awarded to research that means something to the general public. Everyone uses lithium-ion batteries on a daily basis, and people recognize the importance of this technology. It is nice to know that this Nobel Prize in Chemistry is understood by many people.


Optics as a key to understanding rogue waves

Rogue waves are powerful waves that erupt suddenly. They are rare, but destructive. Above all, they are unpredictable. Surprisingly, researchers have been able to better understand these fascinating waves by studying similar phenomena in fiber optic lasers.

 

Before scientists began measuring and observing them, rogue waves had long been dismissed as the stuff of legend. They can reach a height of 30 meters, forming a wall of water in front of ships. French explorer Dumont d’Urville faced one such wave in the southern hemisphere. More recently, in 1995, the captain of the transatlantic liner Queen Elizabeth 2 described a wave as a “solid wall of water”, adding that he felt he was sailing the ship “straight into the cliffs of Dover”. These waves are also a major cause of containers lost at sea.

An ingenious idea

But the rare and unpredictable nature of these mysterious waves makes them difficult to study and nearly impossible to predict. Tests have been conducted in specially designed pools, but the resulting waves are much smaller and do not sufficiently reflect reality. Theoretical models, on the other hand, are not accurate enough.

However, in 2007, Daniel Solli and his team from the University of California had the ingenious idea of comparing the propagation of ocean waves with that of light pulses in optical fibers. Both are wave phenomena, subject to the same laws of physics. And it is much easier to study light pulses, since all the parameters can be easily controlled: wavelength, intensity, the type of fiber used, etc. Furthermore, thousands of pulses can be studied every second, making it possible to observe rare events.

Real time

Now, a group of researchers including Arnaud Mussot from IMT Lille Douai has published an article on this subject in the scientific journal Nature Physics, describing the research on the analogies between oceanography and optics to better understand rogue waves.

“Many experiments have been conducted in optics,” Arnaud Mussot explains. “For these experiments, we sent laser pulses into optical fibers and we analyzed the speed of these pulses at the output of the fiber. These observations were made in real time, over extremely short periods of time: a few tens of femtoseconds, which is much less than a billionth of a millisecond.”

These experiments showed that there are several ways to create rogue waves. One of the most effective methods is to make several waves crash into each other. But only wave collisions from certain angles, directions and amplitudes generate rogue wave phenomena. However, these experiments do not provide all the answers, since some of the more powerful rogue waves predicted by theorists have not yet been observed.

Predicting rogue waves

In the long term, a better understanding of rogue waves should make it possible to better predict them and prevent certain accidents. “Certain companies are currently developing radar that can map the state of the sea and be taken on board a ship,” Arnaud Mussot explains. “This data is fed into a computing program that predicts what will happen at sea over the next few minutes. The ship can then modify its course to avoid a rogue wave or mitigate its effects. The more we improve our knowledge and calculations, the further in advance we will be able to predict these waves.”

This research also benefits other fields, such as optics. It provides a better understanding of how high-power lasers start up, as well as of certain processes performed with these lasers, whose characteristics change as the laser power increases.

Article written in French by Cécile Michaut for I’MTech.

 


Nuclear: a multitude of scenarios to help imagine the future of the industry

Article written in partnership with The Conversation.
By Stéphanie Tillement and Nicolas Thiolliere, IMT Atlantique.


Nuclear energy plays a very important role in France – where 75% of the country’s electricity is produced using this energy – and raises crucial questions concerning both its role in the future electricity mix and methods for managing the associated radioactive materials and waste.

But the discussions held as part of the recent Multiannual Energy Plan (PPE) gave little consideration to these issues.  The public debate on radioactive material and waste management set to launch on Wednesday 17 April may provide an opportunity to delve deeper into these issues.

Fifty-eight pressurized water reactors (PWR), referred to as “second generation,” are currently in operation in France and are responsible for producing the majority of the country’s electricity. Nineteen of these reactors were put into operation before 1981 and will reach their design service life of 40 years over the next three years.

The future of the nuclear industry represents a crucial question, which will likely have a lasting effect on all industry stakeholders  – electricity producers, distribution system operators, energy providers and consumers. This means that all French citizens will be affected.

Imagining the future of the nuclear industry

Investment decisions regarding the electricity sector can establish commitments for the country that will last tens, or even hundreds of years, and this future clearly remains uncertain. Against this backdrop, forward-looking approaches can help plan for the future and identify, even partially, the possible consequences of the choices we make today.

Such an approach involves first identifying and then analyzing the different possible paths for the future in order to assess them and possibly rank them.

The future of the nuclear industry includes a relatively wide range of possibilities: it varies according to the evolution of installed capacity and the pace with which new technologies (EPR technology, referred to as “third generation,” or FNR – fast neutron reactor – technology, referred to as “fourth generation”) are deployed.

Given the great degree of uncertainty surrounding the future of the nuclear industry, research relies on simulation tools; the “electronuclear scenario” is one of the main methods. Little known to the general public, it differs from the energy scenarios used to inform discussions for the Multiannual Energy Plan (PPE). The nuclear scenario is a basic building block of the energy scenario and is based on a detailed description of the nuclear facilities and the physics that governs them. In practice, energy and nuclear scenarios can complement one another, with the outcomes of the former serving as hypotheses for the latter, and the results of the latter making it possible to analyze in greater detail the different paths set out by the former.

The aim of studying the nuclear scenario is to analyze one or several development paths for nuclear facilities from a materials balance perspective, meaning tracking the evolution of radioactive materials (uranium, plutonium, fission products etc.) in nuclear power plants. In general, it relies on a complex modeling tool that manages a range of scales, both spatial (from elementary particle to nuclear power plants) and temporal (from less than a microsecond for certain nuclear reactions to millions of years for certain types of nuclear waste).

Based on a precise definition of a power plant and its evolution over time, the simulation code calculates the evolution of the mass of each element of interest, radioactive or otherwise, across all nuclear facilities. This data can then serve as the basis for producing more useful data concerning the management of resources and recycled materials, radiation protection etc.
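To make the idea of a materials balance concrete, here is a deliberately naive sketch (purely illustrative: a single tracked quantity, a constant fleet and an invented discharge rate; real scenario codes such as CLASS model isotopes, facilities and irradiation physics in far more detail):

```python
# Toy "open cycle" materials balance: track one quantity (a plutonium stockpile)
# for a constant fleet of reactors. All figures except the fleet size are
# illustrative placeholders, not results from an actual scenario study.
fleet_size = 58                  # French PWR fleet size mentioned in the article
pu_per_reactor_per_year = 0.2    # tonnes/year, hypothetical value
stockpile = 350.0                # tonnes, order of magnitude quoted elsewhere in this dossier

trajectory = {}
for year in range(2020, 2101):
    trajectory[year] = stockpile
    stockpile += fleet_size * pu_per_reactor_per_year  # no recycling in this toy model

print(f"Toy stockpile in 2100: {trajectory[2100]:.0f} tonnes")
```

A real study replaces this single constant with coupled equations for each isotope and each facility, which is precisely what makes dedicated simulation tools necessary.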

Emergence of new players

Long reserved for nuclear institutions and operators, the scenario-building process has gradually opened up to academic researchers, driven largely by the Bataille Law of 1991 and the Birraux Law of 2006 concerning radioactive waste management. These laws resulted in a greater diversity of players involved in producing, assessing and using scenarios.

In addition to the traditional players (EDF and CEA in particular), the CNRS and academic researchers (primarily physicists and more recently economists) and representatives of civil society have taken on these issues by producing their own scenarios.

There have been significant developments on the user side as well. Whereas prior to the Bataille and Birraux Laws, nuclear issues were debated almost exclusively between nuclear operators and the executive branch of the French government, giving rise to the image of issues confined to “ministerial secrecy,” these laws have allowed for these issues to be addressed in more public and open forums, in particular in the academic and legislative spheres.

They also created National Assessment Committees, composed of twelve members selected based on proposals by the Académie des Sciences, the Académie des Sciences Morales et Politiques, and the  French Parliamentary Office for the Evaluation of Scientific and Technological Choices. The studies of scenarios produced by institutional, industrial and academic players are assessed by these committees and outlined in annual public reports sent to the members of the French parliament.

Opening up this process to a wider range of players has had an impact on the scenario-building practices, as it has led to a greater diversity of scenarios and hypotheses on which they are based.

A variety of scenarios

The majority of the scenarios developed by nuclear institutions and industry players are “realistic” proposals according to these same parties: scenarios based on feedback from the nuclear industry.  They rely on technology already developed or in use and draw primarily on hypotheses supporting the continued use of nuclear energy, with an unchanged installed capacity.

The scenarios proposed by the research world tend to give less consideration to the obligation of “industrial realism,” and explore futures that disrupt the current system. Examples include research carried out on transmutation in ADS (accelerator-driven systems), design studies for MSR (molten salt reactors), which are sometimes described as “exotic” reactors, and studies on the thorium cycle. A recent study also analyzed the impact of recycling plutonium in current-technology reactors as part of a plan to significantly reduce, or even eliminate, the share of nuclear energy by 2050.

These examples show that academic scenarios are often developed with the aim of deconstructing the dominant discourse in order to foster debate.

Electronuclear scenarios clearly act as “boundary objects.” They provide an opportunity to bring together different communities of stakeholders, with various knowledge and different, and sometimes opposing, interests in order to compare their visions for the future, organize their strategies and even cooperate. As such, they help widen the “scope of possibilities” and foster innovation through the greater diversity of scenarios produced.

Given the inherent uncertainties of the nuclear world, this diversity also appears to be a key to ensuring more robust and reliable scenarios, since discussing these scenarios forces stakeholders to justify the hypotheses, tools and criteria used to produce them, which are often still implicit.

Debating scenarios

However, determining how these various scenarios can be used to support “informed” decisions remains controversial.

The complexity of the system to be modeled requires simplifications, thus giving rise to biases which are difficult to quantify in the output data. These biases affect both technical and economic data and are often rightly used to dispute the results of scenarios and the recommendations they may support.

How, then, can we ensure that the scenarios produced are robust? There are two opposing strategies. Should we try to build simple or simplified scenarios in an attempt to make them understandable to the general public (and especially to politicians), at the risk of neglecting important variables and leading to “biased” decisions? Or should we produce scenarios that are complex, but more faithful to the processes and uncertainties involved, at the risk of making them largely “opaque” to decision-makers and, more broadly, to the citizens invited to take part in the public debate?

As of today, these scenarios are too little debated outside of expert circles. But let us hope that the public debate on radioactive waste management will provide an opportunity to bring these issues further into the “scope of democracy,” in the words of Christian Bataille.


Stéphanie Tillement, Sociologist, IMT Atlantique – Institut Mines-Télécom and Nicolas Thiolliere, Associate Professor in reactor physics, IMT Atlantique – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


KM3NeT: Searching the Depths of the Sea for Elusive Neutrinos

The sun alone produces more than 64 billion neutrinos per second and per cm² that pass right through the Earth. These elementary particles of matter are everywhere, yet they remain almost entirely elusive. The key word is almost… The European infrastructure KM3NeT, currently being installed in the depths of the Mediterranean Sea, has been designed to detect the extremely faint light generated by neutrino interactions in the water. Researcher Richard Dallier from IMT Atlantique offers insight into the major scientific and technical challenge of searching for neutrinos.

 

These “little neutral particles” are among the most mysterious in the universe. “Neutrinos have no electric charge, very low mass and move at a speed close to that of light. They are hard to study because they are extremely difficult to detect,” explains Richard Dallier, member of the KM3NeT team from the Neutrino group at Subatech laboratory[1]. “They interact so little with matter that only one particle out of 100 billion encounters an atom!”

Although their existence was first postulated in the 1930s by physicist Wolfgang Pauli, it was not confirmed experimentally until 1956, by American physicists Frederick Reines and Clyde Cowan; Reines was awarded the Nobel Prize in Physics for this discovery in 1995. This was a small revolution for particle physics. “It could help explain the excess of matter that enabled our existence. The Big Bang created as much matter as it did antimatter, but they mutually annihilated each other very quickly. So, there should not be any left! We hope that studying neutrinos will help us understand this imbalance,” Richard Dallier explains.

The Neutrino Saga

While there is still much to discover about these bashful particles, we do know that neutrinos exist in three forms or “flavors”: the electron neutrino, the muon neutrino and the tau neutrino. The neutrino is certainly an unusual particle, capable of transforming over the course of its journey. This phenomenon is called oscillation: “The neutrino, which can be generated from different sources, including the Sun, nuclear power plants and cosmic rays, is born as a certain type, takes on a hybrid form combining all three flavors as it travels and can then appear as a different flavor when it is detected,” Richard Dallier explains.

The oscillation of neutrinos was first revealed in 1998 by the Super-Kamiokande experiment, a Japanese neutrino observatory whose results were also rewarded with the Nobel Prize in Physics in 2015. This change in identity is key: it provides indirect evidence that neutrinos indeed have a mass, albeit an extremely low one. However, another mystery remains: what is the mass hierarchy of these three flavors? The answer to this question would further clarify our understanding of the Standard Model of particle physics.
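The link between oscillation and mass can be made explicit with the standard two-flavor approximation (a well-known textbook formula, not specific to Super-Kamiokande or KM3NeT):

```latex
% Two-flavor oscillation probability for a neutrino of energy E after a distance L
P(\nu_\alpha \rightarrow \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\,L}{4E}\right)
```

Here θ is the mixing angle and Δm² the difference of the squared masses: if all the masses were zero, Δm² would vanish and no flavor change could ever be observed, which is why oscillation implies that neutrinos have mass.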

The singularity of neutrinos makes them a fascinating area of study. An increasing number of observatories and detectors dedicated to the subject are being installed at great depths, where the combination of darkness and concentration of matter is ideal. Russia has installed a detector at the bottom of Lake Baikal and the United States at the South Pole. Europe, for its part, is working in the depths of the Mediterranean Sea. This fishing for neutrinos began in 2008 with the Antares experiment, a unique type of telescope that can detect even the faintest light crossing the depths of the sea. Antares then made way for KM3NeT, whose sensitivity is improved by orders of magnitude. This experiment has brought together nearly 250 researchers from around 50 laboratories and institutes, including four French laboratories. In addition to studying the fundamental properties of neutrinos, the collaboration aims to discover and study the astrophysical sources of cosmic neutrinos.

Staring into the Universe

KM3NeT actually comprises two gigantic neutrino telescopes currently being installed at the bottom of the Mediterranean Sea. The first, called ORCA (Oscillation Research with Cosmics in the Abyss), is located off the coast of Toulon in France. Submerged at a depth of nearly 2,500 meters, it will eventually be composed of 115 strings attached to the seabed. “Optical detectors are placed on each of these 200-meter flexible strings, which are spaced 20 meters apart: 18 spheres measuring 45 centimeters, spaced 9 meters apart, each contain 31 light sensors,” explains Richard Dallier, who is participating in the construction and installation of these modules. “This unprecedented density of detectors is required in order to study the properties of the neutrinos: their nature, their oscillations and thus their masses and classification thereof. The sources of neutrinos ORCA will focus on are the Sun and the terrestrial atmosphere, where they are generated in large numbers by the cosmic rays that bombard the Earth.”


Each of KM3NeT’s optical modules contains 31 photomultipliers to detect the light produced by interactions between neutrinos and matter. These spheres with a diameter of 47 centimeters (including a covering of nearly 2 cm!) were designed to withstand pressures of 350 bar.

The second KM3NeT telescope is ARCA (Astroparticles Research with Cosmics in the Abyss). It will be located 3,500 meters under the sea off the coast of Sicily. There will be twice as many strings, which will be longer (700 meters) and spaced further apart (90 meters), but with the same number of sensors. With a volume of over one km³—hence the name KM3NeT, for km³ Neutrino Telescope—ARCA will be dedicated to searching for and observing the astrophysical sources of neutrinos, which are much rarer. A total of over 6,000 optical modules containing over 200,000 light sensors will be installed by 2022. These numbers make KM3NeT the largest detector in the world, on par with its cousin IceCube in Antarctica.

Both ORCA and ARCA operate on the same principle, based on the indirect detection of neutrinos. When a neutrino encounters an atom of matter—in the air, the water, or the Earth itself, since neutrinos easily travel right through it—it can “deposit” its energy there. This energy is instantly transformed into the charged particle corresponding to the neutrino’s flavor: an electron, a muon or a tau. This “daughter” particle then continues its journey on the same path as the initial neutrino and at almost the same speed, emitting light in the medium it is passing through, or itself interacting with atoms in the environment and disintegrating into other particles, which will also radiate blue light.
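The blue light described here is characteristic of Cherenkov radiation, the standard detection mechanism in water-based neutrino telescopes (the interview does not name it explicitly): a charged particle radiates when it moves through a medium faster than light propagates in that medium.

```latex
% Cherenkov emission condition for a particle of speed v in a medium of refractive index n
v \;>\; \frac{c}{n}
\qquad \text{sea water: } n \approx 1.35 \;\Rightarrow\; v \gtrsim 0.74\,c
```

The relativistic daughter particles produced by neutrino interactions easily satisfy this condition, hence the faint flashes picked up by the optical modules.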

“Since this is all happening at the speed of light, an extremely short light pulse of a few nanoseconds occurs. If the environment the neutrino is passing through is transparent–which is the case for the water in the Mediterranean Sea–and the path goes through the volume occupied by ORCA or ARCA, the light sensors will detect this extremely faint flash,” Richard Dallier explains. Therefore, if several sensors are hit, we can reconstruct the direction of the trajectory and determine the energy and nature of the original neutrino. But regardless of the source, the probability of neutrino interactions remains extremely low: even with a volume of 1 km³, ARCA only expects to detect a few neutrinos originating from the universe.

Neutrinos: New Messengers Revealing a Violent Universe

Seen as cosmic messengers, these phantom particles open a window onto a violent universe. “Among other things, the study of neutrinos will provide a better understanding and knowledge of cosmic cataclysms,” says Richard Dallier. The collisions of black holes and neutron stars, supernovae, and even massive stars that collapse all produce bursts of neutrinos that bombard us without being absorbed or deflected along the way. This means that light is no longer the only messenger of the objects in the universe.

Neutrinos have therefore strengthened the arsenal of “multi-messenger” astronomy, involving the cooperation of a maximum number of observatories and instruments throughout the world. Each wavelength and particle contributes to the study of various processes and additional aspects of astrophysical objects and phenomena. “The more observers and objects observed, the greater the chances of finding something,” Richard Dallier explains. And in these extraterrestrial particles lies the possibility of tracing our own origins with greater precision.

[1] Subatech is a research laboratory jointly operated by IMT Atlantique, the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) of CNRS, and the Université de Nantes.

Article written for I’MTech (in French) by Anne-Sophie Boutaud


MOx Strategy and the future of French nuclear plants

Nicolas Thiollière, a researcher in nuclear physics at IMT Atlantique, and his team are assessing various possibilities for the future of France’s nuclear power plants. They seek to answer the following questions: how can the quantity of plutonium in circulation in the nuclear cycle be reduced? What impacts will the choice of fuel — specifically MOx — have on nuclear plants? To answer these questions, they are using a computer simulator that models different scenarios: CLASS (Core Library for Advanced Scenario Simulation).

 

Today, the future of French nuclear power plants remains uncertain. Many reactors are coming to the end of their roughly forty-year design lifespan, and new proof-of-concept trials must be carried out to extend their service life. In his quest to determine which options are viable, Nicolas Thiollière, a researcher at IMT Atlantique with the Subatech laboratory, and his team are conducting nuclear scenario studies. In this context, they are working to assess future options for France’s nuclear power plants.

Understanding the nuclear fuel cycle

The nuclear fuel cycle encompasses all the steps in the nuclear energy process, from uranium mining to managing the radioactive waste. UOx fuel, which stands for uranium oxide, represents roughly 90% of the fuel used in the 58 pressurized water reactors in French nuclear power plants. It consists of uranium enriched in uranium-235. After a complete cycle, i.e. after it has passed through the reactor, the irradiated fuel contains approximately 4% fission products (the remnants of the nuclei split to produce energy), 1% plutonium and 0.1% minor actinides. In most countries, these components are not recycled; this is referred to as an open cycle.

However, France has adopted a partially closed cycle, in which the plutonium is reused. The plutonium is therefore not considered waste, despite being the element with the highest radiotoxicity. In other words, it is the most hazardous of the nuclear cycle materials in the medium to long term, over thousands to millions of years. France has a plutonium recycling system based on MOx fuel, which stands for “mixed oxide”. “MOx is a fuel that consists of 5% to 8% plutonium produced during the UOx combustion cycle, supplemented by depleted uranium,” Nicolas Thiollière explains.

The use of this new mixed fissile material helps slightly reduce the consumption of uranium resources. In France’s nuclear power plants, MOx fuel represents approximately 10% of total fuel—the rest is UOx. After an irradiation cycle, MOx fuel still contains 3% to 5% plutonium, which is not considered waste and could theoretically be reused. In practice, however, it currently is not. Spent MOx fuel must therefore be stored for later processing, forming a strategic reserve of plutonium. “We estimate that there were approximately 350 tons of plutonium in the French nuclear cycle in 2018. The majority is located in used UOx and MOx fuel,” explains Nicolas Thiollière. Thanks to their simulations, the researchers estimate that with an open cycle—without recycling using MOx—there would be approximately 16% more plutonium in 2020 than is currently projected with a closed cycle.

The fast neutron reactor strategy

In the current pressurized water reactors, the natural uranium must first be enriched: eight mass units of natural uranium are needed to produce one unit of enriched uranium. In the reactor, only 4% of the fuel’s mass undergoes fission and produces energy. Directly or indirectly fissioning the entire mass of the natural uranium would result in resource gains by a factor of 20. In practice, this involves multi-recycling the plutonium produced by the neutron absorption of uranium during irradiation in order to incinerate it continuously. One of the possible industrial solutions is the use of Fast Neutron Reactors (FNR). FNRs rely on fast neutrons, which offer the advantage of fissioning the plutonium more effectively, thus enabling it to be recycled several times.

Historically, the development of MOx fuel was part of a long-term industrial plan based on multi-recycling the plutonium in FNRs. A completely different story is now in the making. Although three FNRs were operated in France beginning in the 1960s (Rapsodie, Phénix and Superphénix), the permanent shutdown of Superphénix, decided by the Council of State in 1997, signaled the end of the expansion of FNRs in France. The three pioneer reactors were shut down, and no FNRs have been operated since. However, the Act of 2006 on the sustainable management of radioactive materials and waste revitalized the project by setting a goal of commissioning an FNR prototype by 2020. The ASTRID project, led by the CEA (the French Alternative Energies and Atomic Energy Commission), took shape.

Recently, funding for this reactor with its pre-industrial power level (approximately 600 megawatts, compared with 1 gigawatt for an industrial reactor) has been scaled down. The power of the ASTRID concept, now significantly reduced to 100 megawatts, redefines its status and will probably push the prospect of industrial FNR deployment beyond 2080. “Without the prospect of deploying FNRs, the MOx strategy is called into question. The industrial processing of plutonium is a cumbersome and expensive process resulting in limited gains in terms of inventory and resources if the MOx is only used in the current reactors,” Nicolas Thiollière observes.

In this context of uncertainty regarding the deployment of FNRs, and as plutonium accumulates in the cycle, Nicolas Thiollière and his team are asking a big question: under what circumstances could nuclear power plants multi-recycle plutonium (recycle it more than once) using current reactors and technology, in order to stabilize the inventory? In practice, major research and development efforts would be required to define a new type of fuel assembly compatible with multi-recycling. “Many theoretical studies have already been carried out by nuclear industry operators, revealing a few possibilities to explore,” the researcher explains.

Nuclear scenario studies: simulating different courses of action for nuclear power plants

Baptiste Mouginot and Baptiste Leniau, former researchers with the Subatech laboratory, developed the cycle simulator CLASS (Core Library for Advanced Scenario Simulation) from 2012 to 2016. This modeling tool can scientifically assess future strategies for the fuel cycle. It can therefore be used to calculate and monitor the inventory and flow of materials over time across all nuclear facilities (fuel fabrication and reprocessing plants, power stations, etc.), based on hypotheses about the evolution of these facilities and of the installed nuclear capacity.

In the context of her PhD work, supervised by Nicolas Thiollière, Fanny Courtin studied the objective of stabilizing the quantity of plutonium recycled in the reactors of nuclear plants by 2100. One of the constraints in the simulation was that all the power plant reactors had to use the current pressurized water technology. Based on this criterion, the CLASS tool carried out thousands of simulations to identify possible strategies. “The condition for stabilizing the quantity of plutonium and minor actinides would be to have 40% to 50% of the pressurized water reactors dedicated to the multi-recycling of plutonium,” Nicolas Thiollière explains. “However, the availability of plutonium in these scenarios would also mean a steady decrease in nuclear capacity, to a level between 0 and 40% of the current capacity.” This effect is caused by the minor actinides, which are not recycled and therefore build up. To stabilize the overall inventory, the plants must incinerate plutonium; but incinerating plutonium implies reducing the power plants’ capacity at an equivalent rate.

On these charts, each line represents a possible course of action. In purple, the researchers indicated the scenarios that would meet a mass stabilization condition for the plutonium and minor actinides in circulation (top). These scenarios imply reducing the thermal energy of the power plants over the course of the century (bottom).

 

The researchers also tested the condition of minimizing the inventory of plutonium and minor actinides. In addition to increasing the number of reactors used for multi-recycling, the researchers showed that a scenario reducing the quantity of plutonium and minor actinides in the cycle would imply phasing out nuclear power within a few years. Reducing the stock of plutonium is tantamount to reducing the fuel inventory, which would mean no longer having enough to supply all the nuclear power plants. “Imagine you have 100 units of plutonium to supply 10 power plants. At the end of the cycle, you would only have 80 units remaining and would only be able to supply 8 plants. You would have to close 2. In recycling the 80 units, you would have even less at the output, etc.,” Nicolas Thiollière summarizes. In practice, it therefore seems impractical to make major R&D efforts to multi-recycle MOx without FNRs, considering that this solution implies abandoning nuclear power in the short term.
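The “shrinking fleet” argument quoted above can be reproduced with a few lines of code (a toy calculation using only the numbers given in the quote: 10 units per plant and 80% of the plutonium recovered after each cycle):

```python
# Toy version of the argument quoted above: each cycle returns only 80% of the
# plutonium loaded, and supplying one plant takes 10 units per cycle.
plutonium = 100.0        # starting inventory, as in the quote
units_per_plant = 10.0
recovery_rate = 0.80

cycle = 1
while plutonium >= units_per_plant:
    plants_supplied = int(plutonium // units_per_plant)
    print(f"cycle {cycle}: {plutonium:.0f} units -> {plants_supplied} plants supplied")
    plutonium = plants_supplied * units_per_plant * recovery_rate
    cycle += 1
```

The number of plants that can be supplied falls from 10 to 8, then 6, 4, and so on: minimizing the plutonium inventory with this strategy amounts to running the fleet down.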

The industrial feasibility of these options must first be validated by extensive safety studies. However, at this early stage, the scenarios involving the stabilization of plutonium and minor actinides seem compatible with diversifying France’s electricity mix and rolling out renewable energy to replace nuclear sources. Industrial feasibility studies looking at both safety issues and costs are all the more valuable given the uncertainty surrounding the deployment of fast neutron reactors and the future of the nuclear sector. These economic and safety uncertainties must be addressed before deploying a strategy that would radically change France’s nuclear power plants.

Also read on I’MTech: What nuclear risk governance exists in France?

Article written for I’MTech (in French) by Anaïs Culot


What is hydrogen energy?

In the context of environmental and energy challenges, hydrogen energy offers a clean alternative to fossil fuels. Doan Pham Minh, a chemist and environmental engineering specialist at IMT Mines Albi, explains why this energy is so promising, how it works and the prospects for its development.

 

What makes hydrogen so interesting?

Doan Pham Minh: The current levels of interest in hydrogen energy can be explained by the pollution problems linked to carbon-based energy sources. They emit fine particles, toxic gases and volatile organic compounds. This poses societal and environmental problems that must be remedied. Hydrogen offers a solution because it does not emit any pollutants. In fact, hydrogen reacts with oxygen to “produce” energy in the form of heat or electricity. The only by-product of this reaction is water. It can therefore be considered clean energy.

Is hydrogen energy “green”? 

DPM: Although it is clean, it cannot be called “green”. It all depends on how the dihydrogen molecule is formed. Today, around 96% of hydrogen is produced from fossil raw materials, like natural gas and hydrocarbon fractions from petrochemicals. In these cases, hydrogen clearly is not “green”. The remaining 4% is produced through the electrolysis of water. This is the reverse reaction of the combustion of hydrogen by oxygen: water is separated into oxygen and hydrogen by consuming electricity. This electricity can be produced by nuclear power stations, coal-fired plants or by renewable energies: biomass, solar, hydropower, wind, etc. The environmental footprint of the hydrogen produced by electrolysis depends on the electricity’s origin.
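The two reactions mentioned here are the reverse of one another (standard overall chemistry, written without electrode details):

```latex
% Electrolysis stores electrical energy in hydrogen; combustion in a fuel cell or burner releases it
\text{electrolysis:}\quad \mathrm{2\,H_2O \;\longrightarrow\; 2\,H_2 + O_2} \quad(\text{consumes electricity})
\text{combustion:}\quad \mathrm{2\,H_2 + O_2 \;\longrightarrow\; 2\,H_2O} \quad(\text{releases energy, with water as the only by-product})
```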

How is hydrogen produced from biomass?

DPM: In terms of the chemistry, it is fairly similar to the production of hydrogen from oil. Biomass is also made up of hydrocarbon molecules, but with a little more oxygen. At IMT Mines Albi, we work a great deal on thermo-conversion. Biomass – in other words wood, wood waste, agricultural residues, etc. – is heated without oxygen, or in a low-oxygen atmosphere. It is then split into small molecules and primarily produces carbon monoxide and dihydrogen. Biomass can also be transformed into biogas through anaerobic digestion by microorganisms, and this biogas can then be transformed into a mixture of carbon monoxide and dihydrogen. An additional reforming step uses water vapor to transform the carbon monoxide into carbon dioxide and hydrogen. We work with industrial partners like Veolia to make use of the CO2 and prevent the release of this greenhouse gas. For example, it can be used to manufacture sodium bicarbonate, which neutralizes the acidic and toxic gases from industrial incinerators. The production of hydrogen from biomass is therefore also very clean, making it a promising technique.
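The reforming step described here corresponds to what chemists call the water-gas shift reaction (the name is mine, not the interviewee’s):

```latex
% Water-gas shift: carbon monoxide from gasified biomass or biogas reacts with steam
\mathrm{CO + H_2O \;\longrightarrow\; CO_2 + H_2}
```

It is this reaction that boosts the hydrogen yield while concentrating the carbon into CO2, which can then be captured and reused, for example to make sodium bicarbonate as described above.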

Read more on I’MTech: Vabhyogaz uses our waste to produce hydrogen

Why is it said that hydrogen can store electricity?

DPM: Storing electricity is difficult: doing so on a large scale requires complex batteries. A good strategy is therefore to transform electricity into another form of energy that is easier to store. Through the electrolysis of water, electrical energy is used to produce dihydrogen molecules. This hydrogen can easily be compressed, transported, stored and distributed before being reused to produce heat or generate electricity. This is a competitive energy storage method compared with mechanical and kinetic solutions, such as dams and flywheels.

Why is it taking so long to develop hydrogen energy?

DPM: In my opinion, it is above all a matter of will. We see major differences between different countries. Japan, for example, is very advanced in the use of hydrogen energy. South Korea, the United States and China have also invested in hydrogen technologies. Things are beginning to change in certain countries. France now has a hydrogen plan, launched last June by Nicolas Hulot. However, it remains a new development, and it will take time to establish the infrastructures. We currently only have around 20-25 hydrogen fuel stations in France, which is not many. Hydrogen vehicles remain expensive: a Toyota Mirai sedan costs €78,000 and a hydrogen bus costs approximately €620,000. These vehicles are much more expensive than the equivalent in vehicles with diesel or gas engines. Nevertheless, these prices are expected to decline in coming years, because the number of hydrogen vehicles is still very limited. Investment programs must be established, and they take time to implement.



Ioan-Mihai Miron: Magnetism and Memory

Ioan-Mihai Miron’s research in spintronics focuses on new magnetic systems for storing information. The research carried out at the Spintec laboratory in Grenoble is still young, having begun in 2011. However, it already shows major potential for addressing the limits that computer memory technologies are currently facing. It also offers a solution to the problems that have so far prevented the industrial development of magnetic memories. Ioan-Mihai Miron received the 2018 IMT-Académie des sciences Young Scientist Award for his groundbreaking and promising research.

 

Ioan-Mihai Miron’s research is a matter of memory… and a little architecture too. When presenting his work on the design of new nanostructures for storing information, the researcher from Spintec* uses a three-level pyramid diagram. The base represents broad and robust mass memory. Its large size enables it to store large amounts of information, but it is slower to access. The second level is the central memory, which is not as big but faster to access. It includes the information required to launch programs. Finally, the top of the pyramid is cache memory, which is much smaller but more easily accessible. “The processor only works with this cache memory,” the researcher explains. “The rest of the computer system is there to retrieve information lower down in the pyramid as fast as possible and bring it up to the top.”

Of course, computers do not actually contain pyramids. In microelectronics, this memory architecture takes the form of thousands of microscopic transistors that are responsible for the central and cache memories. They work as switches, storing information in binary format and either letting the current circulate or blocking it. With the commercial demand for miniaturization, transistors have gradually reached their limit. “The smaller the transistor, the greater the standby consumption,” Ioan-Mihai Miron explains. This is why the goal is now for the types of memory located at the top of the pyramid to rely on new technologies that store information at the electronic level. By modifying the current sent into a magnetic material, the magnetization can be altered at certain points. “The material’s electrical resistance will be different depending on this magnetization, meaning information is being stored,” Ioan-Mihai Miron explains. In simpler terms, a high electrical resistance corresponds to one value and a low resistance to another, which forms a binary system.

In practical terms, information is written in these magnetic materials by sending two perpendicular currents, one from above and one from below the material. The point of intersection is where the magnetization is modified. While this principle is not new, it is still not used for cache memory in commercial products. Pairing magnetic technologies with this type of data storage has remained a major industrial challenge for almost 20 years. “Memory capacities are still too low in comparison with transistors, and miniaturizing the system is complicated,” the researcher explains. These two disadvantages are not offset by the energy savings that the technology offers.

To compensate for these limitations, the scientific community has developed a simplified geometry for these magnetic architectures. “Rather than intersecting two currents, a new approach has been to send a single current along a linear path through the material,” Ioan-Mihai Miron explains. “But while this technique solved the miniaturization and memory capacity problems, it created others.” In particular, writing the information involves applying a strong electric current that could damage the element where the information is stored. “As a result, the writing speed is not sufficient. At 5 nanoseconds, it is slower than the latest generations of transistor-based memory technology.”

Electrical geometry

In the early 2010s, Ioan-Mihai Miron’s research opened major prospects for solving all these problems. By slightly modifying the geometry of the magnetic structures, he demonstrated the possibility of writing in under a nanosecond. And for the same size, the memory capacity is greater. The principle is based on sending a current in a plane parallel to the layers of the magnetized material, whereas previously the current had been perpendicular to them. This difference makes the change in magnetization faster and more precise. The technology developed by Ioan-Mihai Miron offers still more benefits: less wear on the elements and the elimination of writing errors. It is called SOT-MRAM, for Spin-Orbit Torque Magnetic Random Access Memory. This technical name reflects the complexity of the effects at work in the layers of electrons of the magnetic materials exposed to the electrical currents.

The nanostructures developed by Ioan-Mihai Miron and his team are opening new prospects for magnetic memories.

 

These successive developments in magnetic memories may appear minimal. At first glance, moving from two perpendicular currents to one linear current in order to save a few nanoseconds seems to be only a minor advance. However, the resulting changes in performance offer considerable opportunities for industrial players. “SOT-MRAM has only been in existence since 2011, yet all the major microelectronics businesses already have R&D programs on this technology that is fresh out of the laboratory,” says Ioan-Mihai Miron. SOT-MRAM is perceived as the technology capable of bringing magnetic memories into the cache memory playing field.

The winner of the 2018 IMT-Académie des sciences Young Scientist Award seeks to remain realistic regarding the industrial sector’s expectations for SOT-MRAM. “Transistor-based memories are continuing to improve at the same time and have recently made significant progress,” he notes. Not to mention that these technologies have been mature for decades, whereas SOT-MRAM has not yet passed the ten-year milestone of research and refinement. According to Ioan-Mihai Miron, this technology should not be seen as a total break with previous technology, but as an alternative that is gaining ground quickly and opening up significant competitive opportunities.

But there are still steps to be taken to optimize SOT-MRAM and integrate it into our computer products. These steps may take a few years. In the meantime, Ioan-Mihai Miron is continuing his research on memory architectures, while increasingly entrusting SOT-MRAM to those who are best suited to transferring it to society. “I prefer to look elsewhere rather than working to improve this technology. What interests me is discovering new ways of storing information, and these discoveries happen a bit by chance. I therefore want to try other things to see what happens.”

*Spintec is a joint research unit of CNRS, CEA and Université Grenoble Alpes.

Ioan-Mihai Miron: a young expert in memory technology

Ioan-Mihai Miron is a researcher at the Spintec laboratory in Grenoble. His major contribution is the discovery of magnetization reversal induced by spin-orbit coupling. This discovery offers significant potential for reducing energy consumption and increasing the reliability of MRAM, a new type of non-volatile memory that is compatible with the rapid development of the latest computing processors. This new memory should eventually replace SRAM memories alongside processors.

Ioan-Mihai Miron is considered a world expert, as shown by the numerous citations of his publications (over 3,000 citations in a very short period of time). In 2014 he was awarded an ERC Starting Grant. His research has also led to several patents and contributed to creating the company Antaios, which won the Grand Prix in the I-Lab innovative company creation competition in 2016. Fundraising is currently underway, demonstrating the economic and industrial impact of the work carried out by the winner of the 2018 IMT-Académie des sciences Young Scientist Award.


Audrey Francisco-Bosson, particle tracker

Audrey Francisco-Bosson has just won a L’Oréal-UNESCO For Women in Science Scholarship. This well-deserved award is in recognition of the young researcher’s PhD work in fundamental physics, carried out at the Subatech laboratory at IMT Atlantique. By exploring the furthest depths of matter through the eyes of the ALICE detector of the Large Hadron Collider (LHC) at CERN, Audrey Francisco-Bosson tracks particles in order to better understand the mysterious quark-gluon plasma.

 

How can matter be reproduced to represent its state at the origin of the universe? 

Audrey Francisco-Bosson: At our level, all matter is made up of atoms, the nuclei of which are composed of protons and neutrons. Inside these protons and neutrons, there are quarks bound together by gluons responsible for what we call “strong interaction.” The Large Hadron Collider (LHC) at CERN allows us to break atoms apart in order to study this strong interaction. When heavy nuclei collide with one another, the energy released is enough to release these quarks. What we end up with is a state of matter in which the quarks and the gluons are no longer bound together: the quark-gluon plasma. This state corresponds to that of the universe a few microseconds after the Big Bang: the temperature is 100,000 times higher than that of the sun’s core.

What do you look at in the plasma?

AFB: The plasma itself has a very short lifetime: over a billion times shorter than a nanosecond. We cannot observe it. We can, however, observe the particles that are produced in this plasma. When they cool down, the quarks and gluons which were released in the plasma join together to form new particles. We measure their energy, momentum, charge and mass in order to identify and characterize them. All of these aspects provide us with information about the plasma. Since there are lots of different particles, it’s important to specialize a bit. For my PhD thesis I concentrated on the J/ψ particle.

Audrey Francisco-Bosson, winner of a 2018 L’Oréal-Unesco For Women in Science Scholarship. Photo: Fondation L’Oréal/Carl Diner.

 

What is special about the J/ψ particle?

AFB: Researchers have been interested in it for a long time, since it has been identified as a good probe for measuring the temperature of the plasma. It is composed of a pair of quarks (a charm quark and its antiquark), which breaks apart above a certain temperature. Researchers had historically suspected that by looking at whether or not the pair had split apart, it would be possible to derive the temperature of the quark-gluon plasma. In practice, it turned out to be a bit more complicated than that. But the J/ψ particle is still used as a probe for the plasma. For my purposes, I used it to deduce information about the plasma’s viscosity rather than its temperature.

How do you use J/ψ to deduce the viscosity of the quark-gluon plasma?

AFB: It’s important to understand that there are huge pressure variations in the environment we’re observing. The particles do not all have the same characteristics, and importantly, they aren’t all the same weight. They are thus carried differently by these pressure differences. Since the J/ψ is quite heavy, observing how it moves allows us to observe the flow of the plasma. As in a river, objects won’t travel at the same speed depending on their weight. By combining these observations of J/ψ particles with those of other particles, we deduce the viscosity properties of the plasma. That’s how it was proved that the quark-gluon plasma doesn’t behave like a gas, as we had thought, but like an inviscid fluid.

Does your scientific community still have any big questions about the quark-gluon plasma that the J/ψ particle could help answer?

AFB: One of the big questions is finding out at what moment this fluid characteristic is reached. That means that we can use the laws of fluid mechanics, and those of hydrodynamics in particular, to describe it. More generally, all this research makes it possible to test the properties and laws of quantum chromodynamics. This theory describes the strong interaction that binds the quarks. By testing this theory, we can assess whether the model used to describe matter is correct.

You are going to start working at Yale University in the USA in the coming weeks. What kind of research will you be carrying out there?

AFB: I’ll be working on the results of the STAR detector, which is located at the heart of the RHIC collider. It’s similar to the ALICE detector at the LHC, but with different collision energies. The two detectors are complementary, so they allow us to compare results in order to study variations between one energy and another and deduce new information about the plasma. For my part, the idea will also be to analyze collision data, as I did with ALICE. I’ll also work on developing new sensors. It’s an important task for me, since I studied physical engineering before beginning my PhD thesis. I like to really understand how a detector works before using it. That’s also why, during my PhD thesis, I worked on a new sensor for ALICE, which will be installed on the detector in 2021.

 



Silicon and laser: a brilliant pairing!

Researchers at Télécom ParisTech, in partnership with the University of California, Santa Barbara (UCSB), have developed new optical sources. These thermally stable, energy-saving lasers offer promising potential for silicon photonics. These new developments offer numerous opportunities for improving very high-speed transmission systems, such as data communications and supercomputers. Their results were published last summer in Applied Physics Letters, a journal published by AIP Publishing.

 

Silicon photonics is a field that aims to revolutionize the microelectronics industry and communication technologies. It combines two of the most significant inventions: the silicon integrated circuit and the semiconductor laser. Integrating laser functions into silicon circuits opens up a great many possibilities, allowing data to be transferred quickly over long distances compared to conventional electronic solutions, while benefiting from silicon’s large-scale production efficiency. But there is a problem: silicon is not good at emitting light. Laser emission is therefore achieved using materials that combine an element from column III of the famous periodic table with one from column V – boron or gallium with arsenic or antimony, for example.

Researchers from Télécom ParisTech and the University of California, Santa Barbara (UCSB) have recently presented a new technology for preparing these III-V components by growing them directly on silicon. This technological breakthrough yields components with remarkable properties in terms of power output, power supply and thermal robustness. The results show that these sources are more stable in the presence of interfering reflections, a critical point for producing low-cost communication systems without an optical isolator. Industrial giants such as Nokia Bell Labs, Cisco and Apple, and major digital players like Google and Facebook, have high hopes for this technology, which would allow them to develop the next generation of extremely high-speed optical systems.

The approach currently used in industry is based on thermally bonding a semiconductor laser (made from a III-V material) onto a structured silicon substrate that guides the light. Thermal bonding does not optimize costs, yet it is not easily replaced, since silicon and III-V elements are not naturally compatible. The new technology, however, paves the way for developing laser sources directly on silicon, a feat that is much more difficult to achieve than for other components (modulators, guides, etc.). Silicon has been an essential material in microelectronics for many years now, and these new optical sources on silicon will help the industry respond to current challenges (higher-speed systems that are cost-effective, compact and energy-efficient) without overhauling its manufacturing processes.

This technological breakthrough is the result of a collaboration between Frédéric Grillot, a researcher at Télécom ParisTech, and John Bowers, a researcher at UCSB. Professor Bowers’ team contributed to developing the technology that produced the first “hybrid III-V on silicon” laser with Intel in 2006, which won the 2007 “ACE Award” (Annual Creativity in Electronics) for the most promising technology. The collaboration between John Bowers and Frédéric Grillot’s team is one of the few such partnerships outside the United States.

 

 


From springs to lasers: energy’s mysterious cycle

In 1953, scientists theorized the energy behavior of a chain of springs and revealed a paradox in fundamental physics. Over 60 years later, a group of researchers from IMT Lille Douai, CNRS and the universities of Lille and Ferrara (Italy) has succeeded in observing this paradox. Their results greatly improve our understanding of nonlinear physical systems, which are basic ingredients in detecting exoplanets, navigating driverless cars and forming the big waves observed in the ocean. Arnaud Mussot, a physicist and member of the partnership, explains the process and the implications of the research, published in Nature Photonics on April 2, 2018.

 

The starting point for your work was the Fermi-Pasta-Ulam-Tsingou problem. What is that?

Arnaud Mussot: The name refers to the four researchers who wanted to study a complex problem in the 1950s. They were interested in observing the behavior of masses connected by springs. They used 64 of them in their experiment. With a chain like this, each spring’s behavior depends on that of the others, but in a non-proportional manner – what we call “nonlinear” in physics. A theoretical study of the nonlinear behavior of such a large system of springs required them to use a computer. They thought that the theoretical results the computer produced would show that when one spring is agitated, all the springs begin to vibrate until the energy spreads evenly to the 64 springs.

Is that what happened?

AM: No. To their surprise, the energy spread throughout the system and then returned to the initial spring. It was redispersed into the springs, returned again to the initial point of agitation, and so on. These computer results completely contradicted their prediction that the energy would be evenly and progressively distributed, what is known as equipartition of energy. Since then, these results have been called the “Fermi-Pasta-Ulam-Tsingou paradox” or “Fermi-Pasta-Ulam-Tsingou recurrence”, referring to the recurring behavior of the system of springs. Theoretical research carried out since the 1950s has shown, however, that if the system of springs is allowed to vibrate for a very long time, equipartition is eventually reached.
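
To make this recurrence concrete, here is a minimal numerical sketch (not the original 1950s program and not the authors’ code) of a chain of 64 masses coupled by weakly nonlinear springs, the so-called alpha-FPUT model; the parameter values and the helper name accel are illustrative assumptions. All the energy starts in the lowest vibration mode, and printing the energy of the first few normal modes lets you watch it spread out and later flow back toward that initial mode.

# Minimal alpha-Fermi-Pasta-Ulam-Tsingou chain (illustrative parameters):
# 64 unit masses coupled by springs whose force has a small quadratic term.
import numpy as np

N = 64           # number of moving masses, with fixed walls at both ends
alpha = 0.25     # strength of the quadratic (nonlinear) spring term
dt = 0.05        # velocity-Verlet time step
steps = 400_000  # total integration steps

j = np.arange(1, N + 1)
k = np.arange(1, N + 1)
# Orthonormal sine modes of the linear chain and their frequencies
modes = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, j) / (N + 1))
omega = 2.0 * np.sin(np.pi * k / (2 * (N + 1)))

u = np.sin(np.pi * j / (N + 1))   # initial displacements: pure lowest mode
v = np.zeros(N)                   # initial velocities

def accel(u):
    # Force on each mass: linear coupling plus the quadratic nonlinearity
    full = np.concatenate(([0.0], u, [0.0]))   # clamp both ends
    d_right = full[2:] - full[1:-1]            # u[j+1] - u[j]
    d_left = full[1:-1] - full[:-2]            # u[j] - u[j-1]
    return (d_right - d_left) + alpha * (d_right**2 - d_left**2)

a = accel(u)
for step in range(steps):
    u = u + v * dt + 0.5 * a * dt**2           # velocity-Verlet update
    a_new = accel(u)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new
    if step % 20_000 == 0:
        q = modes @ u                          # mode amplitudes
        p = modes @ v                          # mode momenta
        E = 0.5 * (p**2 + (omega * q)**2)      # energy per normal mode
        print(f"t={step * dt:9.1f}  E1={E[0]:.4f}  E2={E[1]:.4f}  E3={E[2]:.4f}")

How long a full return takes depends on the chosen values, so the run can be lengthy; adding even a small damping term to accel quickly degrades the return to the initial state, which is essentially the obstacle the fiber-optic experiment described below had to overcome.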

Why is the work you refer to primarily theoretical and not experimental?

AM: In reality, the vibrations in the springs are absorbed by many external factors, such as friction with the air. In fact, experiments carried out to observe this paradox have involved not only springs but all kinds of nonlinear oscillating systems, such as laser beams in optical fibers. In that case, the vibrations are mainly absorbed by impurities in the glass the fiber is made of. In all these systems, energy losses due to external factors prevent anything beyond the first recurrence from being observed: the system returns to its initial state, but it is then difficult to make it return to that state a second time. Yet it is precisely at this stage that new, rich physics emerges.

Is this where your work published in Nature Photonics on April 2nd comes into play?

AM: We thought it was a shame to be limited to a single recurrence, because many interesting things happen once we can observe at least two. We therefore had to find ways to limit the absorption of the vibrations in order to reach the second recurrence. To do this, we added a second laser that amplified the first to compensate for the losses. This type of amplification is already used in fiber optics to carry data over long distances; we diverted it from its original purpose to solve part of our problem. The other part was managing to observe the recurrence.

Whether it be a spring or an optical fiber, Fermi-Pasta-Ulam-Tsingou recurrence is common to all nonlinear systems

Was the observation difficult to achieve?

AM: Compensating for energy losses was a crucial step, but it would have been pointless if we had not been able to clearly observe what was happening in the fiber. To do this, we used the very impurities in the glass that absorb the light signal: they reflect a small part of the laser beam circulating in the fiber, and this returning light tells us how the beam’s power evolves as it propagates. This reflected part is then measured with another laser, synchronized with the first, to assess the phase difference between the two. This gives us additional information that allowed us to clearly reveal the second recurrence for the first time ever.

What did these observation techniques reveal about the second recurrence?

AM: We were able to conduct the first experimental demonstration of what we call symmetry breaking. For given values of the initial energy sent into the system, there is a recurrence. But we also knew from theory that if we slightly changed the initial values used to disturb the system, the distribution of energy would shift in the second recurrence: the system would not repeat the same values. In our experiment, we managed to reverse the maximum and minimum energy levels in the second recurrence compared to the first.

What perspectives does this observation create?

AM: From a fundamental perspective, experimentally observing the symmetry breaking predicted by theory is very interesting because it confirms that theory. Beyond that, the techniques we implemented to limit absorption and observe what was happening are very promising. We want to perfect them in order to go beyond the second recurrence. If we succeed in reaching the equipartition point predicted by Fermi, Pasta, Ulam and Tsingou, we will then be able to observe a continuum of light. In very simple terms, this is the moment when we no longer see the lasers’ pulsations.

Does this fundamental work have applications?

AM: In terms of applications, our work has allowed us to better understand how nonlinear systems evolve. And these systems are all around us. In nature, for example, they are the basic ingredients for forming rogue waves, the exceptionally high waves that can be observed in the ocean. With a better understanding of Fermi-Pasta-Ulam-Tsingou recurrence and of the energy variations in nonlinear systems, we could better understand the mechanisms that shape rogue waves and detect them more effectively. Nonlinear systems are also present in many optical tools. Modern LIDARs that use frequency combs, or “laser rulers”, calculate distances by sending out a laser beam and very precisely timing how long it takes to return, like a radar except with light. But these lasers behave nonlinearly: here again, our work can help optimize the operation of new-generation LIDARs, which could be used for navigating autonomous cars. Finally, thanks to their extreme precision, calculations on nonlinear physical systems are also involved in detecting exoplanets.
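
As a rough illustration of the time-of-flight principle mentioned above (a minimal sketch in the same spirit as the earlier one, not code from the study), the distance simply follows from half the round-trip time multiplied by the speed of light:

# Minimal time-of-flight sketch: a LIDAR measures how long a light pulse takes
# to travel to the target and back; distance = speed_of_light * time / 2.
C = 299_792_458.0   # speed of light in vacuum, in m/s

def distance_from_round_trip(t_seconds):
    # Divide by 2 because the pulse travels out to the target and back
    return C * t_seconds / 2.0

print(distance_from_round_trip(66.7e-9))   # about 10 m for a 66.7 ns round trip

The precision of the measured distance therefore rests entirely on how precisely that return time is clocked, which is why the stability of these nonlinear laser sources matters so much.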