
Recycling carbon fibre composites: a difficult task

Carbon fibre composite materials are increasingly widespread, and their use continues to rise every year. Recycling these materials remains difficult, but is nevertheless necessary at the European level for environmental, economic and legislative reasons. At IMT Mines Albi, researchers are working on a new method: vapo-thermolysis. While this process offers promising results, there are many steps to be taken before a recycling system can be developed. 

 

The new shining stars of aviation giants Airbus and Boeing, the A350 and the 787 Dreamliner, are also symbols of the growing prevalence of composite materials in our environment. Aircraft, along with wind turbines, cars and sports equipment, increasingly contain these materials. Carbon fibre composites still represent a minority of the composites on the market — far behind fiberglass — but their use is growing by 10 to 15% per year. Manufacturers must now address the question of what will become of these materials when they reach the end of their lives. In today’s society, where considering the environmental impact of a product is no longer optional, the question of recycling cannot be ignored.

At IMT Mines Albi, scientific research being carried out by Yannick Soudais[1] and Gérard Bernhart[2] addresses this issue. The researchers in polymer and materials chemistry are developing a new process for recycling carbon fibre composites. This is no small task, since it requires separating the fibre, present in the form of a textile or unidirectional filaments, from the solid polymer resin that forms the matrix in which it is embedded. Two main processes currently exist to separate the fibre from the resin: pyrolysis and solvolysis. The first consists of thermally decomposing the matrix in an inert nitrogen atmosphere, so as to avoid burning away part of the fibre. The second is a chemical method based on solvents, which is laborious because it requires high temperatures and pressures.

The process developed by the Albi-based researchers is called “vapo-thermolysis” and combines these two processes. At present, it is one of the most promising solutions in the world for moving toward the wide-scale reuse of carbon fibres. Besides Albi, only a handful of other research centers in the world are working on this topic (mainly in Japan, China and South Korea). “We use superheated water vapor which acts as a solvent and induces chemical degradation reactions,” explains Yannick Soudais. Unlike pyrolysis, the process requires no nitrogen; and unlike the traditional chemical method, it takes place at atmospheric pressure. In short, vapo-thermolysis is easier to implement and master on an industrial scale.

After recovery, reuse

The simplest way to reuse carbon fibres is to spread out the bundle of interlinked fibres on a flat surface and reuse it in this form, as a mat. These mats can then be used to make composites for decorative parts rather than structural parts. The recovered fibres can also be cut down further and used as reinforcement in polymer pellets. This approach makes it possible to produce automobile parts by injection molding, for example. Demonstrations illustrating this type of reuse have been carried out by the researchers in collaboration with the Toulouse-based company Alpha Recyclage Composites (ARC).

But the real challenge remains being able to reuse these fibres for higher-performance applications. To do so, “we have to be able to make spun fibres from short fibres,” says Gérard Bernhart. “We’re carrying out extensive research on this topic in partnership with ARC because so far, no one in the world has been able to do that.” These prospects involve techniques specific to the textile industry, which is why the researchers have formed a partnership with the French Institute of Textiles and Clothing (IFTH). For now, the work is only in its exploratory stages and focuses on identifying technologies that could be used to develop reshaping processes. One idea, for example, is to use ribbed rollers to form homogeneous yarns, then a carding machine to create a uniform web, followed by a drawing and spinning stage.

For manufacturers of composite parts, these prospects open the door to more economically competitive materials. Of course, recycling is an environmental issue, and certain regulations establish standards of behavior for manufacturers. This is the case, for example, for automobile manufacturers, who must ensure, regardless of the parts used in their cars, that 85% of the vehicle’s mass can be recycled when it reaches the end of its life. But mature, efficient recycling processes also help lower the cost of manufacturing carbon fibre composite parts.

When the fibre is new it costs €25 per kilo, or even €80 per kilo for fibres produced for high-performance materials. “The price is mainly explained by the material and energy costs involved in fibre manufacturing,” says Gérard Bernhart. Recycled fibres would therefore lead to new industrial opportunities. Far from being unrelated to the environmental perspective, this economic aspect could, on the contrary, be a driving force for developing an effective system for recycling carbon fibres.

 

[1] Yannick Soudais is a researcher at the Rapsodee laboratory, a joint research unit of IMT Mines Albi and CNRS.
[2] Gérard Bernhart is a researcher at the Clément Ader Institute, a joint research unit of IMT Mines Albi, ISAE, INSA Toulouse, University Toulouse III-Paul Sabatier and CNRS.

 


Imaging to help people with brain injuries

Brain injuries involve complex cognitive and neurobiological processes. This is the case for people who have suffered a stroke, or who are in a minimally conscious state close to a vegetative state. At IMT Mines Alès, Gérard Dray is working on new technology combining neuroimaging and statistical learning. This research improves how we observe patients’ brain activity. Ultimately, his studies could greatly help to rehabilitate trauma patients.

 

As neuroimaging technology becomes more effective, the brain is slowly losing its mystery; and as our ability to observe what is happening inside this organ becomes more accurate, numerous possibilities are opening up, notably in medicine. For several years, at IMT Mines Alès, Gérard Dray has been working on new tools to detect brain activity. More precisely, he is aiming to improve how we record and understand the brain signals captured by techniques such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS). In partnership with the University of Montpellier’s research center EuroMov, and Montpellier and Nîmes University Hospitals, Dray is putting his research into application in order to support patients who have suffered severe brain damage.

Read on I’MTech: Technology that decrypts the way our brain works

This is notably the case for stroke victims: a part of their brain does not receive enough blood from the circulatory system and becomes necrotic. The neurons in this part of the brain die and the patient can lose certain motor functions in their legs or arms. However, this disability is not necessarily permanent. Appropriate rehabilitation can allow stroke victims to regain part of their motor ability. “This is possible thanks to the plasticity of the brain, which allows it to move functions stored in the necrotic zone into a healthy part of the brain,” explains Gérard Dray.

Towards Post-Stroke Rehabilitation

In practice, this transfer happens thanks to rehabilitation sessions. Over several months, a stroke victim who has lost their motor skills is asked to imagine moving the part of their body that they are unable to move. In the first few sessions, a therapist guides the movement of the patient. The patient’s brain begins to associate the request for movement with the sensation of the limb moving, and gradually recreates these neural connections in a healthy area of the brain. “These therapies are recent, less than 20 years old,” points out the researcher at IMT Mines Alès. Although they have already proven that they work, they still have several limitations that Dray and his team are trying to overcome.

One of the problems with these therapies is the great uncertainty as to the patient’s involvement. When the therapist moves the limb of the victim and asks them to think about moving it, there is no guarantee that they are doing the exercise correctly. If the patient’s thoughts are not synchronized with their movement, then their rehabilitation will be much slower, and may even become ineffective in some cases. By using neuroimaging, researchers want to ensure that the patient is using their brain correctly and is not just being passive during a kinesiotherapy session. But the researchers want to go one step further. By knowing when the patient is thinking about lifting their arm or leg, it is possible to make a part of rehabilitation autonomous.

“With our partners, we have developed a device that detects brain signals and is connected to an automatic glove,” describes Gérard Dray. “When we detect that the patient is thinking about lifting their arm, the glove carries out the associated movement.” The researcher warns that this cannot and should not replace sessions with a therapist, as these are essential for the patient to understand the rehabilitation system. However, the device allows the victim to complete the exercises from the sessions by themselves, which speeds up the transfer of brain functions towards a healthy zone. As with a fracture, stroke patients often have to go through physiotherapy sessions both at the hospital and at home by themselves.


A glove which is connected to a brain activity detection system can help post-stroke rehabilitation.

 

The main task for this device is being able to detect the brain signals associated with the movement of the limb.  When observing brain activity, the imaging tools record a constellation of signals associated with all the background activities managed by the brain. The neuronal signal which causes the arm to move gets lost in the crowd of background signals.  In order to isolate it, the researchers use statistical learning tools. The patients are first asked to carry out guided and supervised motor actions, while their neural activity is recorded. Then, they move freely during several sessions, while being monitored by EEG or NIRS technology. Once sufficient data has been collected, the algorithms can categorize the signals by action and can therefore deduce, through real-time neuroimaging, if the patient is in the process of trying to move their arm or not.
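To make the statistical learning step more concrete, here is a minimal, purely illustrative sketch in Python: band-power features are extracted from labelled EEG epochs and fed to an off-the-shelf classifier. The sampling rate, frequency bands, channel count and choice of classifier are assumptions made for the example; this is not the pipeline actually used by the IMT Mines Alès and EuroMov teams.

```python
# Illustrative sketch only: classify "rest" vs "imagined arm movement" epochs
# from band-power features. Sampling rate, band choice and classifier are
# assumptions, not the pipeline actually used by the researchers.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 256            # assumed sampling rate (Hz)
MU_BAND = (8, 13)   # mu rhythm, typically attenuated during motor imagery
BETA_BAND = (13, 30)

def band_power(epoch, band, fs=FS):
    """Mean power of one epoch (channels x samples) in a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[:, mask].mean(axis=1)           # one value per channel

def features(epochs):
    """Stack mu and beta band power for every epoch."""
    return np.array([np.concatenate([band_power(e, MU_BAND),
                                     band_power(e, BETA_BAND)])
                     for e in epochs])

# epochs: (n_epochs, n_channels, n_samples); labels: 0 = rest, 1 = imagery.
# Random data is used here so the script runs end to end.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 8, 2 * FS))
labels = rng.integers(0, 2, size=120)

X = features(epochs)
clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

In a real system, the trained classifier would then be applied to short sliding windows of the live EEG stream, which is what allows a device like the glove described above to react when motor imagery is detected.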

In partnership with Montpellier University Hospital, a first clinical trial with the device was carried out on 20 patients. The results served to validate the device. “Although the results are positive, we are still not completely satisfied with them,” admits Dray. “The algorithms only detected the patients’ intention to move their arm in 80% of cases. This means that two times out of ten, the patient thinks about moving without us being able to pick up that intention using neuroimaging.” To improve these detection rates, the researchers are working on numerous algorithms for categorizing brain activity. “Notably, we are trying to couple imaging techniques with techniques that can detect fainter signals,” he continues.

Detecting Consciousness After Head Trauma

Improving these brain activity detection tools is not only useful for post-stroke rehabilitation. The IMT Mines Alès team also uses the technology it has developed with people who have suffered head trauma and whose state of consciousness has been altered. After an accident, a victim who is not responsive, but whose respiratory and circulatory functions are in good condition, can be in several different states: a total and normal state of consciousness, a coma, a vegetative state, a state of minimal consciousness, or locked-in syndrome. “These different states are characterized by two factors, consciousness and wakefulness,” summarizes Dray. In a normal state, we are both awake and conscious. A person who is in a coma is neither awake nor conscious. A person in a vegetative state is awake but not conscious of their surroundings.

Depending on their state, patients receive different types of care and have different prospects of recovery. The great difficulty for doctors is identifying patients who are awake without being responsive, but whose consciousness is not entirely gone. “With these people there is a hope that their state of consciousness will be able to return to normal,” explains the researcher. However, the patient’s state of consciousness is sometimes very weak, and it has to be detected using high-quality neuroimaging tools. For this, Gérard Dray and his team use EEG paired with sound stimuli. He explains the process: “We speak to the person and explain that we are going to play them a series of low-pitched tones, with high-pitched tones interspersed among them. We ask them to count the high-pitched tones. Their brain will react to each sound. However, when a high-pitched sound is played, the cognitive response will be stronger, as these are the signals the brain is asked to keep track of. More precisely, a wave called the P300 is generated when our attention is caught by a stimulus. In the case of the high-pitched sounds, the patient’s brain will generate this wave much more strongly.”
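The classic way to analyse this kind of auditory “oddball” protocol is to average the EEG epochs that follow each type of tone and compare the response around 300 ms. The toy sketch below does exactly that on simulated signals; the sampling rate, amplitudes and time window are assumptions chosen for illustration only, not values from the clinical protocol described here.

```python
# Toy illustration of the auditory "oddball" analysis: average the EEG epochs
# following frequent (low-pitched) and rare (high-pitched) tones and compare
# the response around 300 ms. All signals here are simulated.
import numpy as np

FS = 250                       # assumed sampling rate (Hz)
T = np.arange(0, 0.8, 1 / FS)  # 0 to 800 ms after each tone

def p300_template(amplitude):
    """Positive wave peaking around 300 ms after the stimulus."""
    return amplitude * np.exp(-((T - 0.3) ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(1)
def epoch(is_rare):
    # Assumption: rare, attended tones evoke a larger P300 (5 µV vs 1 µV).
    signal = p300_template(5.0 if is_rare else 1.0)
    return signal + rng.normal(0, 2.0, size=T.size)   # background EEG as noise

frequent = np.mean([epoch(False) for _ in range(160)], axis=0)
rare = np.mean([epoch(True) for _ in range(40)], axis=0)

window = (T > 0.25) & (T < 0.35)          # window around 300 ms
print("mean amplitude 250-350 ms, frequent tones: %.2f µV" % frequent[window].mean())
print("mean amplitude 250-350 ms, rare tones:     %.2f µV" % rare[window].mean())
# A clearly larger value for the rare tones is the signature of a preserved P300.
```

In the clinical setting, it is the presence or absence of this amplitude difference, rather than any overt response from the patient, that points to residual conscious processing of the instruction.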

Temporal monitoring of brain activity after an auditory stimulus using an EEG device.

 

Patients who still have some degree of consciousness will produce a normal EEG response to the exercise, despite not being able to communicate or move. A victim in a vegetative state, on the other hand, will not respond to the stimuli. The results of the first clinical trials carried out on patients who had experienced head trauma are currently being analyzed. The initial feedback is promising for the researchers, who have already managed to detect differences in P300 wave generation. “Our work is only just beginning,” states Gérard Dray. “We started our research on the detection of consciousness in 2015, and it’s a very recent field.” With continuing progress in neuroimaging techniques and learning tools, this is an entire field of neurology that is about to undergo major advances.

 


A mini revolution in railway catenaries

The decade-long ACCUM project carried out by SNCF, Stratiforme Industries, the Valenciennes Railway Testing Center and IMT Lille Douai has led to the development of a new catenary cantilever system for railways. This advance represents a major change in this field, where equipment has seen little change over the past 50 years.

 

When asked to draw a train on a railway track, odds are that most people would not think to include the poles along the railway. Yet these vertical support structures, placed every 20 to 60 meters along electrified railway lines, are essential for supporting the overhead lines that power the trains. The portion located at their peak, which is responsible for holding the wires, plays an especially crucial role. It is composed of a cantilever system, which must at once support the contact wire at a constant height with centimeter precision (for high-speed lines), withstand significant mechanical stresses arising from the tension applied to the wires, and ensure electrical insulation between the wire and the pole.

A traditional catenary cantilever is made up of numerous parts which take a long time to assemble.

 

The catenary cantilever system is an extremely sensitive piece of equipment which has remained virtually unchanged for half a century. Composed of some hundred parts assembled into a triangular structure of metal tubes, current cantilevers are a real puzzle to assemble and adjust. “Since the stresses are triangulated on the structure, when an adjustment is made at one location within the system, everything is shifted, and it all has to be adjusted again,” explains Patrice Hulot. An engineer at IMT Lille Douai, he contributes to the ACCUM[1] project, which aims to simplify catenary cantilevers.


The ACCUM catenary cantilevers are made from composites, and are much easier for operators to assemble.

 

This modernization project has been carried out over the last ten years by the SNCF and Stratiforme, a company that specializes in composite materials. In 2019, it culminated in the installation of 50 prototypes on test lines at the Railway Testing Centre (CEF), followed by installations on commercial lines. It represents a revolution for SNCF lines, and offers catenary operators the first in-depth modification of this system in fifty years.

A universal catenary cantilever system

What sets the new cantilevers developed through the ACCUM project apart is that they are composed of a limited number of parts to assemble on site. “Everything is delivered 80% pre-assembled,” says David Cnockaert, project manager at Stratiforme. “And the final components to be assembled make it possible to cover all the different pole configurations and railway types.” Furthermore, these new fittings can be used for 1,500-volt and 25,000-volt lines alike. The flexibility of this system allows it to be described as universal, since it can be adapted to all types of electrification, hence its name, ACCUM, the French acronym for “universal multi-voltage composite catenary cantilever.”

The first systems to be installed demonstrated how easy they are to assemble compared to the previous systems, the major benefit being the time needed for adjustment and fine-tuning, which represents up to 50% of the total time needed to install or replace cantilevers. Reducing the time required to set up the cantilevers significantly increases the availability of railways during renovation work, while lowering installation costs and shortening the work needed to put in new lines. These results are all the more satisfying given that they are only initial results. “Operators had decades of experience to optimize the installation of the old systems, so the installation of the new cantilevers will clearly take less time in the months and years to come,” says Patrice Hulot.

This innovation has received praise within the industry. The project received an Innovation Award in March at JEC World, the global composite materials show. And while the new catenary cantilever has so far only been deployed on French railways, this mini revolution in railway equipment has the potential for international success. The SNCF is a global leader in the high-speed rail sector, in terms of both rolling stock and infrastructure, and its competitors closely monitor developments in the field. This means that Japan or North Africa could soon be added to the list of future markets for the universal composite catenary cantilever.

 

[1] FUI ACCUM project, co-funded by BPI France and the Hauts-de-France region and accredited by the i-TRANS competitiveness cluster. Project leader: Stratiforme. Partners: IMT Lille Douai, ARMINES, Railway Testing Center (CEF) and the SNCF Network.

 


CiViQ: working towards implementing quantum communications on our networks

At the end of 2018, the CiViQ H2020 European project was launched for a period of three years. The project aims to integrate quantum communication technologies into traditional telecommunication networks. This scientific challenge calls upon Télécom Paris’ dual expertise in quantum cryptography and optical telecommunications, and will provide more security for communications. Romain Alléaume, a researcher in quantum information, is a member of CiViQ. He explained to us the challenges and context of the project.

 

What is the main objective of the CiViQ project?

Romain Alléaume: The main objective of the project is to make quantum communications technologies, and in particular coherent quantum communications, much better adapted for use on fiber-optic communication networks. To do this, we want to improve the integration, miniaturization and interoperability of these quantum communication technologies.

Why do you want to integrate quantum communications into telecommunication networks?

RA: Quantum communications are particularly resistant to interception because they are typically based on the exchange of light pulses containing very few photons. On such a minuscule scale, any attempt to eavesdrop on the communications, and therefore to measure them, comes up against the fundamental principles of quantum physics. These principles guarantee that the measurement will disrupt the communication enough for the eavesdropper to be detected.

Based on this idea, it is possible to develop protocols called Quantum Key Distribution, or QKD. These protocols allow a secret encryption key to be shared with the help of quantum communication. Unlike in mathematical cryptography, a key exchange through QKD cannot be recorded and therefore cannot be deciphered later on. QKD thus offers what is called “everlasting security”: the security will last no matter how much computing power a potential attacker may one day have.
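To illustrate the general principle of QKD, here is a toy simulation of the textbook BB84 protocol, showing the key-sifting step and the error rate introduced by an intercept-and-resend eavesdropper. It is only a discrete-variable caricature for intuition; CiViQ itself targets coherent, continuous-variable protocols, and all parameters below are arbitrary.

```python
# Toy BB84 simulation: illustrates the QKD principle only (discrete-variable,
# textbook protocol), not the coherent CV-QKD systems developed in CiViQ.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                   # number of transmitted qubits

alice_bits = rng.integers(0, 2, N)
alice_bases = rng.integers(0, 2, N)          # 0 = rectilinear, 1 = diagonal

EVE_PRESENT = True
if EVE_PRESENT:
    eve_bases = rng.integers(0, 2, N)
    # If Eve measures in the wrong basis, her result (and the qubit she
    # resends) is random.
    intercepted = np.where(eve_bases == alice_bases,
                           alice_bits,
                           rng.integers(0, 2, N))
else:
    intercepted = alice_bits

bob_bases = rng.integers(0, 2, N)
# Bob recovers the transmitted bit only when his basis matches the basis in
# which the incoming state was prepared; otherwise his outcome is random.
sender_bases = eve_bases if EVE_PRESENT else alice_bases
bob_bits = np.where(bob_bases == sender_bases,
                    intercepted,
                    rng.integers(0, 2, N))

# Sifting: keep only positions where Alice and Bob used the same basis.
keep = alice_bases == bob_bases
qber = np.mean(alice_bits[keep] != bob_bits[keep])
print(f"sifted key length: {keep.sum()}, error rate: {qber:.1%}")
# Without an eavesdropper the error rate is ~0%; with one it jumps to ~25%.
```

This detectable jump in the error rate is what lets the legitimate parties discard a compromised key, which is the property referred to above when saying that an eavesdropper inevitably disturbs the exchange.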

What will this project mean for the implementation of quantum communications in Europe?

RA: The European Commission has launched a large program dedicated to quantum technologies which will run for 10 years, called the Quantum Technology Flagship. The aim of the flagship is to accelerate technological development and convert research in these fields into technological innovation. The CiViQ project is one of the projects chosen for the first stage of this program. For the first time in a quantum communications project, several telecommunications operators are also taking part: Orange, Deutsche Telekom and Telefónica. So it is an extensive project in the technological development of coherent quantum communications, with research ranging from co-integration with classical communications to photonic integration. CiViQ should not only enable the implementation of quantum cryptography on a very large scale, but also outline prospects for securing communications more generally, reinforcing the security of critical infrastructure by relying on the networks’ physical layer.

What are the technological and scientific challenges which you face?

RA: One of the biggest challenges we face is merging classical optical communications and quantum communications. In particular, we must work on implementing them jointly on the same optical fiber, using similar, if not identical, equipment. To do that, we are calling on Télécom Paris’ diverse expertise. I am working with Cédric Ware and Yves Jaouen, specialists in optical telecommunications. This collaboration allows us to combine our expertise in quantum cryptography and optical networks. We use a state-of-the-art experimental platform to study the convergence of classical and quantum optical communications.

More broadly, how does the project reflect the work of other European projects that you are carrying out in quantum communications?

RA: As well as CiViQ, we are taking part in the OpenQKD project, which is also part of the Quantum Technology Flagship. The project involves pilot implementations of QKD, with the prospect of Europe developing a quantum communications infrastructure within 10 to 15 years. I am also involved in standardization activity in quantum cryptography, working with the ETSI QKD Industry Specification Group. With this group, I mainly work on issues such as the cryptographic assessment and certification of QKD technology.

How long have you been involved in developing these technologies?

RA: Télécom Paris has been involved in European research in quantum cryptography and communications for 15 years, in particular through the implementation of the first European quantum key distribution network as part of the SECOQC project, which ran from 2004 to 2008. We also took part in the FP7 Q-CERT project, which focused on the security of quantum cryptography implementations. More recently, the school has partnered with the Q-CALL H2020 project, which focuses on the industrial development of quantum communications as well as on a possible future “quantum internet”. The latter relies on using quantum communications end to end, which is made possible by improvements in the reliability of quantum memories.

In parallel, my colleagues who specialize in optical telecommunications have been developing world-class expertise in coherent optical communications for around a decade. CiViQ aims to integrate quantum communications with this type of communications, relying on the fact that the two are based on the same signal-processing techniques.

What will be the outcomes of the CiViQ project?

RA: We expect to make key contributions to the experimental laboratory demonstration of the convergence of quantum and classical communications, with a level of integration that has not yet been achieved. A collaboration with Orange is also planned on issues of wavelength-division multiplexing. The technology will then be demonstrated between the future Télécom Paris site in Palaiseau and Orange Labs in Châtillon.

Finally, we expect theoretical contributions on new quantum cryptography protocols, security-proof techniques and the certification of QKD technology, which will have an impact on standardization.


Nuclear: a multitude of scenarios to help imagine the future of the industry

Article written in partnership with The Conversation.
By Stéphanie Tillement and Nicolas Thiolliere, IMT Atlantique.


Nuclear energy plays a very important role in France – where 75% of the country’s electricity is produced using this energy – and raises crucial questions concerning both its role in the future electricity mix and methods for managing the associated radioactive materials and waste.

But the discussions held as part of the recent Multiannual Energy Plan (PPE) gave little consideration to these issues.  The public debate on radioactive material and waste management set to launch on Wednesday 17 April may provide an opportunity to delve deeper into these issues.

Fifty-eight pressurized water reactors (PWR), referred to as “second generation,” are currently in operation in France and produce the majority of the country’s electricity. Nineteen of these reactors were put into operation before 1981 and will reach their design service life of 40 years over the next three years.

The future of the nuclear industry represents a crucial question, which will likely have a lasting effect on all industry stakeholders  – electricity producers, distribution system operators, energy providers and consumers. This means that all French citizens will be affected.

Imagining the future of the nuclear industry

Investment decisions regarding the electricity sector can establish commitments for the country that will last tens, or even hundreds of years, and this future clearly remains uncertain. Against this backdrop, forward-looking approaches can help plan for the future and identify, even partially, the possible consequences of the choices we make today.

Such an approach involves first identifying and then analyzing the different possible paths for the future in order to assess them and possibly rank them.

The future of the nuclear industry includes a relatively wide range of possibilities: it varies according to the evolution of installed capacity and the pace at which new technologies (EPR technology, referred to as “third generation”, or fast neutron reactor (RNR) technology, referred to as “fourth generation”) are deployed.

Given the great degree of uncertainty surrounding the future of the nuclear industry, research relies on simulation tools; the “electronuclear scenario” represents one of the main methods. Little known to the general public, it differs from the energy scenarios used to inform discussions for the Multiannual Energy Plan (PPE). The nuclear scenario is a basic building block of the energy scenario and is based on a detailed description of nuclear facilities and the physics that governs them. In practice, energy and nuclear scenarios can complement one another, with the outcomes of the former providing hypotheses for the latter, and the results of the latter making it possible to analyze in greater detail the different paths set out by the former.

The aim of studying the nuclear scenario is to analyze one or several development paths for nuclear facilities from a materials balance perspective, meaning tracking the evolution of radioactive materials (uranium, plutonium, fission products etc.) in nuclear power plants. In general, it relies on a complex modeling tool that manages a range of scales, both spatial (from elementary particle to nuclear power plants) and temporal (from less than a microsecond for certain nuclear reactions to millions of years for certain types of nuclear waste).

Based on a precise definition of a power plant and its evolution over time, the simulation code calculates the evolution of the mass of each element of interest, radioactive or otherwise, across all nuclear facilities. This data can then serve as the basis for producing more useful data concerning the management of resources and recycled materials, radiation protection etc.
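To give an idea of the kind of bookkeeping such a scenario code performs, here is a deliberately caricatural sketch in Python: a constant reactor fleet whose natural uranium consumption, spent fuel and plutonium inventory are tracked year by year. The figures are rough orders of magnitude chosen for the example, not data from the French fleet or from real scenario codes, which track individual isotopes and fuel-cycle facilities in far greater detail.

```python
# Caricatural "electronuclear scenario": year-by-year mass balance of a
# constant reactor fleet. All figures are rough orders of magnitude chosen
# for illustration only.
HEAVY_METAL_LOADED_PER_GWE_YEAR = 20.0   # tonnes of uranium fuel per GWe-year (assumed)
PU_FRACTION_IN_SPENT_FUEL = 0.01         # ~1% of spent fuel mass is plutonium (assumed)
FLEET_CAPACITY_GWE = 60.0                # assumed installed capacity
LOAD_FACTOR = 0.75

def run_scenario(years):
    natural_uranium_used = 0.0
    spent_fuel = 0.0
    plutonium = 0.0
    history = []
    for year in range(1, years + 1):
        produced_gwe_years = FLEET_CAPACITY_GWE * LOAD_FACTOR
        fuel = HEAVY_METAL_LOADED_PER_GWE_YEAR * produced_gwe_years
        # Enrichment: roughly 8 tonnes of natural uranium per tonne of fuel (assumed).
        natural_uranium_used += 8.0 * fuel
        spent_fuel += fuel
        plutonium += PU_FRACTION_IN_SPENT_FUEL * fuel
        history.append((year, natural_uranium_used, spent_fuel, plutonium))
    return history

for year, u_nat, spent, pu in run_scenario(40)[::10]:
    print(f"year {year:2d}: natural U used {u_nat:8.0f} t, "
          f"spent fuel {spent:6.0f} t, plutonium in spent fuel {pu:5.0f} t")
```

Real scenario studies replace these constants with reactor physics calculations and explore how the inventories change when, for instance, fast reactors or plutonium multi-recycling are introduced.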

Emergence of new players

Long reserved for nuclear institutions and operators, the scenario-building process has gradually opened up to academic researchers, driven largely by the Bataille Law of 1991 and the Birraux Law of 2006 concerning radioactive waste management. These laws resulted in a greater diversity of players involved in producing, assessing and using scenarios.

In addition to the traditional players (EDF and CEA in particular), the CNRS and academic researchers (primarily physicists and more recently economists) and representatives of civil society have taken on these issues by producing their own scenarios.

There have been significant developments on the user side as well. Whereas prior to the Bataille and Birraux Laws, nuclear issues were debated almost exclusively between nuclear operators and the executive branch of the French government, giving rise to the image of issues confined to “ministerial secrecy,” these laws have allowed for these issues to be addressed in more public and open forums, in particular in the academic and legislative spheres.

They also created National Assessment Committees, composed of twelve members selected based on proposals by the Académie des Sciences, the Académie des Sciences Morales et Politiques, and the  French Parliamentary Office for the Evaluation of Scientific and Technological Choices. The studies of scenarios produced by institutional, industrial and academic players are assessed by these committees and outlined in annual public reports sent to the members of the French parliament.

Opening up this process to a wider range of players has had an impact on the scenario-building practices, as it has led to a greater diversity of scenarios and hypotheses on which they are based.

A variety of scenarios

The majority of the scenarios developed by nuclear institutions and industry players are “realistic” proposals according to these same parties: scenarios based on feedback from the nuclear industry.  They rely on technology already developed or in use and draw primarily on hypotheses supporting the continued use of nuclear energy, with an unchanged installed capacity.

The scenarios proposed by the research world tend to give less consideration to the obligation of “industrial realism,” and explore futures that disrupt the current system. Examples include research carried out on transmutation in accelerator-driven systems (ADS), design studies for molten salt reactors (MSR), which are sometimes described as “exotic” reactors, and studies on the thorium cycle. A recent study also analyzed the impact of recycling plutonium in reactors of the current technology, as part of a plan to significantly reduce, or even eliminate, the share of nuclear energy by 2050.

These examples show that academic scenarios are often developed with the aim of deconstructing the dominant discourse in order to foster debate.

Electronuclear scenarios clearly act as “boundary objects.” They provide an opportunity to bring together different communities of stakeholders, with varied knowledge and different, sometimes opposing, interests, in order to compare their visions for the future, organize their strategies and even cooperate. As such, they help widen the “scope of possibilities” and foster innovation through the greater diversity of scenarios produced.

Given the inherent uncertainties of the nuclear world, this diversity also appears to be a key to ensuring more robust and reliable scenarios, since discussing these scenarios forces stakeholders to justify the hypotheses, tools and criteria used to produce them, which are often still implicit.

Debating scenarios

However, determining how these various scenarios can be used to support “informed” decisions remains controversial.

The complexity of the system to be modeled requires simplifications, thus giving rise to biases which are difficult to quantify in the output data. These biases affect both technical and economic data and are often rightly used to dispute the results of scenarios and the recommendations they may support.

How, then, can we ensure that the scenarios produced are robust? There are two opposing strategies. Should we try to build simple or simplified scenarios in an attempt to make them understandable to the general public (and especially to politicians), at the risk of neglecting important variables and leading to “biased” decisions? Or should we produce scenarios that are complex, but more faithful to the processes and uncertainties involved, at the risk of making them largely “opaque” to decision-makers and, more broadly, to the citizens invited to take part in the public debate?

As of today, these scenarios are too little debated outside of expert circles. But let us hope that the public debate on radioactive waste management will provide an excellent opportunity to bring these issues further into the “scope of democracy,” in the words of Christian Bataille.


Stéphanie Tillement, Sociologist, IMT Atlantique – Institut Mines-Télécom and Nicolas Thiolliere, Associate Professor in reactor physics, IMT Atlantique – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


XENON1T observes one of the rarest events in the universe

The researchers working on the XENON1T project observed a strange phenomenon: the simultaneous capture of two electrons by the atomic nucleus of xenon. The phenomenon is so rare that it earned the scientific collaboration, which includes the Subatech[1] laboratory, a spot on the cover of the prestigious journal Nature on 25 April 2019. It is the rarest process, and the one with the longest half-life, ever to be directly measured in the universe. Although the research team considers this observation — the first in the world — to be a success, it was not their primary aim. Dominique Thers, researcher at IMT Atlantique and head of the French portion of the XENON1T team, explains below.

 

What is this phenomenon of a simultaneous capture of two electrons by an atomic nucleus?

Dominique Thers: In an atom, it’s possible for an electron [a negative charge] orbiting a nucleus to be captured by it. Inside the nucleus, a proton [a positive charge] thus becomes a neutron [a neutral charge]. This is a known phenomenon and has already been observed. However, the theory prohibits this phenomenon for certain atomic elements. This is the case for isotope 124 of xenon, whose nucleus cannot capture a single electron. The only event allowed by the laws of physics for this isotope of xenon is the capture of two electrons at the same time, which neutralize two protons, thereby producing two neutrons in the nucleus. The xenon 124 therefore becomes tellurium 124, another element. It’s this simultaneous double capture phenomenon that we observed, which had never been seen before.
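Written as a nuclear reaction, in standard notation rather than wording taken from the interview, the double capture described above reads:

$$^{124}_{54}\mathrm{Xe} \;+\; 2e^{-} \;\longrightarrow\; ^{124}_{52}\mathrm{Te} \;+\; 2\nu_{e}$$

Two orbital electrons are absorbed by the nucleus, two of its 54 protons become neutrons (hence the shift from xenon to tellurium), and two electron neutrinos carry away most of the released energy; the X-rays and Auger electrons mentioned below come from the atomic shells reorganizing themselves afterwards.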

XENON1T was initially designed to search for WIMPs, the particles that make up the mysterious dark matter of the universe. How do you go from this objective to observing the atomic phenomenon, as you were able to do?

DT: In order to observe WIMPs, our strategy is based on the exposure of two tons of liquid xenon. This xenon contains different isotopes, of which approximately 0.15% are xenon 124. That may seem like a small amount, but in two tons of liquid it represents a very large number of atoms. So there is a chance that this simultaneous double capture event will occur. When it does, the cloud of electrons around the nucleus reorganizes itself, simultaneously emitting X-rays and a specific kind of electrons known as Auger electrons. Both of these interact with the xenon and produce light through the same mechanism by which the WIMPs of dark matter would react with xenon. Using the same measuring instrument as the one designed to detect WIMPs, we can therefore observe this simultaneous double capture mechanism. And it’s the energy signature of the event we measure that gives us information about the nature of the event. In this case, the energy released was approximately twice the energy required to bind an electron to its nucleus, which is characteristic of a double capture.

To understand how the XENON1T detector works, read our dedicated article: XENON1T: A giant dark matter hunter

Were you expecting to observe these events?

DT: We did not build XENON1T to observe these events at all. However, we have a cross-cutting research approach: we knew there was a chance that the double capture would occur, and that we might be able to detect it if it did. We also knew that the community that studies atomic stability and this type of phenomenon hoped to observe such an event to consolidate their theories. Several other experiments around the world are working on this. What’s funny is that one of these experiments, XMASS, located in Japan, had published a limit ruling out such an observation over a much longer period of time than what we observed. In other words, according to the previous findings of their research on double electron capture, we weren’t supposed to observe the phenomenon with the parameters of our experiment. In reality, after re-evaluation, they were just unlucky, and could have observed it before we did with similar parameters.

One of the main characteristics of this observation, which makes it especially important, is its half-life time. Why is this?

DT: The half-life measured is 1.8×10²² years, which corresponds to about 1,000 billion times the age of the universe. To put it simply, within a sample of xenon 124, it takes billions of billions of years before half of the atoms have undergone this decay. So it’s an extremely rare process. It’s the phenomenon with the longest half-life ever directly observed in the universe; longer half-lives have only been deduced indirectly. What’s important to understand, behind all this information, is that successfully observing such a rare event means that we understand the matter that surrounds us very well. We wouldn’t have been able to detect this double capture if we hadn’t understood our environment with such precision.
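As a back-of-envelope illustration of why such a rare decay can nevertheless be seen, one can estimate the number of double captures expected each year in a tonne-scale detector. The sketch below uses round figures (about two tonnes of xenon, roughly 0.1% of which is xenon 124) and gives only an order of magnitude.

```python
# Order-of-magnitude estimate: expected xenon-124 double electron captures
# per year in a tonne-scale liquid xenon detector. Round figures only.
import math

AVOGADRO = 6.022e23
XENON_MOLAR_MASS = 131.3          # g/mol, natural xenon
XE124_FRACTION = 0.001            # ~0.1% of atoms are xenon-124 (rounded)
DETECTOR_MASS_G = 2.0e6           # ~2 tonnes of liquid xenon
HALF_LIFE_YEARS = 1.8e22          # value measured by XENON1T

n_xe124 = DETECTOR_MASS_G / XENON_MOLAR_MASS * AVOGADRO * XE124_FRACTION
decay_constant = math.log(2) / HALF_LIFE_YEARS      # per year
events_per_year = n_xe124 * decay_constant

print(f"xenon-124 atoms in the detector: {n_xe124:.2e}")
print(f"expected double captures per year: {events_per_year:.0f}")
# Even with a half-life of ~10^22 years, a couple of tonnes of xenon contain
# so many atoms of the right isotope (~10^25) that a few hundred decays per
# year are expected, which is what makes the direct observation possible.
```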

Beyond this discovery, how does this contribute to your search for dark matter?

DT: The key added value of this result is that it reassures us about the analysis we’re carrying out. It’s challenging to search for dark matter without ever achieving positive results. We’re often confronted with doubt, so seeing a positive signal with a specific signature that has never been observed until now is reassuring. It proves that we’re breaking new ground, and encourages us to remain motivated. It also proves the usefulness of our instrument and attests to the quality and precision of our calibration campaigns.

Read on I’MTech: Even without dark matter Xenon1T is a success

What’s next for XENON1T?

DT: We still have an observation campaign to analyze, since our approach is to let the experiment run for several months without human interference, then to retrieve and analyze the measurements to look for results. We improved the experiment at the beginning of 2019 to further increase the sensitivity of our detector. XENON1T initially contained a ton of xenon, which explains its name. At present, it holds more than double that amount, and by the end of the upgrade the experiment will be called XENONnT and will contain 8 tons of xenon. This will allow us to lower the detection sensitivity limit for WIMPs by a factor of ten, in the hope of finally detecting these dark matter particles.

[1] The Subatech laboratory is a joint research unit between IMT Atlantique/CNRS/University of Nantes.

 


In search of forgotten cements

Out of the 4 billion tons of cement produced every year, the overwhelming majority is Portland cement. Invented over 200 years ago in France by Louis Vicat — then patented by Englishman Joseph Aspdin — Portland is a star in the world of building materials. Its almost unparalleled durability has allowed it to outperform its competitors, so much so that the synthesis methods and other cement formulations used in the 19th and early 20th centuries have since been forgotten. Yet buildings constructed with these cements still stand today, and they cannot be restored using Portland, which now dominates the market almost exclusively. In a quest to retrieve this lost technical expertise, Vincent Thiéry, a researcher at IMT Lille Douai, has launched the CASSIS[1] project. In the following interview, he presents his research at the border between history and materials science.

 

How can we explain that the cement industry is now dominated by a single product: Portland cement?

Vincent Thiéry: Cement as we know it today was invented in 1817 by a young bridge engineer, Louis Vicat. He needed a material with high mechanical strength, which would set and remain durable under water, in order to build the Souillac bridge over the Dordogne river. He therefore developed a cement based on limestone and clay fired at 1,500°C, which would later be patented by an Englishman, Joseph Aspdin, under the name Portland cement in 1824. The performance of Portland cement gradually made it the leading cement. In 1856, the first French industrial cement plant to produce Portland cement opened in Boulogne-sur-Mer. By the early 20th century, the global market was already dominated by this cement.

What has become of the other cements that coexisted with Portland cement between its invention and its becoming the sole standard cement?

VT: Some of these other cements still exist today. One such example is Prompt cement, which is also called Roman cement — its ochre color reminded its inventor of Roman buildings. It’s an aesthetic restoration cement invented in 1796 by Englishman James Parker. It’s starting to gain popularity again today since it emits less CO2 into the atmosphere and can be mixed with plant fibers to make more environmentally-friendly cements. But it’s one of the few cements that still exist along with Portland cement. The majority of the other cements stopped being produced altogether as of the late 19th or early 20th centuries.

Have these cements always had the same formulation as they do today?

VT: No, they evolved over the course of the second half of the 19th century and were gradually modified. For example, the earliest Portland cements, known as “meso-Portland cements,” were rich in aluminum. There was a wider range of components at that time than there is today. These cements can still be found across France, in railway structures or old abandoned bridges. Although they are there, right before our eyes, these cements are little known. We don’t know their exact formulation or the processes used to produce them. This is the aim of the CASSIS project: to recover this knowledge of old cements. The Boulogne-sur-Mer region in which we’ll be working should provide us with many examples of buildings made with these old cements, since it was where cement began its industrial rise in France. In the Marseille region, for example, research similar to what we plan to conduct was carried out by the concrete division of the French Historic Monument Research Laboratory (LRMH), one of our partners for the CASSIS project. This research helped trace the history of many “local” cements.

How do you go about finding these cements?

VT: The LRMH laboratory is part of the Ministry of Culture. It directly contacts town halls and private individuals who own buildings known to be made with old concretes. This work combines history and archeology since we also look for archival documents to provide information about the old buildings, then we visit the site to make observations. Certain advertising documents from the period, which boast about the structures, can be very helpful. In the 1920s, for example, cement manufacturer Lafarge (which has since become LafargeHolcim) published a catalogue describing the uses of some of its cements, supported by photos and recommendations.

Once a structure has been identified as incorporating forgotten cements, how do you go back in time to deduce the composition and processes used?

VT: It’s a matter of studying the microstructure of the material. We set up an array of analyses using fairly conventional techniques in the field of mineralogy: optical and scanning electron microscopy, Raman spectroscopy, X-ray diffraction, etc. This allows us to detect the mineralogical changes that appear during the firing of clay and limestone. This study provides us with a great deal of information: how the material was fired, at what temperature and for how long, as well as whether the clay and limestone were ground finely before firing. Certain characteristics of the microstructure can only be observed if the temperature has exceeded a certain level, or if the cement was fired very quickly. As part of the CASSIS project, we’ll also be using nuclear magnetic resonance, since the hydrates — which form when the cement sets — are poorly crystallized.

Such microstructural evidence within a cement paste, whether mortar or concrete (a relict speck of non-hydrated cement in a mortar from the early 1880s, for example), makes it possible to gain valuable insight into the nature of the cement used. To make these observations, samples are prepared as thin sections (30 micrometers thick) in order to be studied under an optical microscope. The compilation of observations and analyses of these samples provides information about the nature of the raw mix (the mix before firing) used to make the cement, as well as its fuel and firing conditions; the same approach will be used for the CASSIS project.

 

Do you have a way of verifying if your deductions are correct?

VT: Once we’ve deduced the possible scenarios for the mix and process used to obtain a cement, we plan to carry out tests to verify our hypotheses. For the project, we will try to resynthesize the forgotten cements using the information we have identified. We even hope to equip ourselves with a vertical cast-iron kiln to reproduce period firing, which was marked by irregular firing conditions inside the kiln. By comparing the cement obtained through these experiments with the cement in the structures we’ve identified, we can verify our hypotheses.

Why are you trying to recover the composition of these old cements? What is the point of this work since Portland cement is considered to be the best?  

VT: First of all, there’s the historical value: this research allows us to recover a forgotten technical culture. We don’t know much about the shift from Roman cement to Portland cement in the industry of the period. By studying the other cements that existed at the time of this shift, we may be able to better understand how builders gradually transitioned from one to the other. Furthermore, this research may be of interest to current industry players. Restoration work on structures built with forgotten cements is no easy matter: the new cement to be applied must first be found to be compatible with the old cement to ensure strong durability. So from a cultural heritage perspective, it’s important to be able to produce small quantities of cement adapted to specific restoration work.

 

[1] The CASSIS project is funded by the I-SITE Foundation (Initiatives-Science – Innovation –Territories– Economy) at the University of Lille Nord Europe. It brings together IMT Lille Douai, the French Historic Monument Research Laboratory (LRMH) of the Ministry of Culture, Centrale Lille, the Polytechnic University of Hauts-de-France, and the technical association of the hydraulic binders industry.

 


The ethical challenges of digital identity

Article written in partnership with The Conversation.
By Armen Khatchatourov and Pierre-Antoine Chardel, Institut Mines-Télécom Business School


The GDPR recently came into effect, confirming Europe’s role as an example in personal data protection. However, we must not let it dissuade us from examining issues of identity, which have been redefined in this digital era. This means thinking critically about major ethical and philosophical issues that go beyond the simple question of the protection of personal information and privacy.

Current data protection policy places an emphasis on the rights of the individual. But it does not assess the way in which our free will is increasingly restricted in ever more technologically complex environments, and even less the effects of the digital metamorphosis on the process of subjectification, the individual’s self-becoming. In these texts, more often than not, the subject is considered as already constituted, capable of exercising their rights, with their own free will and principles. And yet the characteristic of digital technology, as proposed here, is that it contributes to creating a new form of subjectivity: by constantly redistributing the parameters of constraint and incentive, it creates the conditions for increased individual malleability. We outline this process in the work Les identités numériques en tension (Digital Identities in Tension), written under the Values and Policies of Personal Information Chair at IMT.

The resources established by the GDPR are clearly necessary in supporting individual initiative and autonomy in managing our digital lives. Nonetheless, the very notions of the user’s consent and control over their data on which the current movement is based are problematic. This is because there are two ways of thinking, which are distinct, yet consistent with one another.

New visibility for individuals

Internet users seem to be becoming more aware of the traces they leave, willingly or not, during their online activity (connection metadata, for example). This may serve as support for the consent-based approach. However, this dynamic has its limits.

Firstly, the growing volume of information collected makes the notion of systematic user consent and control unrealistic, if only because of the cognitive overload it would induce. In addition, changes in the nature of technical collection methods, as demonstrated by the advent of connected objects, have led to an increase in sensors collecting data without the user even realizing it. Video surveillance combined with facial recognition is no longer a mere hypothesis, nor is the knowledge operators acquire from these data. This constitutes a sort of layer of digital identity whose content and possible uses are entirely unknown to the person from whom it is sourced.

What is more, there is a strong tendency for actors, both from government and the private sector, to want to create a full, exhaustive description of the individual, to the point of reducing them to a long list of attributes. Under this new power regime, what is visible is reduced to what can be recorded as data: human beings are made available as though they were simple objects.


Surveillance video. Mike Mozart/Wikipedia, CC BY.

 

The ambiguity of control

The second approach at play in our ultra-modern societies concerns the application of this paradigm based on protection and consent within the mechanisms of a neo-liberal society. Contemporary society combines two aspects of privacy: considering the individual as permanently visible, and as individually responsible for what can be seen about them. This set of social standards is reinforced each time the user gives (or refuses) consent to the use of their personal information. At each iteration, the user reinforces their vision of themselves as the author of, and person responsible for, the circulation of their data. They also assume control over their data, even though this is no more than an illusion. They especially assume responsibility for calculating the benefits that sharing data can bring. In this sense, the increasingly strict application of the paradigm of consent may be correlated with a perception of the individual as more than just the object of almost total visibility: they also become a rational economic agent, capable of analyzing their own actions in terms of costs and benefits.

This fundamental difficulty means that the future challenges for digital identities imply more than just providing for more explicit control or more enlightened consent. Complementary approaches are needed, likely related to users’ practices (not simply their “uses”), on the condition that such practices bring about resistance strategies for circumventing the need for absolute visibility and definition of the individual as a rational economic agent.

Such digital practices should encourage us to look beyond our understanding of social exchange, whether digital or otherwise, under the regime of calculating potential benefits or external factors. In this way, the challenges of digital identities far outweigh the challenges of protecting individuals or those of “business models”, instead affecting the very way in which society as a whole understands social exchange. With this outlook, we must confront the inherent ambivalence and tension of digital technologies by looking at the new forms of subjectification involved in these operations.  A more responsible form of data governance may arise from such an analytical exercise.


Armen Khatchatourov, Lecturer-Researcher, Institut Mines-Télécom Business School and Pierre-Antoine Chardel, Professor of social science and ethics, Institut Mines-Télécom Business School

This article has been republished from The Conversation under a Creative Commons license. Read the original article here.

 


Digital sovereignty: can the Russian Internet cut itself off from the rest of the world?

This article was originally published in French in The Conversation, an international collaborative news website of scientific expertise, of which IMT is a partner. 

Article written by Benjamin Loveluck (Télécom ParisTech), Francesca Musiani (Sorbonne Université), Françoise Daucé (EHESS), and Ksenia Ermoshina (CNRS).


The Internet infrastructure is based on the principle of the internationalization of equipment and of data and information flows. Elements of the Internet located in one national territory need physical and information resources hosted in other territories in order to function. However, in this globalized context, Russia has been working since 2012 to gradually increase national controls over information flows and infrastructure, in an atmosphere of growing political mistrust towards protest movements within the country and its international partners abroad. Several laws have already been passed in this regard, such as the one in force since 2016 requiring companies processing the data of Russian citizens to store them on national territory, or the one regulating the use of virtual private networks (VPNs), proxies and anonymization tools, in force since 2017.

In February 2019, a bill titled “On the isolation of the Russian segment of the Internet” was adopted at first reading in the State Duma (334 votes for and 47 against) on the initiative of Senators Klichas and Bokova and Deputy Lugovoi. The accompanying memo of intent states that the text is a response to the “aggressive nature of the United States National Cybersecurity Strategy” adopted in September 2018. The project focuses on two main areas: domain name system control (DNS, the Internet addressing system) and traffic routing, the mechanism that selects paths in the Internet network for data to be sent from a sender to one or more recipients.
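
As a minimal illustration of the addressing side, the sketch below queries one of the DNS root servers directly for the .ru zone; it assumes the third-party dnspython library and the public address of a.root-servers.net (198.41.0.4), neither of which is mentioned in the article. If the existing root servers became unreachable, this first step of name resolution would have to be answered by a national copy of the root instead.

```python
# Minimal sketch: ask a DNS root server which name servers are responsible
# for the .ru zone. Assumes the third-party "dnspython" package and the
# public address of a.root-servers.net; neither is cited in the article.
import dns.message
import dns.query
import dns.rdatatype

ROOT_SERVER = "198.41.0.4"  # a.root-servers.net, operated outside Russia

def query_root_for_tld(tld: str) -> None:
    """Send an NS query for a top-level domain straight to a root server."""
    request = dns.message.make_query(tld, dns.rdatatype.NS)
    response = dns.query.udp(request, ROOT_SERVER, timeout=5)
    # The root replies with a referral: the TLD's name servers appear in
    # the authority section of the response.
    for record_set in response.authority:
        print(record_set)

if __name__ == "__main__":
    query_root_for_tld("ru.")  # fails if no root server can be reached
```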

Russia wants to free itself from foreign constraints

The recommendations notably include two key measures. The first is the creation by Russia of its own version of the DNS in order to be able to operate if links to servers located abroad are broken, since none of the twelve entities currently responsible for the DNS root servers are located on Russian territory. The second is for Internet Service Providers (ISPs) to demonstrate that they are able to direct information flows exclusively to government-controlled routing points, which should filter traffic so that only data exchanged between Russians reaches its destination.
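
The second measure amounts to a nationality check on traffic. Below is a toy sketch of that rule, assuming placeholder documentation prefixes in place of Russian address space (not a real list); it is only a rough picture of what a government-controlled routing point would have to enforce.

```python
# Toy sketch of routing-point filtering: let a packet through only when both
# endpoints are inside "national" address space. The prefixes are RFC 5737
# documentation ranges used as placeholders, not actual Russian networks.
from ipaddress import ip_address, ip_network

NATIONAL_PREFIXES = [ip_network("198.51.100.0/24"),
                     ip_network("203.0.113.0/24")]

def is_national(address: str) -> bool:
    ip = ip_address(address)
    return any(ip in prefix for prefix in NATIONAL_PREFIXES)

def allow_packet(src: str, dst: str) -> bool:
    """Only purely domestic traffic reaches its destination."""
    return is_national(src) and is_national(dst)

print(allow_packet("198.51.100.7", "203.0.113.42"))  # True: both domestic
print(allow_packet("198.51.100.7", "192.0.2.10"))    # False: foreign endpoint
```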

This legislation is the cornerstone of the Russian government’s efforts to promote its “digital sovereignty”. According to Russian legislators, the goal is to develop a way of isolating the Russian Internet on demand, making it possible to respond to the actions of foreign powers with self-sufficiency and to guarantee that the network keeps functioning. On the other hand, this type of configuration would also make it easier to block all or part of the country’s communications.

The Russian state is obviously not the only one aiming for better control of the network. Iran has been trying to do the same thing for years, as has China with the famous Great Firewall of China. Many states are seeking to reinforce their authority over “their” Internet, to the point of partially or totally cutting off the network (measures known as “shutdowns” or “kill switches”) in some cases. This was the case in Egypt during the 2011 revolution as well as more recently in Congo during the elections. It is also regularly the case in some parts of India.

In connection with these legislative projects, a recent initiative, reported on February 12 by the Russian news agency TASS, has attracted particular attention. Under the impetus of the Russian State, a group uniting the main public and private telecommunications operators (led by Natalya Kasperskaya, co-founder of the well-known security company Kaspersky) has decided to conduct a test in which the Russian Internet would be temporarily cut off from the rest of the globalized network, and in particular from the World Wide Web. This should in principle happen before April 1, the deadline for amendments to the draft law, which requires Russian internet providers to guarantee that they can operate autonomously from the rest of the network.

Technical, economic and political implications

However, beyond the symbolic significance of such a major country asserting its autonomy through disconnection, there are many technical, economic, social and political reasons why such an attempt is problematic, for the Internet at both the international and the national scale.

From a technical point of view, even if Russia tries to prepare as much as possible for this disconnection, there will inevitably be unanticipated effects if it seeks to separate itself from the rest of the global network, due to the degree of interdependence of the latter across national borders and at all protocol levels. It should be noted that, unlike China which has designed its network with a very specific project of centralized internal governance, Russia has more than 3,000 ISPs and a complex branched-out infrastructure with multiple physical and economic connections with foreign countries. In this context, it is very difficult for ISPs and other Internet operators to know exactly how and to what extent they depend on other infrastructure components (traffic exchange points, content distribution networks, data centers etc.) located beyond their borders. This could lead to serious problems, not only for Russia itself but also for the rest of the world.

In particular, the test could pose difficulties for other countries that route traffic through Russia and its infrastructure, the extent of which is difficult to determine in advance. The effects of the test will most likely be studied and anticipated carefully enough to prevent a real disaster, such as a long-term disruption of major infrastructure like transport. More likely consequences are the malfunctioning or slowdown of websites frequently used by the average user. Most of these websites operate from multiple servers located across the globe. Wired magazine gives the example of a news site that depends on “an Amazon Web Services cloud server, Google tracking software and a Facebook plug-in for leaving comments”, all three operating outside Russia.
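
The kind of dependence Wired describes can be made visible with a few lines of code. The sketch below, which assumes the third-party requests and beautifulsoup4 packages and uses a purely illustrative URL, lists the hosts a page loads scripts, images and stylesheets from, and marks those that do not fall under the .ru top-level domain (which is, of course, only a rough proxy for where a service is actually operated).

```python
# Sketch: list the external hosts a web page pulls resources from, and flag
# those outside the .ru top-level domain. Assumes the third-party packages
# "requests" and "beautifulsoup4"; the URL is an illustrative placeholder.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def external_hosts(page_url: str) -> dict:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = {}
    for tag, attr in (("script", "src"), ("img", "src"), ("link", "href")):
        for element in soup.find_all(tag):
            url = element.get(attr)
            if not url:
                continue
            host = urlparse(url).netloc
            if host:
                hosts[host] = host.endswith(".ru")  # True if nominally domestic
    return hosts

if __name__ == "__main__":
    for host, domestic in external_hosts("https://example.ru/").items():
        print(f"{host:40s} {'domestic' if domestic else 'foreign'}")
```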

Economically speaking, due to the complex infrastructure of the Russian Internet and its strong connections with the rest of the Internet, such a test would be difficult and costly to implement. The Accounts Chamber of Russia very recently opposed this legislation on the grounds that it would lead to an increase in public expenditure to help operators implement technology and to hire additional staff at Roskomnadzor, the communications monitoring agency, which will open a center for the supervision and administration of the communication network. The Russian Ministry of Finance is also concerned about the costs associated with this project. Implementing the law could be costly for companies and encourage corruption.

Lastly, from the point of view of political freedoms, the new initiative is provoking the mobilization of citizen movements. “Sovereignty” carries even greater risks of censorship. The system would be supervised and coordinated by the state communications monitoring agency, Roskomnadzor, which already centralizes the blocking of thousands of websites, including major information websites. The implementation of this project would broaden the possibilities for traffic inspection and censorship in Russia, says the Roskomsvoboda association. As mentioned above, it could facilitate the possibility of shutting down the Internet or controlling some of its applications, such as Telegram (which the Russian government tried to block unsuccessfully in spring 2018). A similar attempt at a cut or “Internet blackout” was made in the Republic of Ingushetia as part of a mass mobilization in October 2018, when the government succeeded in cutting off traffic almost completely. A demonstration “against the isolation of the Runet” united 15,000 people in Moscow on March 10, 2019 at the initiative of multiple online freedom movements and parties, reflecting concerns expressed in society.

Is it possible to break away from the global Internet today, and what are the consequences? It is difficult to anticipate all the implications of such major changes on the global architecture of the Internet. During the discussion on the draft law in the State Duma, Deputy Oleg Nilov, from the Fair Russia party, described the initiative as a “digital Brexit” from which ordinary users in Russia will be the first to suffer. As has been seen (and studied) on several occasions in the recent past, information and communication network infrastructures have become decisive levers in the exercise of power, on which governments intend to exert their full weight. But, as elsewhere, the Russian digital space is increasingly complex, and the results of ongoing isolationist experiments are more unpredictable than ever.

[divider style=”dotted” top=”20″ bottom=”20″]

Francesca Musiani, Head Researcher at the CNRS, Institute for Communication Sciences (ISCC), Sorbonne Université; Benjamin Loveluck, Lecturer, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay; Françoise Daucé, Director of Studies, School of Advanced Studies in Social Sciences (EHESS); and Ksenia Ermoshina, Doctor in Socio-Economics of Innovation, French National Centre for Scientific Research (CNRS)

This article was first published in French in The Conversation under a Creative Commons license. Read the original article.

camouflage, military vehicles

Military vehicles are getting a new look for improved camouflage

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.

[divider style=”normal” top=”20″ bottom=”20″]

How can military vehicles be made more discreet on the ground? This is the question addressed by the Caméléon project of the Directorate General of Armaments (DGA), involving Nexter group and IMT Atlantique in the framework of the Télécom & Société numérique Carnot Institute. Taking inspiration from the famous lizard, researchers are developing a high-tech skin able to replicate surrounding colors and patterns.

 

Every year on July 14, the parades on the Champs-Élysées show off French military vehicles in forest colors. They are covered in a black, green and brown pattern for camouflage in the wooded landscapes of Europe. Less frequently seen on television are the specific camouflage schemes used in other parts of the world. Leclerc tanks, for example, may be painted in ochre colors for desert areas, or grey for urban operations. Yet despite this range of camouflage patterns, military vehicles are not always very discreet.

“There may be significant variations in terrain within a single geographical area, making the effectiveness of camouflage variable,” explains Éric Petitpas, Head of new protection technologies specializing in land defense systems at Nexter Group. Adjusting the colors to the day’s mission is not an option: each change of paint requires the vehicle to be immobilized for several days. “It slows down reaction time when you want to dispatch vehicles for an external operation,” underlines Éric Petitpas. To overcome this lack of flexibility, Nexter has partnered with several specialized companies and laboratories, including IMT Atlantique, to help develop a dynamic camouflage. The objective is to be able to equip vehicles with technology that adapts to their surroundings in real time.

This project, named Caméléon, was initiated by the Directorate General of Armaments (DGA) and “is a real scientific challenge,” explains Laurent Dupont, a researcher in optics at IMT Atlantique (a member of the Télécom & Société numérique Carnot Institute). For the scientists, the challenge lies first and foremost in fully understanding the problem. Stealth is based on the enemy’s perception, and therefore depends on technical aspects such as contrast, colors, brightness, spectral band and pattern. “We have to combine several disciplines, from computer science to colorimetry, to understand what will make a dynamic camouflage effective or not,” the researcher continues.

Stealth tiles

The approach adopted by the scientists is based on the use of tiles attached to the vehicles. A camera is used to record the surroundings, and an image analysis algorithm identifies the colors and patterns representative of the environment. A suitable pattern and color palette are then displayed on the tiles covering the vehicle to replicate the colors and patterns of the surrounding environment. If the vehicle is located in an urban environment, for example, “the tiles will display grey, beige, pink, blue etc. with vertical patterns to simulate buildings in the distance” explains Éric Petitpas.
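
The article does not say which image-analysis algorithm the Caméléon project uses, but a common way to reduce a camera frame to a small palette of representative colors is k-means clustering over its pixels. The sketch below, assuming OpenCV and scikit-learn purely for illustration, extracts five dominant colors that the tiles could then reproduce.

```python
# Illustrative only: extract a 5-color palette from a camera frame with
# k-means clustering. OpenCV and scikit-learn are assumed; the article does
# not specify which algorithm the Caméléon project actually uses.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dominant_palette(image_path: str, n_colors: int = 5) -> np.ndarray:
    frame = cv2.imread(image_path)                    # BGR image from disk
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # work in RGB
    pixels = frame.reshape(-1, 3).astype(np.float32)  # one row per pixel
    model = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Each cluster center is the average color of one region of the scene.
    return model.cluster_centers_.astype(np.uint8)

if __name__ == "__main__":
    for color in dominant_palette("surroundings.jpg"):  # placeholder file name
        print(tuple(int(c) for c in color))             # e.g. (87, 94, 71)
```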

To change the color of the tiles, the researchers use selective spectral reflectivity technology. Contrary to what could be expected, it is not a question of projecting an image onto the tile as though it were a TV screen. “The color changes are based on a reflection of external light, selecting certain wavelengths to display as though choosing from the colors of the rainbow,” explains Éric Petitpas. “We can selectively choose which colors the tiles will reflect and which colors will be absorbed,” says Laurent Dupont. The combination of colors reflected at a given point on the tile generates the color perceived by the onlooker.
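
In standard colorimetric terms (the formulas below are textbook CIE colorimetry, not taken from the article), the color perceived by the onlooker follows from the ambient illuminant spectrum weighted by the tile’s spectral reflectance, which is precisely the quantity the tiles adjust by choosing which wavelengths are reflected and which are absorbed:

```latex
% Perceived color of a point on the tile as CIE tristimulus values of the
% reflected light: I is the ambient illuminant, R the tile's spectral
% reflectance (the quantity the tiles adjust), and \bar{x}, \bar{y}, \bar{z}
% the CIE standard observer functions. Wavelengths are in nanometres.
\[
X = \int_{380}^{780} I(\lambda)\,R(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda,\qquad
Y = \int_{380}^{780} I(\lambda)\,R(\lambda)\,\bar{y}(\lambda)\,\mathrm{d}\lambda,\qquad
Z = \int_{380}^{780} I(\lambda)\,R(\lambda)\,\bar{z}(\lambda)\,\mathrm{d}\lambda
\]
```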

A prototype of the new “Caméléon” camouflage was presented at the 2018 Defense Innovation Forum

This technology was demonstrated at the 2018 Defense Innovation Forum dedicated to new defense technology, where a small robot, 50 centimeters long and covered in a skin of Caméléon tiles, was presented. The consortium now wants to move on to a true-to-scale prototype. In addition to requiring further development, the technology must also adapt to all types of vehicles. “For the moment we are developing the technology on a small-scale vehicle, then we will move on to a 3 m² prototype, before progressing to a full-size vehicle,” says Éric Petitpas. The camouflage technology could then be quickly adapted to other entities, such as infantrymen.

New questions are emerging as the prototypes prove their worth, opening up new opportunities to further the partnership between Nexter and IMT Atlantique that was set up in 2012. Caméléon is the second upstream study program of the DGA in which IMT Atlantique has taken part. On the technical side, researchers must now ensure the scaling up of tiles capable of equipping full-size vehicles. A pilot production line for these tiles, led by Nexter and E3S, a Brest-based SME, has been launched to meet the program’s objectives. The economic aspect should not be forgotten either: covering a vehicle with tiles will inevitably be more expensive than painting it, but the ability to adapt the camouflage to any type of environment, without immobilizing the vehicle to repaint it, is a major operational advantage. There are plenty of new challenges to be met before we see stealth vehicles in the field… or rather before we don’t see them!

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg, Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]