
XENON1T observes one of the rarest events in the universe

The researchers working on the XENON1T project observed a strange phenomenon: the simultaneous capture of two electrons by the nucleus of a xenon atom. The phenomenon is so rare that it earned the scientific collaboration, which includes the Subatech[1] laboratory, a spot on the cover of the prestigious journal Nature on 25 April 2019. It is the rarest and slowest process ever directly measured in the universe. Although the research team considers this observation, the first in the world, to be a success, it was not their primary aim. Dominique Thers, researcher at IMT Atlantique and head of the French portion of the XENON1T team, explains below.

 

What is this phenomenon of a simultaneous capture of two electrons by an atomic nucleus?

Dominique Thers: In an atom, it's possible for an electron [a negative charge] orbiting the nucleus to be captured by it. Inside the nucleus, a proton [a positive charge] thus becomes a neutron [a neutral charge]. This is a known phenomenon and has already been observed. However, theory forbids this single capture for certain atomic elements. This is the case for isotope 124 of xenon, whose nucleus cannot capture a single electron. The only event allowed by the laws of physics for this isotope of xenon is the capture of two electrons at the same time, which neutralize two protons, thereby producing two neutrons in the nucleus. The xenon 124 therefore becomes tellurium 124, another element. It is this simultaneous double capture that we detected, a phenomenon that had never been observed before.

XENON1T was initially designed to search for WIMPs, the particles that make up the mysterious dark matter of the universe. How do you go from this objective to observing the atomic phenomenon, as you were able to do?

DT: In order to observe WIMPs, our strategy is based on the exposure of two tons of liquid xenon. This xenon contains different isotopes, of which approximately 0.15% is xenon 124. That may seem like a small amount, but for two tons of liquid it represents a very large number of atoms, so there is a chance that this simultaneous double capture will occur. When it does, the cloud of electrons around the nucleus reorganizes itself and simultaneously emits X-rays and a specific kind of electron known as Auger electrons. Both of these interact with the xenon and produce light through the same mechanism by which the WIMPs of dark matter would interact with xenon. Using the same measuring instrument as the one designed to detect WIMPs, we can therefore observe this simultaneous double capture mechanism. And it is the energy signature of the event we measure that gives us information about its nature. In this case, the energy released was approximately twice the energy required to bind an electron to its nucleus, which is characteristic of a double capture.
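To give a rough sense of the numbers quoted above, here is a back-of-the-envelope estimate (an illustration, not an official XENON figure) of how many xenon-124 nuclei such a target contains, using the 2 tons and 0.15% mentioned in the interview:

```python
# Illustrative estimate only, based on the figures quoted in the interview.
AVOGADRO = 6.022e23          # atoms per mole
total_mass_kg = 2000.0       # ~2 tons of liquid xenon
xe124_fraction = 0.0015      # ~0.15% of the xenon is the isotope 124
xe124_molar_mass_kg = 0.124  # ~124 g/mol

xe124_mass_kg = total_mass_kg * xe124_fraction             # ~3 kg of xenon-124
n_atoms = xe124_mass_kg / xe124_molar_mass_kg * AVOGADRO
print(f"{n_atoms:.1e} xenon-124 atoms")                    # on the order of 1e25
```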

To understand how the XENON1T detector works, read our dedicated article: XENON1T: A giant dark matter hunter

Were you expecting to observe these events?

DT: We did not build XENON1T to observe these events at all. However, we have a cross-cutting research approach: we knew there was a chance that the double capture would occur, and that we might be able to detect it if it did. We also knew that the community that studies atomic stability and this type of phenomenon hoped to observe such an event to consolidate its theories. Several other experiments around the world are working on this. What is funny is that one of these experiments, XMASS, located in Japan, had published a limit ruling out such an observation for anything shorter than a half-life much longer than the one we measured. In other words, according to the previous findings of their research on double electron capture, we were not supposed to observe the phenomenon with the parameters of our experiment. In reality, after re-evaluation, they were just unlucky, and could have observed it before we did with similar parameters.

One of the main characteristics of this observation, which makes it especially important, is the half-life involved. Why is this?

DT: The half-life measured is 1.8×10²² years, which corresponds to about 1,000 billion times the age of the universe. To put it simply, within a sample of xenon 124, it takes billions of billions of years for half of the atoms to decay in this way. So it is an extremely rare process. It is the phenomenon with the longest half-life ever directly observed in the universe; longer half-lives have only been deduced indirectly. What is important to understand, behind all this information, is that successfully observing such a rare event means that we understand the matter that surrounds us very well. We would not have been able to detect this double capture if we had not understood our environment with such precision.
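As an illustration of what such a half-life means in practice, a short calculation (using the figures quoted in this interview and the atom count sketched above, not the collaboration's published analysis) shows both the comparison with the age of the universe and why a few hundred decays per year are still expected in so large a target:

```python
import math

half_life_yr = 1.8e22       # half-life quoted above
age_universe_yr = 1.38e10   # ~13.8 billion years
n_atoms = 1.5e25            # xenon-124 atoms, from the previous rough estimate

# Half-life expressed in ages of the universe: ~1.3e12, i.e. ~1,000 billion
print(f"{half_life_yr / age_universe_yr:.1e}")

# Expected decays per year: N * ln(2) / T_half (valid since t << T_half),
# which comes out to a few hundred double captures per year in the target
print(f"{n_atoms * math.log(2) / half_life_yr:.0f} decays per year")
```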

Beyond this discovery, how does this contribute to your search for dark matter?

DT: The key added-value of this result is that it reassures us about the analysis we’re carrying out. It’s challenging to search for dark matter without ever achieving positive results. We’re often confronted with doubt, so seeing that we have a positive signal with a specific signature that has never been observed until now is reassuring. This proves that we’re breaking new ground, and encourages us to remain motivated. It also proves the usefulness of our instrument and attests to the quality and precision of our calibration campaigns.

Read on I’MTech: Even without dark matter Xenon1T is a success

What’s next for XENON1T?

DT: We still have an observation campaign to analyze, since our approach is to let the experiment run for several months without human interference, then to retrieve and analyze the measurements to look for results. We upgraded the experiment at the beginning of 2019 to further increase the sensitivity of our detector. XENON1T initially contained one ton of xenon, which explains its name. At present, it holds more than double that amount, and by the end of the upgrade it will be called XENONnT and will contain 8 tons of xenon. This will lower our sensitivity limit for detecting WIMPs by a factor of ten, in the hope of finally detecting these dark matter particles.

[1] The Subatech laboratory is a joint research unit of IMT Atlantique, CNRS and the University of Nantes.

 

Vincent Thiéry

IMT Nord Europe | Civil Engineering, Environment, Geosciences, Geomaterials

Vincent Thiéry is a geologist, currently an associate professor (qualified to direct research) at IMT Nord Europe. His research focuses on microstructural investigations of geomaterials based on optical microscopy, scanning electron microscopy and confocal laser scanning microscopy. Building on these tools, his research topics cover a wide spectrum, ranging from old cements (within the framework of the CASSIS project) to more fundamental geological questions (the first finding of microdiamonds in metropolitan France) as well as industrial applications such as polluted soil remediation using hydraulic binders. In his field of expertise, he insists on the need to break down the boundaries between fundamental and engineering sciences so that each can enrich the other.



In search of forgotten cements

Out of the 4 billion tons of cement produced every year, the overwhelming majority is Portland cement. Invented over 200 years ago in France by Louis Vicat, then patented by the Englishman Joseph Aspdin, Portland is a star in the world of building materials. Its almost unparalleled durability has allowed it to outperform its competitors, so much so that the synthesis methods and other cement formulations used in the 19th and early 20th centuries have since been forgotten. Yet buildings constructed with these cements still stand today, and they cannot be restored using Portland, which has become a near-monopoly. In a quest to retrieve this lost technical expertise, Vincent Thiéry, a researcher at IMT Lille Douai, has launched the CASSIS[1] project. In the following interview, he presents his research at the border between history and materials science.

 

How can we explain that the cement industry is now dominated by a single product: Portland cement?

Vincent Thiéry: Cement as we know it today was invented in 1817 by a young bridge engineer, Louis Vicat. He needed a material with high mechanical strength that would set and remain durable under water, in order to build the Souillac bridge over the Dordogne river. He therefore developed a cement based on limestone and clay fired at 1,500°C, which was later patented by an Englishman, Joseph Aspdin, under the name Portland cement in 1824. Its performance gradually made Portland the leading cement. In 1856, the first French industrial cement plant to produce Portland cement opened in Boulogne-sur-Mer. By the early 20th century, the global market was already dominated by this cement.

What has become of the other cements that coexisted with Portland cement between its invention and its becoming the sole standard cement?

VT: Some of these other cements still exist today. One example is Prompt cement, also called Roman cement because its ochre color reminded its inventor of Roman buildings. It is a restoration cement prized for its aesthetics, invented in 1796 by the Englishman James Parker. It is starting to gain popularity again today, since it emits less CO2 into the atmosphere and can be mixed with plant fibers to make more environmentally friendly materials. But it is one of the few cements that still exist alongside Portland cement. Most of the others stopped being produced altogether by the late 19th or early 20th century.

Have these cements always had the same formulation as they do today?

VT: No, they evolved over the second half of the 19th century and were gradually modified. For example, the earliest Portland cements, known as "meso-Portland cements", were rich in aluminum. There was a wider range of components at that time than there is today. These cements can still be found across France, in railway structures or old abandoned bridges. Although they are there, right before our eyes, these cements are little known. We do not know their exact formulation or the processes used to produce them. This is the aim of the CASSIS project: to recover this knowledge of old cements. The Boulogne-sur-Mer region, where we will be working, should provide us with many examples of buildings made with these old cements, since it is where cement began its industrial rise in France. In the Marseille region, for example, research similar to what we plan to conduct was carried out by the concrete division of the French Historic Monument Research Laboratory (LRMH), one of our partners in the CASSIS project. This research helped trace the history of many "local" cements.

How do you go about finding these cements?

VT: The LRMH laboratory is part of the Ministry of Culture. It directly contacts town halls and private individuals who own buildings known to be made with old concretes. This work combines history and archeology since we also look for archival documents to provide information about the old buildings, then we visit the site to make observations. Certain advertising documents from the period, which boast about the structures, can be very helpful. In the 1920s, for example, cement manufacturer Lafarge (which has since become LafargeHolcim) published a catalogue describing the uses of some of its cements, supported by photos and recommendations.

Once a structure has been identified as incorporating forgotten cements, how do you go back in time to deduce the composition and processes used?

VT: It's a matter of studying the microstructure of the material. We set up an array of analyses using fairly conventional techniques from mineralogy: optical and scanning electron microscopy, Raman spectroscopy, X-ray diffraction, etc. This allows us to detect the mineralogical changes that appear during the firing of clay and limestone. The study provides a great deal of information: how the material was fired, at what temperature and for how long, and whether the clay and limestone were ground finely before firing. Certain characteristics of the microstructure can only be observed if the temperature exceeded a certain level, or if the cement was fired very quickly. As part of the CASSIS project, we will also be using nuclear magnetic resonance, since the hydrates, which form when the cement sets, are poorly crystallized.

Microstructural evidence of this kind within a cement paste, whether mortar or concrete (for example, a relic speck of non-hydrated cement in a mortar from the early 1880s), makes it possible to gain valuable insight into the nature of the cement used. To make these observations, samples are prepared as thin sections (30 micrometers thick) so that they can be studied under an optical microscope. The compilation of observations and analyses of these samples provides information about the nature of the raw mix (the mix before firing) used to make the cement, its fuel and its firing conditions. The same approach will be used for the CASSIS project.

 

Do you have a way of verifying if your deductions are correct?

VT: Once we have deduced the possible scenarios for the mix and process used to obtain a cement, we plan to carry out tests to verify our hypotheses. For the project, we will try to resynthesize the forgotten cements using the information we have identified. We even hope to equip ourselves with a vertical cast-iron kiln to reproduce period firing, with the irregular firing conditions that characterized such kilns. By comparing the cement obtained through these experiments with the cement in the structures we have identified, we can verify our hypotheses.

Why are you trying to recover the composition of these old cements? What is the point of this work since Portland cement is considered to be the best?  

VT: First of all, there is the historical value: this research allows us to recover a forgotten technical culture. We do not know much about the industry's shift from Roman cement to Portland cement during that period. By studying the other cements that existed at the time of this shift, we may be able to better understand how builders gradually transitioned from one to the other. Furthermore, this research may be of interest to current industry players. Restoration work on structures built with forgotten cements is no easy matter: the new cement to be applied must be compatible with the old one to ensure durability. So from a cultural heritage perspective, it is important to be able to produce small quantities of cement adapted to specific restoration work.

 

[1] The CASSIS project is funded by the I-SITE Foundation (Initiatives-Science – Innovation –Territories– Economy) at the University of Lille Nord Europe. It brings together IMT Lille Douai, the French Historic Monument Research Laboratory (LRMH) of the Ministry of Culture, Centrale Lille, the Polytechnic University of Hauts-de-France, and the technical association of the hydraulic binders industry.

 


The ethical challenges of digital identity

Article written in partnership with The Conversation.
By Armen Khatchatourov and Pierre-Antoine Chardel, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he GDPR recently came into effect, confirming Europe’s role as an example in personal data protection. However, we must not let it dissuade us from examining issues of identity, which have been redefined in this digital era. This means thinking critically about major ethical and philosophical issues that go beyond the simple question of the protection of personal information and privacy.

Current data protection policy places an emphasis on the rights of the individual. But it does not assess the way in which our free will is increasingly constrained in ever more technologically complex environments, and still less the effects of the digital metamorphosis on the process of subjectification, that is, the individual's self-becoming. In these texts, more often than not, the subject is considered as already constituted, capable of exercising their rights, with their own free will and principles. And yet the characteristic of digital technology, as we propose here, is that it contributes to creating a new form of subjectivity: by constantly redistributing the parameters of constraint and incentive, it creates the conditions for increased individual malleability. We outline this process in the book Les identités numériques en tension (Digital Identities in Tension), written under the Values and Policies of Personal Information Chair at IMT.

The resources established by the GDPR are clearly necessary in supporting individual initiative and autonomy in managing our digital lives. Nonetheless, the very notions of the user’s consent and control over their data on which the current movement is based are problematic. This is because there are two ways of thinking, which are distinct, yet consistent with one another.

New visibility for individuals

Internet users seem to be becoming more aware of the traces they leave, willingly or not, during their online activity (connection metadata, for example). This may serve as support for the consent-based approach. However, this dynamic has its limits.

Firstly, the growing volume of information collected makes the notion of systematic user consent and control unrealistic, if only because of the cognitive overload it would induce. Moreover, changes in the nature of collection methods, as demonstrated by the advent of connected objects, have led to an increase in sensors collecting data without the user even realizing it. Video surveillance combined with facial recognition, along with the knowledge operators acquire from these data, is no longer a mere hypothesis. This forms a sort of layer of digital identity whose content and possible uses are entirely unknown to the person it is sourced from.

What is more, there is a strong tendency for actors, both from government and the private sector, to want to create a full, exhaustive description of the individual, to the point of reducing them to a long list of attributes. Under this new power regime, what is visible is reduced to what can be recorded as data, making human beings available as though they were simple objects.


Surveillance video. Mike Mozart/Wikipedia, CC BY.

 

The ambiguity of control

The second approach at play in our ultra-modern societies concerns the application of this paradigm based on protection and consent within the mechanisms of a neo-liberal society. Contemporary society combines two aspects of privacy: considering the individual as permanently visible, and as individually responsible for what can be seen about them. This set of social standards is reinforced each time the user gives (or opposes) consent to the use of their personal information. At each iteration, the user reinforces their vision of themselves as the author and person responsible for the circulation of their data. They also assume control over their data, even though this is no more than an illusion. They especially assume responsibility for calculating the benefits that sharing data can bring. In this sense, the increasing and strict application of the paradigm of consent may be correlated with the perception of the individual becoming more than just the object of almost total visibility. They also become a rational economic agent, capable of analyzing their own actions in terms of costs and benefits.

This fundamental difficulty means that the future challenges for digital identities imply more than just providing for more explicit control or more enlightened consent. Complementary approaches are needed, likely related to users’ practices (not simply their “uses”), on the condition that such practices bring about resistance strategies for circumventing the need for absolute visibility and definition of the individual as a rational economic agent.

Such digital practices should encourage us to look beyond an understanding of social exchange, whether digital or otherwise, framed solely in terms of calculating potential benefits or externalities. In this way, the challenges of digital identities extend far beyond the protection of individuals or questions of "business models"; they affect the very way in which society as a whole understands social exchange. With this outlook, we must confront the inherent ambivalence and tensions of digital technologies by looking at the new forms of subjectification involved in these operations. A more responsible form of data governance may arise from such an analytical exercise.

[divider style=”normal” top=”20″ bottom=”20″]

Armen Khatchatourov, Lecturer-Researcher, Institut Mines-Télécom Business School and Pierre-Antoine Chardel, Professor of social science and ethics, Institut Mines-Télécom Business School

This article has been republished from The Conversation under a Creative Commons license. Read the original article here.

 


Digital sovereignty: can the Russian Internet cut itself off from the rest of the world?

This article was originally published in French in The Conversation, an international collaborative news website of scientific expertise, of which IMT is a partner. 

Article written by Benjamin Loveluck (Télécom ParisTech), Francesca Musiani (Sorbonne Université), Françoise Daucé (EHESS), and Ksenia Ermoshina (CNRS).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he Internet infrastructure is based on the principle of the internationalization of equipment and data and information flows. Elements of the Internet with a geographic location in national territories need physical and information resources hosted in other territories to be able to function. However, in this globalized context, Russia has been working since 2012 to gradually increase national controls on information flows and infrastructure, in an atmosphere of growing political mistrust towards protest movements within the country and its international partners abroad. Several laws have already been passed in this regard, such as the one in force since 2016 requiring companies processing data from Russian citizens to store them on national territory, or the one regulating the use of virtual private networks (VPNs), proxies and anonymization tools in force since 2017.

In February 2019, a bill titled “On the isolation of the Russian segment of the Internet” was adopted at first reading in the State Duma (334 votes for and 47 against) on the initiative of Senators Klichas and Bokova and Deputy Lugovoi. The accompanying memo of intent states that the text is a response to the “aggressive nature of the United States National Cybersecurity Strategy” adopted in September 2018. The project focuses on two main areas: domain name system control (DNS, the Internet addressing system) and traffic routing, the mechanism that selects paths in the Internet network for data to be sent from a sender to one or more recipients.

Russia wants to free itself from foreign constraints

The recommendations notably include two key measures. The first is the creation by Russia of its own version of the DNS in order to be able to operate if links to servers located abroad are broken, since none of the twelve entities currently responsible for the DNS root servers are located on Russian territory. The second is for Internet Service Providers (ISPs) to demonstrate that they are able to direct information flows exclusively to government-controlled routing points, which should filter traffic so that only data exchanged between Russians reaches its destination.

This legislation is the cornerstone of the Russian government’s efforts to promote their “digital sovereignty”. According to Russian legislators, the goal is to develop a way of isolating the Russian Internet on demand, making it possible to respond to the actions of foreign powers with self-sufficiency and to guarantee continued functioning. On the other hand, this type of configuration would also facilitate the possibility of blocking all or part of communications.

The Russian state is obviously not the only one aiming for better control of the network. Iran has been trying to do the same thing for years, as has China with the famous Great Firewall of China. Many states are seeking to reinforce their authority over “their” Internet, to the point of partially or totally cutting off the network (measures known as “shutdowns” or “kill switches”) in some cases. This was the case in Egypt during the 2011 revolution as well as more recently in Congo during the elections. It is also regularly the case in some parts of India.

In connection with these legislative projects, a recent initiative, reported on February 12 by the Russian news agency TASS, has attracted particular attention. Under the impetus of the Russian state, a group uniting the main public and private telecommunications operators (led by Natalya Kasperskaya, co-founder of the well-known security company Kaspersky) has decided to conduct a test in which the Russian Internet would be temporarily cut off from the rest of the globalized network, and in particular from the World Wide Web. This should in principle happen before April 1, the deadline for amendments to the draft law, which requires Russian Internet providers to guarantee that they can operate autonomously from the rest of the network.

Technical, economic and political implications

However, beyond the symbolic significance of empowerment through the disconnection of such a major country, there are many technical, economic, social and political reasons why such attempts should not be made, for the sake of the Internet on both an international and national scale.

From a technical point of view, even if Russia tries to prepare as much as possible for this disconnection, there will inevitably be unanticipated effects if it seeks to separate itself from the rest of the global network, due to the degree of interdependence of the latter across national borders and at all protocol levels. It should be noted that, unlike China which has designed its network with a very specific project of centralized internal governance, Russia has more than 3,000 ISPs and a complex branched-out infrastructure with multiple physical and economic connections with foreign countries. In this context, it is very difficult for ISPs and other Internet operators to know exactly how and to what extent they depend on other infrastructure components (traffic exchange points, content distribution networks, data centers etc.) located beyond their borders. This could lead to serious problems, not only for Russia itself but also for the rest of the world.

In particular, the test could pose difficulties for other countries that route traffic through Russia and its infrastructure, something which is difficult to define. The effects of the test will certainly be sufficiently studied and anticipated to prevent the occurrence of a real disaster like a long-term compromise of the functioning of major infrastructures such as transport. More likely consequences are the malfunctioning or slowdown of websites frequently used by the average user. Most of these websites operate from multiple servers located across the globe. Wired magazine gives the example of a news site that depends on “an Amazon Web Services cloud server, Google tracking software and a Facebook plug-in for leaving comments”, all three operating outside Russia.

Economically speaking, due to the complex infrastructure of the Russian Internet and its strong connections with the rest of the Internet, such a test would be difficult and costly to implement. The Accounts Chamber of Russia very recently opposed this legislation on the grounds that it would lead to an increase in public expenditure to help operators implement technology and to hire additional staff at Roskomnadzor, the communications monitoring agency, which will open a center for the supervision and administration of the communication network. The Russian Ministry of Finance is also concerned about the costs associated with this project. Implementing the law could be costly for companies and encourage corruption.

Lastly, from the point of view of political freedoms, the new initiative is provoking the mobilization of citizen movements. "Sovereignty" carries even greater risks of censorship. The system would be supervised and coordinated by the state communications monitoring agency, Roskomnadzor, which already centralizes the blocking of thousands of websites, including major information websites. The implementation of this project would broaden the possibilities for traffic inspection and censorship in Russia, says the Roskomsvoboda association. As mentioned above, it could facilitate the possibility of shutting down the Internet or controlling some of its applications, such as Telegram (which the Russian government tried to block unsuccessfully in spring 2018). A similar attempt at a cut or "Internet blackout" was made in the Republic of Ingushetia during a mass mobilization in October 2018, when the government succeeded in cutting off traffic almost completely. A demonstration "against the isolation of the Runet" united 15,000 people in Moscow on March 10, 2019 at the initiative of multiple online freedom movements and parties, reflecting concerns expressed in society.

Is it possible to break away from the global Internet today, and what are the consequences? It is difficult to anticipate all the implications of such major changes on the global architecture of the Internet. During the discussion on the draft law in the State Duma, Deputy Oleg Nilov, from the Fair Russia party, described the initiative as a “digital Brexit” from which ordinary users in Russia will be the first to suffer. As has been seen (and studied) on several occasions in the recent past, information and communication network infrastructures have become decisive levers in the exercise of power, on which governments intend to exert their full weight. But, as elsewhere, the Russian digital space is increasingly complex, and the results of ongoing isolationist experiments are more unpredictable than ever.

[divider style=”dotted” top=”20″ bottom=”20″]

Francesca Musiani, Head Researcher at the CNRS, Institute for Communication Sciences (ISCC), Sorbonne Université; Benjamin Loveluck, Lecturer, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay; Françoise Daucé, Director of Studies, the School of Advanced Studies in Social Sciences (EHESS); and Ksenia Ermoshina, Doctor in Socio-Economics of Innovation, French National Centre for Scientific Research (CNRS)

This article was first published in French in The Conversation under a Creative Commons license. Read the original article.


Military vehicles are getting a new look for improved camouflage

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.

[divider style=”normal” top=”20″ bottom=”20″]

How can military vehicles be made more discreet on the ground? This is the question addressed by the Caméléon project of the Directorate General of Armaments (DGA), involving the Nexter group and IMT Atlantique in the framework of the Télécom & Société Numérique Carnot Institute. Taking inspiration from the famous lizard, researchers are developing a high-tech skin able to replicate surrounding colors and patterns.

 

Every year on July 14, the parades on the Champs Élysées show off French military vehicles in forest colors. They are covered in a black, green and brown pattern for camouflage in the wooded landscapes of Europe. Less frequently seen on television are the specific camouflages for other parts of the world. Leclerc tanks, for example, may be painted in ochre colors for desert areas, or grey for urban operations. However, despite this range of camouflage patterns, military vehicles are not always very discreet.

"There may be significant variations in terrain within a single geographical area, making the effectiveness of camouflage variable," explains Éric Petitpas, Head of new protection technologies specializing in land defense systems at Nexter Group. Adjusting the colors to the day's mission is not an option: each change of paint requires the vehicle to be immobilized for several days. "It slows down reaction time when you want to dispatch vehicles for an external operation," underlines Éric Petitpas. To overcome this lack of flexibility, Nexter has partnered with several specialized companies and laboratories, including IMT Atlantique, to help develop a dynamic camouflage. The objective is to equip vehicles with technology that can adapt to their surroundings in real time.

This project, named Caméléon, was initiated by the Directorate General of Armaments (DGA) and "is a real scientific challenge," explains Laurent Dupont, a researcher in optics at IMT Atlantique (a member of the Télécom & Société Numérique Carnot Institute). For the scientists, the challenge lies first and foremost in fully understanding the problem. Stealth is based on the enemy's perception, so it depends on technical aspects (contrast, colors, brightness, spectral band, pattern, etc.). "We have to combine several disciplines, from computer science to colorimetry, to understand what will make a dynamic camouflage effective or not," the researcher continues.

Stealth tiles

The approach adopted by the scientists is based on the use of tiles attached to the vehicles. A camera is used to record the surroundings, and an image analysis algorithm identifies the colors and patterns representative of the environment. A suitable pattern and color palette are then displayed on the tiles covering the vehicle to replicate the colors and patterns of the surrounding environment. If the vehicle is located in an urban environment, for example, “the tiles will display grey, beige, pink, blue etc. with vertical patterns to simulate buildings in the distance” explains Éric Petitpas.
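As an illustration of the kind of image analysis described here (and only an illustration: Nexter's actual algorithms are not public), a minimal sketch of extracting a representative color palette from a camera frame could rely on k-means clustering of the pixels:

```python
# Minimal sketch of dominant-color extraction; not the Caméléon algorithm itself.
import numpy as np
from sklearn.cluster import KMeans

def dominant_palette(frame_rgb: np.ndarray, n_colors: int = 5) -> np.ndarray:
    """Return n_colors representative RGB colors of a camera frame (H x W x 3)."""
    pixels = frame_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    # Each cluster center is an 'average' color of one main region of the scene
    return km.cluster_centers_.astype(np.uint8)

# Synthetic 'urban' frame: blue-grey sky, beige buildings, grey road
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[:40] = (110, 160, 210)
frame[40:90] = (200, 180, 150)
frame[90:] = (90, 90, 95)
print(dominant_palette(frame, n_colors=3))
```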

To change the color of the tiles, the researchers use selective spectral reflectivity technology. Contrary to what could be expected, it is not a question of projecting an image onto the tile as though it were a TV screen. “The color changes are based on a reflection of external light, selecting certain wavelengths to display as though choosing from the colors of the rainbow,” explains Éric Petitpas. “We can selectively choose which colors the tiles will reflect and which colors will be absorbed,” says Laurent Dupont. The combination of colors reflected at a given point on the tile generates the color perceived by the onlooker.

A prototype of the new “Caméléon” camouflage was presented at the 2018 Defense Innovation Forum

This technology was demonstrated at the 2018 Defense Innovation Forum dedicated to new defense technology. A small robot measuring 50 centimeters long and covered in a skin of Caméléon tiles was presented. The consortium now wants to move on to a true-to-scale prototype. In addition to needing further development, the technology must also adapt to all types of vehicles. “For the moment we are developing the technology on a small-scale vehicle, then we will move on to a 3m² prototype, before progressing to a full-size vehicle,” says Éric Petitpas. The camouflage technology could thus be quickly adapted to other entities – such as infantrymen, for example.

New questions are emerging as the technology prototypes prove their worth, opening up new opportunities to further the partnership between Nexter and IMT Atlantique that was set up in 2012. Caméléon is the second upstream study program of the DGA in which IMT Atlantique has taken part. On the technical side, researchers must now ensure the scaling up of tiles capable of equipping life-size vehicles. A pilot production line for these tiles, led by Nexter and E3S, a Brest-based SME, has been launched to meet the program's objectives. The economic aspect should not be forgotten either: covering a vehicle with tiles will inevitably be more expensive than painting it. However, the ability to adapt the camouflage to all types of environment is a major operational advantage, and one that does not require immobilizing the vehicle to repaint it. There are plenty of new challenges to be met before we see stealth vehicles in the field… or rather, not see them!

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]


Virtual reality improving the comfort of people with visual impairments

People suffering from glaucoma or retinitis pigmentosa develop increased sensitivity to light and gradually lose their peripheral vision. These two symptoms cause discomfort in everyday life and limit the social activity of the people affected. The AUREVI research project involving IMT Mines Alès aims to improve the quality of life of visually-impaired people with the help of a virtual reality headset.

 

Retinitis pigmentosa and glaucoma are degenerative diseases of the eye. While they have different causes, they result in similar symptoms: increased sensitivity to changes in light and gradual loss of peripheral vision. The AUREVI research project was launched in 2013 in order to help overcome these deficiencies. Over 6 years, the project has brought together researchers from IMT Mines Alès and Institut ARAMAV in Nîmes, which specializes in rehabilitation and physiotherapy for people with visual impairments. Together, the two centers are developing a virtual reality-based solution to improve the daily lives of patients with retinitis pigmentosa or glaucoma.

“For these patients, any light source can cause discomfort” explains Isabelle Marc, a researcher in image processing at IMT Mines Alès, working on the AUREVI project. A computer screen, for example, can dazzle them. When walking around outdoors, the changes in light between shady and bright areas, or even breaks in the clouds, can be violently dazzling. “For visually impaired people, it takes much longer for the eye to adjust to different levels of light than it does for healthy people” the researcher adds. “While it usually takes a few seconds before we can open our eyes after being dazzled or to be able to see better in a shady area, these patients need several tens of seconds, sometimes several minutes.

Controlling light

With the help of a virtual reality headset, the AUREVI team offers visually impaired people with retinitis or glaucoma greater control over light levels. Cameras capture the surroundings and display, on the screens of the headset, the image that the eyes would normally see. When there is a sudden change in the light, image processing algorithms adjust the brightness of the image in order to keep it constant for the patient's eyes. For the researchers, the main difficulty with this tool is latency. "We would like it to be effective in real time. We are aiming for the shortest delay between what the cameras capture and what the user really sees on the screens of the headset," says Isabelle Marc. The team is therefore using logarithmic cameras, which record HDR (High Dynamic Range) images directly, thus reducing the processing time.
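The brightness-stabilization idea can be illustrated with a small sketch (a toy model under simple assumptions, not the AUREVI implementation): each HDR frame is rescaled so that its mean luminance drifts toward a per-patient comfort level, with temporal smoothing so that sudden changes in the scene are attenuated rather than passed straight to the eyes.

```python
# Toy model of brightness stabilization; not the AUREVI algorithm itself.
import numpy as np

class BrightnessStabilizer:
    def __init__(self, target_luminance: float, smoothing: float = 0.8):
        self.target = target_luminance   # comfort level found by per-patient calibration
        self.smoothing = smoothing       # temporal smoothing to avoid flicker
        self.gain = 1.0

    def process(self, hdr_frame: np.ndarray) -> np.ndarray:
        """hdr_frame: HDR luminance image (floats); returns a display image in [0, 1]."""
        mean_lum = float(hdr_frame.mean()) + 1e-6
        instant_gain = self.target / mean_lum
        # Low-pass filter the gain so a sudden burst of light is dimmed gradually
        self.gain = self.smoothing * self.gain + (1 - self.smoothing) * instant_gain
        return np.clip(hdr_frame * self.gain, 0.0, 1.0)

# Stepping out of shade into bright light: the displayed brightness is pulled
# back toward the comfort target over successive frames instead of dazzling.
stab = BrightnessStabilizer(target_luminance=0.3)
bright_scene = np.full((240, 320), 0.9)
for _ in range(5):
    out = stab.process(bright_scene)
print(round(float(out.mean()), 2))
```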

The headset is designed to replace the dark glasses usually worn by people with this type of pathology. “It’s a pair of adaptive dark sunglasses. The shade varies pixel by pixel” Isabelle Marc explains. An advantage of this tool is that it can be calibrated to suit each patient. Depending on the stage of the retinitis or glaucoma, the level of sensitivity will be different. This can be accounted for in the way the images are processed. To do so, scientists have developed a specific test for evaluating the degree of light a person can bear. “This test could be used to configure the tool and adapt it optimally for each user” says the researcher.

The first clinical trials of the headset began with fifteen people in 2016. The initial goal was to measure the light levels each person considers comfortable and to gather feedback on the comfort of the tool, before moving on to evaluating the service provided to people with visual impairments. To do this, the researchers vary the brightness of a screen while patients wear the headset and give their feedback. Isabelle Marc reports that "the initial feedback from patients shows that they prefer the headset over other tools for controlling light levels". However, the testers also commented on the bulkiness of the device. "For now, we are working with the large headsets available on the market, which are not designed to be worn when you are walking around," the researcher concedes. "We are currently looking for industrial partners who could help us make the shift to a pre-series prototype more suitable for walking around with."

Showing what patients can’t see

Being able to control light levels is a major improvement in terms of visual comfort for patients, but the researchers want to take things even further. The AUREVI project aims to compensate for another symptom caused by glaucoma and retinitis: the loss of stereoscopic vision. Patients gradually lose degrees of visual field, down to 2 or 3 degrees at around 60 years old, sometimes progressing to complete blindness. Before this last stage comes an important step in the progression of the handicap, as Isabelle Marc describes: "Once the vision goes below 20 degrees of visual field, the images from the two eyes no longer overlap, and the brain cannot reconstruct the 3D information."


Using image processing techniques, the AUREVI project hopes to give people with visual impairments indications about nearby objects

 

Without stereoscopic vision, the patient can no longer perceive depth. One of the future steps of the project will be to incorporate a feature into the headset to compensate for this deficiency in three-dimensional vision. The researchers are working on methods for communicating information about depth. They are currently looking at the idea of displaying color codes: a close object would be colored red, for example, and a distant object blue. As well as improving comfort, this feature would also provide greater safety. Patients suffering from an advanced stage of glaucoma or retinitis do not see objects above their head which could hurt them, nor those at their feet which are a tripping hazard.
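A minimal sketch of the color-coding idea mentioned above (an illustration under simple assumptions, not the project's implementation) could map a depth image to a red-to-blue overlay, with nearby obstacles shown in red and distant ones in blue:

```python
# Illustrative depth-to-color mapping; not the AUREVI implementation.
import numpy as np

def depth_to_color(depth_m: np.ndarray, near: float = 0.5, far: float = 5.0) -> np.ndarray:
    """Map a depth map in meters to RGB: near obstacles in red, distant ones in blue."""
    t = np.clip((depth_m - near) / (far - near), 0.0, 1.0)   # 0 = near, 1 = far
    rgb = np.zeros(depth_m.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = ((1.0 - t) * 255).astype(np.uint8)   # red channel for close objects
    rgb[..., 2] = (t * 255).astype(np.uint8)           # blue channel for far objects
    return rgb

# An obstacle at 1 m appears mostly red, one at 4 m mostly blue
print(depth_to_color(np.array([[1.0, 4.0]])))
```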

Losing information about their surroundings gives people with visual impairments the feeling of being in danger, a feeling that increases as the symptoms get worse. Combined with growing discomfort at changes in light levels, this fear can often lead to social exclusion. "Patients tend to go out less, especially outdoors," notes Isabelle Marc. "On a professional level, their refusal to participate in activities outside of work with their colleagues is often misunderstood. They may still have good sight for reading, for example, and so people with normal sight may have a hard time understanding their handicap. Therefore, their social circle gradually shrinks." The headset developed by the AUREVI project is an opportunity to improve social integration for people with visual impairments. For this reason, it receives financial support from several companies as part of their disability and diversity missions, in particular Sopra Steria, Orano, Crédit Agricole and Thales. The researchers rely on this disability-related support to develop their project.


Qualcomm, EURECOM and IMT joining forces to prepare the 5G of the future

5G is moving into the second phase of its development, which will bring a whole host of new technological challenges and innovations. Research and industry stakeholders are working hard to address the challenges posed by the next generation of mobile communication. In this context, Qualcomm, EURECOM and IMT recently signed a partnership agreement that also includes France Brevets. What is the goal of the partnership? To better prepare 5G standards and move technologies out of the laboratory as quickly as possible. Raymond Knopp, a researcher in communication systems at EURECOM, presents the content and challenges of this collaboration.

 

What do you gain from the partnership with Qualcomm and France Brevets?

Raymond Knopp: As researchers, we work together on 5G technologies. In particular, we are interested in those which are closely examined by 3GPP, the international standards organization for telecommunication technologies. In order to apply our research outside our laboratories, many of our projects are carried out in collaboration with industrial partners. This gives us more relevance in dealing with the real-world problems facing technology. Qualcomm is one of these industrial partners and is one of the most important companies in the generation of intellectual property in 4G and 5G systems. In my view, it is also one of the most innovative in the field. The partnership with Qualcomm gives us a more direct impact on technology development. With additional support from France Brevets, we can play a more significant role in defining the standards for 5G. We have a lot to learn from the intellectual property generation, and these partners provide us with this knowledge.

What technologies are involved in the partnership?

RK: 5G is currently moving into its second phase. The first phase was aimed at introducing new network architecture aspects and new frequencies; this meant increasing the available frequency bands by about five or six times. This phase is now operational, so further innovations there are secondary. The technologies we are working on now are mainly for the second phase, which is oriented more towards private networks, applications involving machines and vehicles, new network control systems, and so on. Priority will be given to network slicing and software-defined networking (SDN) technologies, for example. This is also the phase in which ultra-reliable, low-latency communication will be developed. This is the type of technology we are working on under this partnership.

Are you already thinking of the implementation of the technologies developed in this second phase?

RK: For now, our work on implementation is very much aimed at the first-phase technologies. We are involved in the H2020 projects 5GENESIS and 5G-EVE, conducting 5G trials on both the mobile terminal and the network side. These trials involve our OpenAirInterface platform. For now, the implementation of second-phase technologies is not a priority. Nevertheless, intellectual property and any standards generated in the partnership with Qualcomm could potentially undergo implementation tests on our platform. However, it will be some time before we reach that stage.

What does a partnership with an industrial group like this represent for an academic researcher like yourself?

RK: It is an opportunity to close the loop between research, prototyping, standards and industrialization, and to see our work applied directly to the 5G technologies we will be using tomorrow. In the academic world in general, we tend to be uni-directional: we write publications, and some of them contain material that could be included in standards, but this is not pursued and the results are simply left accessible to everyone. Of course, companies then go on to use them without our involvement, which is a pity. By setting up partnerships like this one with Qualcomm, we learn to appreciate the value of our technologies and to develop them together. I hope it will encourage more researchers to do the same. The field of academic research in France needs to be aware of the importance of closely following the standards and industrialization process!

 


Another type of platform is possible: a cooperative approach

Article written in partnership with The Conversation France
By Mélissa Boudes (Institut Mines-Télécom Business School), Guillaume Compain (Université Paris Dauphine – PSL), Müge Ozman (Institut Mines-Télécom Business School)

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]S[/dropcap]o-called collaborative platforms have been very popular since their appearance in the late 2000s. They are now heavily criticized, driving some of their users to take collective action. There is growing concern surrounding the use of personal data, but also the ethics of algorithms. Beyond their technological functioning, the broader socio-economic model of these platforms is hotly debated. They are designed to generate value for their users by organizing peer-to-peer transactions, and some of the more dominant platforms charge high fees for their role as intermediary. The platforms are also accused of dodging labor laws through their heavy use of independent workers, of practicing tax optimization, and of contributing to the growing commodification of our everyday lives.

From collaboration to cooperation

Though it is easy to criticize, creating alternatives is far more complicated. However, some initiatives are emerging. The international movement towards more cooperative platforms, launched in 2014 by Trebor Scholz at the New School in New York, promotes the creation of more ethical, fairer platforms. The idea is simple: why would platform users delegate intermediation to third-party companies which gain from the economic value of their exchanges when they could manage the platforms themselves?

The solution would be to adopt a cooperative model; in other words, to create platforms that are owned by their users and apply a democratic operating model in which each co-owner has a voice, regardless of their capital contribution. In addition, a proportion of the profits must be reinvested into the project, and no capital gain can be made by selling shares, which prevents financial speculation.

Many experiments are underway around the world. For instance, Fairmondo, a German marketplace for fair trade products, allows users a share in the cooperative. Though not exhaustive, the list drawn up by the Platform Cooperativism Consortium gives an overview of the scope of the movement.

Although the creators of cooperative platforms are seeking to create alternatives to a platform economy that has become concentrated, or even oligopolistic in some sectors, they come up against many challenges, particularly in terms of governance, economic models and technological infrastructure.

Many challenges

Based on our work on action research in the French network of cooperative platforms, Plateformes en communs, and an analysis of various foreign cases, we have identified a number of characteristics and limitations of alternative platforms.

Fairmondo, a German marketplace for fair trade products. Screenshot.

 

While they share a common opposition to the major commercial platforms, there is no typical model for cooperative platforms, but rather a multitude of experiments still in their early stages, with very different structures and modes of operation. Some were a natural progression from the movement against uberization, like CoopCycle, while others were created by digital entrepreneurs searching for meaning, or by social and solidarity economy (ESS) organizations undergoing modernization.

These cooperative platforms, which have high social and economic ambitions and no pre-defined future, face many challenges. Here we will focus on three major ones: finding durable economic and financial models, uniting communities, and mobilizing supporters and partners.

Making economic models durable

In a highly competitive context, there is no margin for error for alternative platforms. To attract users, they have to offer high-quality services: an exhaustive offering, efficient matching, simple use and attractive aesthetics. However, it is difficult for cooperative platforms to attract investors since, as cooperatives or associations, they are generally not very lucrative. In addition, some opt to open up their assets, for instance by allowing open access to their source code.

But while the creators of alternative digital platforms are entrepreneurs, their economic models remain more a matter of iteration than of a business plan. Many cooperative platforms, still in the development stage, rely primarily on voluntary work (made possible by outside income: second jobs, personal savings, unemployment benefits, social welfare payments), which may run out if the platform does not manage to pay salaries and/or attract new contributors.

Creating a community

Creating a committed community to support the platform is essential, both for its daily operations and for its development, especially since the platform economy relies on network effects: the more people or organizations a platform brings together, the more new ones it will attract, because it offers greater opportunities to its users. It is therefore difficult for alternative platforms to penetrate sectors where there are already dominant actors.

Cooperative platforms try to differentiate themselves by creating communities which have input into the way the platform is run. Some, like Open Food France, specializing in local food distribution networks, have gone as far as broadening their community of cooperators to include public and private partners, and end consumers. This gives them a way to express their societal aspirations through their economic choices.

The founders of Oiseaux de passage, a cooperative platform offering local tourism services, also opted for a broader view of membership. They chose the legal status of Société coopérative d’intérêt collectif (a collective-interest cooperative), enabling several categories of stakeholders (tourism professionals, inhabitants, tourists) to hold shares in a collective company.

These cooperative platforms thus adopt an ecosystem-based approach, including all stakeholders that are naturally drawn to them. However, for the moment, user commitment remains low and project leaders are often overworked.

Stopping the movement being hijacked

Cooperative platforms are still young and struggle to gain the support they so desperately need. Financially speaking, their still-fragile models are not enough to attract public organizations and ESS structures, which prefer to work with more stable, profitable commercial platforms. The other obstacle is political in nature: in the fight against uberization, cooperative platforms present themselves as alternatives, whereas for the time being public authorities seem to favor social dialog with the dominant platforms.

Cooperative platforms are thus largely left to their own devices, and compensate for the lack of support by trying to join forces through peer networks, such as the Platform Cooperativism Consortium on an international scale, or Plateformes en Communs in France. By uniting, cooperative platforms have managed to attract media attention, but also attention from one of their most symbolic "enemies". In May 2018, the Platform Cooperativism Consortium announced that it had received a $1 million grant from… the Google Foundation. The grant is aimed essentially at supporting the creation of cooperative platforms in developing countries.

Naturally, the announcement created quite a stir in the movement, some people condemning a symbolically unacceptable contradiction, others expressing concern that the model might be appropriated by Google. In any case, this event highlights the lack of support for the movement, pushed into signing agreements which go against its very nature.

It therefore seems essential, both for the survival of cooperative platforms and more broadly for the existence of alternatives to the platforms currently crushing the market, that public institutions and ESS structures actively support these developing projects: for example through financing measures (especially venture capital), specialized support structures, commercial partnerships, equity participation, or even the joint construction of platforms based on local needs. Without political input and innovation in practices, the unchallenged domination of global platforms seems inevitable.


Mélissa Boudes, Associate Professor of Management, Institut Mines-Télécom Business School; Guillaume Compain, Doctoral Student in Sociology, Université Paris Dauphine – PSL; and Müge Ozman, Professor of Management, Institut Mines-Télécom Business School.

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


The unintelligence of artificial intelligence

Despite the significant advances made by artificial intelligence, it still struggles to imitate human intelligence. Artificial intelligence remains focused on performing tasks without understanding the meaning of its actions, and its limitations become evident whenever the context changes or when the time comes to scale up. Jean-Louis Dessalles outlines these problems in his latest book, Des intelligences très artificielles (Very Artificial Intelligence). The Télécom Paris researcher also suggests avenues of research for creating truly intelligent artificial intelligence. He presents some of his ideas in the following interview with I’MTech.

 

Can artificial intelligence (AI) understand what it is doing?

Jean-Louis Dessalles: It has happened, yes. One example is the SHRDLU program, invented by Terry Winograd during his thesis at MIT in 1970. The program simulated a robot that could stack blocks and talk about what it was doing at the same time. It was incredible, because it was able to justify its actions. After it made a stack, the researchers could ask it why it had moved the green block, which they had not asked it to move. SHRDLU would reply that it was to make space in order to move the other blocks around more easily. That was almost 50 years ago, and it remains one of the rare isolated cases of a program capable of understanding its own actions. These days, the majority of AI programs cannot explain what they are doing.

Why is this an isolated case?

JLD: SHRDLU was very good at explaining how it stacked blocks in a virtual world of cubes and pyramids. When the researchers wanted to scale the program up to a more complex world, it was considerably less effective. This type of AI became something able to carry out a given task without being able to understand it. Recently, IBM released Project Debater, an AI program that competes in debating contests. It is very impressive, but if we analyze what the program is doing, we realize that it understands very little. The program searches the Internet, extracts phrases that are logically linked, and strings them together into an argument. When the audience listens, it has the illusion of a logical construction, but the speech is simply a compilation of phrases drawn from a superficial analysis. The AI program does not understand the meaning of what it says.

IBM’s Project Debater speaking on the statement “We should subsidize preschools”
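
To make concrete what a “compilation of phrases” can look like, here is a deliberately naive sketch (my own toy illustration, not IBM’s actual system): candidate sentences are ranked by mere word overlap with the topic and then chained together, so the result can read plausibly even though the program manipulates strings rather than ideas.

```python
import re

# Deliberately naive "argument assembly" sketch (hypothetical, not IBM's pipeline):
# rank stored sentences by word overlap with the topic, then concatenate the best ones.

TOPIC = "we should subsidize preschools"

CORPUS = [  # a tiny stand-in corpus; a real system would search the web instead
    "Preschools prepare children for primary school.",
    "Many economists argue we should subsidize early education.",
    "Subsidies reduce the cost of childcare for families.",
    "The weather was pleasant last weekend.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word tokens, punctuation ignored."""
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap(sentence: str, topic: str) -> int:
    """Number of topic words that also appear in the sentence."""
    return len(tokens(sentence) & tokens(topic))

def assemble_argument(topic: str, corpus: list[str], k: int = 3) -> str:
    """Chain together the k sentences that share the most words with the topic."""
    ranked = sorted(corpus, key=lambda s: overlap(s, topic), reverse=True)
    return " ".join(ranked[:k])

print(assemble_argument(TOPIC, CORPUS))
```

The output reads like the beginning of an argument only because the reader supplies the logical glue; nothing in the program represents the ideas being “argued”.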

 

Does it matter whether AI understands, as long as it is effective?

JLD: Systems that do not understand end up making mistakes that humans would not make. Automated translation systems are extremely useful, for example, yet they can stumble over simple words because they do not grasp implicit meaning that even a child could infer from the context. The AI behind these programs is very effective as long as it stays within a narrow, well-defined domain, like SHRDLU. As soon as you put it in an everyday situation, where it has to take context into account, it turns out to be limited, because it does not understand the meaning of what we are asking it.

Are you saying that artificial intelligence is not intelligent?

JLD: There are two fundamental, opposing visions of AI today. On the one hand, a primarily American view that focuses on performance; on the other, Turing’s view: if an AI program cannot explain what it is doing or interact with me, I will not call it “intelligent”. From a utilitarian point of view, the first vision is successful in many ways, but it runs into major limitations, especially in problem-solving. Take the example of a connected building or house. AI can make optimal decisions, but if those decisions are incomprehensible to humans, people will consider the AI stupid. We want machines to be able to reason sequentially, as we do: I want to do this, so I have to change that; and if that creates another problem, I will then change something else. The machine’s multi-criteria optimization, by contrast, sets all the parameters at the same time, which is incomprehensible to us. It may well be effective, but ultimately the human will judge whether the decision is appropriate or not, according to their values and preferences, including their desire to understand the decision.
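
As a rough illustration of the contrast described above, here is a toy smart-room example (my own, with made-up parameters, not taken from the book): a joint search evaluates every combination of two coupled settings at once, while a human-style loop adjusts one setting at a time and looks at the effect before touching the other.

```python
# Toy smart-room sketch (hypothetical comfort model, for illustration only):
# two coupled settings, heating level and window opening, both on a 0-10 scale.

def discomfort(heating: int, window: int) -> float:
    """Lower is better: aim for about 21 degrees and some fresh air,
    but opening the window also cools the room, so the settings interact."""
    temperature = 15 + heating - 0.5 * window
    stuffiness = max(0, 5 - window)
    return (temperature - 21) ** 2 + stuffiness

# "Machine" style: evaluate every combination at once and keep the global optimum.
joint_best = min(((h, w) for h in range(11) for w in range(11)),
                 key=lambda hw: discomfort(*hw))

# "Human" style: change one setting, observe, then change the other, a few times over.
h, w = 0, 0
for _ in range(3):
    h = min(range(11), key=lambda x: discomfort(x, w))   # first tune the heating...
    w = min(range(11), key=lambda x: discomfort(h, x))   # ...then the window
print("joint optimum:", joint_best, "sequential result:", (h, w))
```

In this toy run the joint search lands on a slightly better combination than the step-by-step adjustment, but each move of the sequential version is easy to explain, which is roughly the trade-off at stake here.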

Why can’t a machine understand the meaning of the actions we ask of it?

JLD: Most of today’s AI programs are based on numerical techniques, which do not address the question of representation. If I have a problem, I set the parameters and variables, and the neural network gives me a result based on a calculation I cannot understand. There is no way to incorporate concepts or meaning. Work is also being done on ontologies, where meaning is represented in the form of preconceived structures in which everything is explicit: a particular idea or concept is paired with a linguistic entity. For example, to give a machine the meaning of the word “marriage”, we associate it with a conceptual description based on a link between a person A and a person B, from which the machine can work out that there is geographical proximity between these two people, that they live in the same place, and so on. Personally, I do not believe that ontologies will bring us closer to an AI that understands what it is doing, and thus one that is truly intelligent in Turing’s sense.
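
As a purely illustrative sketch (not any particular ontology language, nor the researcher’s own formalism), an explicit representation of “marriage” could look like a handful of hand-written relations from which a program reads off facts such as a shared residence.

```python
# Hand-built micro-"ontology" (hypothetical formalism, for illustration only):
# meaning is encoded as explicit, preconceived relations between concepts.

FACTS = {
    ("marriage_1", "type"): "Marriage",
    ("marriage_1", "spouse_a"): "person_A",
    ("marriage_1", "spouse_b"): "person_B",
    ("person_A", "lives_in"): "Nantes",
    ("person_B", "lives_in"): "Nantes",
}

def shared_residence(marriage: str, facts: dict) -> bool:
    """Explicit rule attached to the concept 'Marriage': read off both
    spouses and compare their declared place of residence."""
    a = facts[(marriage, "spouse_a")]
    b = facts[(marriage, "spouse_b")]
    return facts[(a, "lives_in")] == facts[(b, "lives_in")]

print(shared_residence("marriage_1", FACTS))   # True: a rigid, hand-coded inference
```

Every nuance (spouses living apart, differing cultural norms, and so on) would have to be added by hand as yet another explicit structure, which is precisely the scaling problem raised in the next answer.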

What do you think is the limitation of ontologies?

JLD: They too are difficult to scale up. For the example of marriage, the challenge lies in giving the machine the full range of meanings that humans attribute to this concept. Depending on an individual’s values and beliefs, their idea of marriage will differ. Making AI understand this requires building representations that are complex, sometimes too complex. Humans, by contrast, grasp a concept and its subtleties very quickly, from very little initial description. Nobody spends hours on end teaching a child what a cat is. The child works it out alone by observing just a few cats and finding what they have in common. For this, we use special cognitive mechanisms, including a search for simplicity, which enable us to reconstruct the missing part of a half-hidden object or to pick the right meaning of a word that has several.

What does AI lack in order to be truly intelligent and acquire this implicit knowledge?

JLD: Self-observation requires contrast, which is something AI lacks. The meaning of words changes over time and with the situation. If I say to you, “put this in the closet”, you will know which piece of furniture to turn to, even though the closet in your office and the one in your bedroom look nothing alike, either in shape or in what they contain. This is what allows us to understand very vague concepts like the word “big”. I can talk about “big bacteria” or a “big galaxy” and you will understand me, because you know that the word “big” has no absolute meaning: it rests on a contrast between the designated object and the typical corresponding object in that context. Machines do not yet know how to do this. They might record “big” as a characteristic of a galaxy, but using “big” to describe bacteria would make no sense to them. They need to be able to make contrasts.
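
A minimal sketch of this kind of contrast (my own toy example, with assumed “typical” sizes): “big” is not an absolute threshold but a comparison between an object and the typical member of its own category.

```python
# Toy contrast-based reading of "big" (the typical sizes below are rough, assumed values).

TYPICAL_SIZE_M = {
    "bacterium": 2e-6,     # a couple of micrometres
    "cat": 0.45,           # nose to base of tail
    "galaxy": 9.5e20,      # roughly 100,000 light-years across
}

def is_big(kind: str, size_m: float, factor: float = 3.0) -> bool:
    """'Big' means noticeably larger than the typical member of the
    same category, not larger than some absolute threshold."""
    return size_m > factor * TYPICAL_SIZE_M[kind]

print(is_big("bacterium", 1e-5))   # True: enormous, as bacteria go
print(is_big("galaxy", 1e20))      # False: small, as galaxies go
```

An absolute threshold on size alone would call every galaxy “big” and no bacterium “big”; the contrast with the category’s typical member is what the word actually conveys.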

Is this feasible?

JLD: Quite likely, yes. But we would have to enrich today’s numerical techniques to get there, and AI designers are light years away from addressing this type of question. What they want to figure out is how to improve the performance of their multi-layer neural networks. They do not see the point of striving towards human intelligence. IBM’s Project Debater is a perfect illustration: it is above all a matter of classification, with no ability to make contrasts. On the face of it, it is very impressive, but it is not as powerful as human intelligence, with its cognitive mechanisms for extrapolating and constructing arguments. The IBM program contrasts phrases according to the words they contain, whereas we contrast them according to the ideas they express. To be truly intelligent, AI will need to go beyond simple classification and attempt to reproduce, rather than mimic, our cognitive mechanisms.