
Smart homes: A world of conflict and collaboration

The progress made in decentralized artificial intelligence means that we can now imagine what our future homes will be like. The services offered by a smart home to its users are likely to be modeled on appliances which communicate and cooperate with each other autonomously. Today, this approach is considered the best way to control the dynamic, data-rich household environment. Olivier Boissier and Gauthier Picard, researchers in AI at Mines Saint-Étienne, are currently working on the technology. In this interview for I’MTech, they explain the interest in the decentralized approach to AI as well as how it works, through concrete examples of how it is used in the home.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

Can we think of smart homes as a simple network of connected objects?

Gauthier Picard: A smart home is made up of an assortment of fairly different objects. This is very different from industrial sensor networks, in which devices are designed to have similar memory capacities and identical structures. In a house, we cannot put the same computing capacity in a light bulb as in an oven, for example. If the occupant expects a variety of operating scenarios in which the objects coordinate with one another, we must be able to take the objects’ differences into account. A smart home is also a very dynamic environment. You must be able to add things such as a smart light bulb or a Raspberry Pi-type nanocomputer to control the blinds whenever you want, without degrading the performance for the user.

So, how do you make a house ‘smart’ despite all this complexity?  

Olivier Boissier: We use what we call a multi-agent approach. This is central to our discipline of decentralized artificial intelligence. We use the term ‘decentralized’ instead of ‘distributed’ to really highlight that to make a house ‘smart’, we need to do more than just distribute knowledge between the different devices. The decision also needs to be decentralized. We use the term agent to describe an object, a service which manages several objects, or a service which itself manages several services. Our aim is to make these agents organize themselves via rules which allow them to exchange information in the best way possible. But not all household objects will become agents because, as Gauthier said, some objects don’t have sufficient computing capacity and are unable to organize themselves. Therefore, one of the biggest questions we ask ourselves is which objects should remain simple objects which perceive or execute things, such as a sensor or a small LED, and which should become agents.

Can you show how this approach works with a concrete example of how it’s used in a smart home?

GP: If we again use light bulbs and light as an example, we can imagine a user asking for the light level in their smart home to fall by 40% when they’re in their living room after 9pm. The user doesn’t care which object decides or acts to carry out the request; what interests them is having less light. It’s up to the global system to optimize the decisions: deciding which light bulb to turn off, whether the TV also needs to be switched off as it emits light even when it’s not being used, or whether the blinds can be left open because it’s still daylight outside. All of these decisions need to be made collectively, potentially with constraints set by the occupant, who might want to lower the electricity bill, for example. No centralized entity manages all of these decisions; instead, each element reacts depending on what the other elements do. If it is summer, and therefore still light outside, does the house need more lights on? If it does, the agents will first turn on the bulbs which consume the least energy and emit the most light. If this is not enough, other agents will turn on other light bulbs.
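
To make the idea concrete, here is a minimal sketch of this kind of cooperative dimming; the agent names, lumen and wattage figures are invented for illustration and this is not the researchers’ actual system:

# Minimal sketch of cooperative dimming: hypothetical agents and values,
# not the actual multi-agent platform described in the interview.

class LightAgent:
    def __init__(self, name, lumens, watts):
        self.name = name        # e.g. a bulb or the TV backlight
        self.lumens = lumens    # light it can contribute when switched on
        self.watts = watts      # energy it costs
        self.on = False

def reach_target(agents, target_lumens, daylight_lumens=0.0):
    """Each agent switches on only if the target is still unmet,
    starting with the most efficient sources (fewest watts per lumen)."""
    current = daylight_lumens
    for agent in sorted(agents, key=lambda a: a.watts / a.lumens):
        if current >= target_lumens:
            break
        agent.on = True
        current += agent.lumens
    return current

agents = [LightAgent("ceiling bulb", 800, 9),
          LightAgent("desk lamp", 400, 5),
          LightAgent("TV backlight", 150, 30)]
# After 9 pm the occupant wants 40% less light than the usual 1000-lumen setting.
reached = reach_target(agents, target_lumens=0.6 * 1000, daylight_lumens=200)
print([a.name for a in agents if a.on], reached)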

You said that the decision was not centralized.  Why don’t you just have one decision-making device which manages all the objects?  

GP: The problem with a centralized solution is that everything depends on a single device. With this approach, it is very likely that all information will be stored in the cloud. If the network is faulty, or if there is too much activity, then performance will be affected. The advantage of the multi-agent approach is that it keeps data close to where the decision is being made. Since everything is done locally, decisions take less time and the network has better security. The system is therefore more resilient and can respond better to the privacy requirements of the users.

OB: But we are not saying that centralized solutions are necessarily worse. The multi-agent approach requires efforts to coordinate the objects and services.  It should be preferred in complex environments, where it is necessary to have data close to where the decision is being made.  If a centralized management algorithm works for a precise and simple action, then that’s fine. The multi-agent approach becomes interesting when there are large quantities of data which need to be processed quickly. This is the case when a smart home includes several users with multiple, sometimes conflicting, functions.

How can functions become conflicting?

OB: In the case of lighting, a conflicting situation would be if two users in the same room have different preferences. The same agents are asked to carry out two incompatible decision-making processes. This situation can be simplified to a conflict over resources. Conflicts like this have a high chance of occurring because we are in a dynamic environment. The agents make action plans to respond to the user’s demands, but if another user enters the room, the plan will be disrupted. Therefore, conflicts can’t always be predicted in advance; they often only appear when the plan is being executed. In certain cases, simple rules mean that the problem can be resolved quickly. This happens when priority functions, such as emergency assistance or the security of the building, take precedence over entertainment functions. In other cases, ways to resolve conflicts between agents must be created.

GP: Negotiation is a good example of a technique which solves this problem. Because the conflict is a fight over a resource, each agent can place a bid for the functions that it wishes to use. If it wins the bid, it accumulates a debt which prevents it from winning the next one. Over time, the agents regulate themselves. By adopting an economic approach between agents, we can also try to find a Nash equilibrium, in which each agent maximizes its own outcome given what the rest of the agents want to do.
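
As an illustration only (the interview does not detail the actual protocol), a resource conflict can be settled by a simple auction in which the winner accrues a debt that handicaps its future bids, so that no single agent monopolizes the resource; agent names and bid values below are hypothetical:

# Toy auction with debt: a hedged sketch of the negotiation idea described
# above, not the protocol actually used by the researchers.

def run_auctions(agents, rounds):
    """agents: dict name -> base bid (how much the agent values the resource).
    The winner of each round accumulates a debt subtracted from later bids."""
    debt = {name: 0.0 for name in agents}
    winners = []
    for _ in range(rounds):
        # Effective bid = intrinsic value minus accumulated debt.
        effective = {name: bid - debt[name] for name, bid in agents.items()}
        winner = max(effective, key=effective.get)
        debt[winner] += agents[winner]   # winning costs: debt grows by the bid
        winners.append(winner)
    return winners

# Two users' preference agents compete for the same lighting resource.
print(run_auctions({"user_A": 5.0, "user_B": 4.0}, rounds=4))
# Over time the wins alternate instead of always going to the highest bidder.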

How do you make all of these interactions possible between agents?  

OB: There are several ways that agents can self-organize.  It can be done through stigmergy, whereby the agents don’t communicate with each other; they simply act in response to what is happening around them. This can also be in response to information that is placed in their environment by other agents, which allows them to respond to the user’s request. Another method is introducing a global behavior policy for all the agents, such as privacy, and leaving the agents to interpret it in a collective manner.  In this case, the user simply gives their preference on what they want to remain confidential and the agents communicate the information accordingly.  We try to combine these approaches by adding more coordination protocols, such as the conflict management rules which were mentioned above.
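
A rough sketch of the stigmergy idea (hypothetical agents and values, purely for illustration): the agents never address one another directly, they only read and write marks in a shared environment and react to what they find there:

# Stigmergy sketch: coordination through the environment, not through messages.
# Hypothetical agents and values, only meant to illustrate the principle.

environment = {"measured_lux": 0, "target_lux": 600}

def sensor_agent(env):
    # A sensor only perceives and writes its observation into the environment.
    env["measured_lux"] = 900   # pretend reading

def blind_agent(env):
    # The blind actuator reacts to marks left by others, without being told to.
    if env["measured_lux"] > env["target_lux"]:
        env["blinds"] = "half closed"
        env["measured_lux"] -= 250

def bulb_agent(env):
    if env["measured_lux"] > env["target_lux"]:
        env["bulb"] = "off"
        env["measured_lux"] -= 150

for act in (sensor_agent, blind_agent, bulb_agent):
    act(environment)
print(environment)
# {'measured_lux': 500, 'target_lux': 600, 'blinds': 'half closed', 'bulb': 'off'}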

GP: All the agents have access to a definition of their environment. They know the rules and the roles that they can play, and they adapt to this environment.  It’s a bit like when you learn the Highway Code so that you know how to act when you approach a crossroads. You know what can happen and what other motorists are supposed to do.  If you find yourself in a situation which does not follow the usual rules, for example because there is a traffic jam, or an accident has happened right in the middle of the crossroad, you adapt the rules.  Agents should be able to do the same thing. They should react and change the system so that they can organize themselves and respond to the user’s demands.

In regard to this general multi-agent approach for smart houses, what can we already do and what still remains a research question?

OB: Currently, there are many studies which provide theoretically effective solutions to the problems we have raised. We know how to build protocols which satisfy the organization functions, and we know how to configure behavioral policies amongst agents. However, there is still a lot of work to be done to move from theory into practice. When we have a concrete case of a smart house with large amounts of information arriving at any time, the system must be able to process that data. From a practical point of view, we also need to answer fundamental questions about what a smart home should be for the user. Should they have control over absolutely everything, or can we leave the decisions to agents without user control?

Is it realistic to consider the control being taken away from the occupant?  

GP: We have to understand that we aren’t dealing here with neural networks which make decisions like black boxes.  In the case of the multi-agent approach, there is a history of the decisions of the agent, with the plan that it puts in place to reach that decision, and the reasons for creating the plan.  So even if the decision is left to the agent, that doesn’t mean that the user won’t know how it came about. There is still a control mechanism, and the user can change their preferences if they need to. It’s not as if the agent decides without the user having any opportunity to know what it is doing.

OB: It’s an AI approach which is different from what people imagine artificial intelligence to be. It is not yet as well known as the learning approach. Decentralized AI is still difficult for the general public to understand, but there are now more and more uses for the technology, which means that it’s becoming increasingly necessary. Twenty years ago, systems often had a centralized solution. Today, notably with the development of the IoT (Internet of Things), decentralization has become a necessity, and decentralized AI is recognized as the most logical solution for uses such as smart homes or smart cities.


AI lends a hand to help large retailers win back their customers

Large retailers are in search of tools to help them improve the buying experience in their stores and compete more effectively with online shopping. From intelligent guidance for customers in overcrowded supermarkets to optimized selection of the products on the shelves, researchers Marin Lujak and Arnaud Doniec from IMT Lille Douai and Jacky Montmain from IMT Mines Alès are using artificial intelligence to offer a customized experience to clients.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

It’s late Saturday morning and Mrs. Little enters her usual supermarket, eyes fixed on her watch. In front of her, the aisles are overrun with shopping carts overflowing with all different types of products.  She plunges into the crowd, weaving her way between the shoppers and dodging the promotional displays which block the middle of the aisles. Somehow, she manages to pick up two packs of water before fighting her way back to the other end of the store to get some dog food. As her cart becomes heavier, it becomes more difficult to maneuver. She leaves it at the end of the aisle and then wanders around the store, shopping list in hand, in search of cream, but without success.

Mrs. Little’s story is the story of every shopper who endures the ordeal of the supermarket every week. Today, consumers want to save time when they are shopping. The large retail industry is increasingly in competition with online businesses, and stores are focusing all their efforts on keeping their customers. What if one of the solutions was an intelligent consumer guidance system? “We could create an app which would automatically calculate the best possible route around the store, based on a customer’s shopping list, to reduce the amount of time that someone has to spend shopping,” suggest Marin Lujak and Arnaud Doniec, experts in artificial intelligence at IMT Lille-Douai.

Optimizing the route in a crowded supermarket

To provide guidance to customers in real time, the two researchers have devised a multi-agent system. This approach consists in developing distributed artificial intelligence which is made up of several small intelligent devices, called agents, that are distributed throughout the store and interact with each other. A collective intelligence then emerges from the sum of all these interactions.

In this architecture, fixed agents, in the form of proximity sensors or cameras that are linked to the store network, evaluate the density of customers per square meter in the aisles. Other agents installed on the customers’ smart phones use the client’s shopping list, the location of the items, and the current congestion levels to calculate the itinerary for the shortest overall journey around the store. If an aisle is congested, information is sent to the app, which then guides the customer towards another part of the store and makes them return to that aisle later. At the moment, this model is purely theoretical and must be developed to be applied to real cases. For example, product-related constraints could be added: starting with the heaviest or bulkiest products, for example, and finishing with the frozen food, etc.
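
The model itself is not published in detail here, but the core idea can be sketched as a greedy itinerary that penalizes congested aisles and postpones them until later in the visit; the aisle names, positions and densities below are invented, and the researchers’ multi-agent model is more elaborate:

# Hedged sketch of congestion-aware guidance: a greedy route over the shopping
# list in which crowded aisles are visited last. Invented layout and densities.

def plan_route(shopping_list, aisle_of, position_of, density, crowd_threshold=0.5):
    """Return the order in which to pick up items.
    aisle_of: item -> aisle; position_of: aisle -> coordinate along the store;
    density: aisle -> customers per square meter (reported by the fixed agents)."""
    route, postponed = [], []
    # Walk the store from the entrance onwards, uncrowded aisles first.
    for item in sorted(shopping_list, key=lambda it: position_of[aisle_of[it]]):
        if density[aisle_of[item]] > crowd_threshold:
            postponed.append(item)      # aisle congested: come back to it later
        else:
            route.append(item)
    return route + postponed

aisle_of = {"water": "beverages", "dog food": "pets", "cream": "dairy"}
position_of = {"beverages": 10, "pets": 40, "dairy": 25}
density = {"beverages": 0.2, "pets": 0.7, "dairy": 0.3}   # the pets aisle is crowded
print(plan_route(["water", "dog food", "cream"], aisle_of, position_of, density))
# ['water', 'cream', 'dog food']  -> the crowded pets aisle is left for last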

Large retailers are showing a growing interest in the buying experience of their clients. However, even though this solution could improve the buying experience of a customer who is in a rush, it would also mean that the client would spend less time in the store. So why would a store manager want to invest in this tool? For Marin Lujak and Arnaud Doniec, it’s a question of balance. “Each company can take ownership of the tool and include things such as alerts for promotional offers, etc. We can also imagine that companies will be able to make the most of the app by guiding the client towards certain aisles according to their interests.”

Proposing the right product to the right client

Spending less time in a store is good, but it’s even better if the customer finds all the products that they need. Another way to keep consumers is to get to know and adapt to their needs. Since 2010, a researcher at IMT Mines Alès, Jacky Montmain, has been collaborating with the company TRF Retail to develop supervision and diagnostic tools for product performance. “The tool is interesting for large retailers as it allows them to track the performance of a product, or a family of products, in an entire network of stores. It allows them to make comparisons and understand where a product is being sold or not, and why,” explains Montmain. The store manager is then free to adjust the range of products that they offer on their shelves according to this data.

When shops look at their clients, they look at the revenue they bring in before anything else. But how can you identify and distinguish what people are buying in an intelligent manner when a supermarket has between 100,000 and 150,000 products on sale? Jacky Montmain and PhD student Jocelyn Poncelet answer this question by establishing a product classification system. This tree-structured classification is made up of five levels: the product, family of products, department, sub-category and category. For example, a fruit yogurt is part of the yogurt family in the fresh products department and the sub-category of dairy products, which itself is in the category of food. “By providing this intelligent breakdown, we can determine the consumption habits of customers and then compare them with each other. If we simply compare customers’ till receipts, then a fruit yogurt will be as different from a natural yogurt as it is from a laundry detergent,” explains Jocelyn Poncelet.
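
As a purely illustrative sketch (the real classification covers more than 100,000 products; the paths below are invented, not TRF Retail’s actual catalogue), two products can be compared by how many levels of the five-level tree they share, so that two yogurts come out far closer than a yogurt and a detergent:

# Hedged sketch of the five-level product tree and a naive similarity measure:
# the deeper the shared ancestry, the more similar two purchases are judged.

PATHS = {  # category / sub-category / department / family / product
    "fruit yogurt":      ["food", "dairy products", "fresh products", "yogurt", "fruit yogurt"],
    "natural yogurt":    ["food", "dairy products", "fresh products", "yogurt", "natural yogurt"],
    "laundry detergent": ["household", "cleaning", "laundry", "detergents", "laundry detergent"],
}

def similarity(a, b):
    """Fraction of the 5 levels shared from the top of the tree."""
    shared = 0
    for level_a, level_b in zip(PATHS[a], PATHS[b]):
        if level_a != level_b:
            break
        shared += 1
    return shared / 5

print(similarity("fruit yogurt", "natural yogurt"))     # 0.8 -> very similar
print(similarity("fruit yogurt", "laundry detergent"))  # 0.0 -> unrelated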

Customer segmentation has proven its worth thanks to a field experiment conducted in 2 stores (a small shop and another which specializes in DIY). In time, the product classification, and therefore the algorithm for identifying buying habits, should be integrated into the product performance evaluation tool mentioned above. The aim is better stock management for large retailers, and optimized sourcing from suppliers. Finally, this classification will help to strike a balance between the best range of products offered by each store and the real needs of the customers who come there.

Article written for I’MTech by Anaïs Culot.

 


Human-robot collaboration: Industrial utopia or tomorrow’s reality?

In the factories of the future, robots will not replace humans, but instead assist them. Researchers Sotiris Manitsaris from Mines ParisTech and Patrick Hénaff from Mines Nancy, are currently working on a control system design based on artificial intelligence, which can be used by all types of robot. But what is the aim of this type of AI? This technology aims to identify human actions and adapt to the pace of machine operators in an industrial context. Above all, knowledge about humans and their movement is the key to successful collaboration with machines.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

A robotic arm scrubs the bottom of a tank in perfect synchronization with the human hand next to it. They’re moving at the same pace, a rhythm which is dictated by the way the operator moves. Sometimes fast, then slightly slower, this harmony of movement is achieved through the artificial intelligence installed in the anthropomorphic robot. In the neighboring production area, a self-driving vehicle dances around the factory. It dodges every obstacle in its way until it delivers the parts that it’s transporting to a manager on the production line. With impeccable timing, the human operator retrieves the parts and then, once they have finished assembling them, leaves them on the transport tray of the small vehicle, which sets off immediately. During its journey, the machine passes several production areas, where humans and robots carry out their jobs “hand-in-hand”.

Even though anthropomorphic robots like robotic arms or small self-driving vehicles are already being used in some factories, they are not yet capable of collaborating with humans in this way. Currently, robot manufacturers pre-integrate sensors and algorithms into the device, but future interactions with humans are not considered during development. “At the moment, there are a lot of situations where human operators and robots work side-by-side in factories but don’t interact much with each other. This is because robots don’t understand humans when they’re in contact with them,” explains Sotiris Manitsaris, a researcher specializing in collaborative robotics at Mines ParisTech.

Human-robot collaboration, or cobotization, is an emerging field in robotics which redefines the function of robots as working “with” and not “instead of” humans. By offsetting the weaknesses of humans and robots with the strengths of the other, this approach allows factory productivity to increase whilst still retaining jobs. The human workers bring flexibility, dexterity and decision-making, whilst the robots bring efficiency, speed and precision. But to be able to collaborate properly, robots have to be flexible, interactive and, above all, intelligent. “Robotics is the tangible aspect of artificial intelligence. It allows AI to act on the outside world with a constant perception-action loop. Without this, the robot would not be able to operate,” says Patrick Hénaff, specialist in bio-inspired artificial intelligence at Mines Nancy. From the automotive industry to fashion and luxury goods, all sectors are interested in integrating robotic collaboration.

Towards a Successful Partnership Focused on Human Action

Beyond direct interaction between humans and machines, the entire production cycle could become more flexible, depending more on the operator’s pace and the way that they work. “The robot has to respond to the needs of humans but also anticipate their behavior. This allows it to adapt dynamically,” explains Sotiris Manitsaris. For example, on an assembly line in the automotive industry, each task is carried out in a specific time. If the robot anticipates the operator’s movements, then it can also adapt to their speed. This issue has been the focus of work with PSA Peugeot Citroën as part of the chair in Robotics and Virtual Reality at Mines ParisTech. So far, researchers have been able to put in place a first promising human-robot collaboration. In this collaboration, which took place on a workbench, a robot brought parts at a rate matched to the execution speed of the operator. The operator then assembled them and screwed them together, before giving them back to the robot.

Read on I’MTech: The future of production systems, between customization and sustainable development

Another aim of cobotics is to relieve human operators of difficult tasks. As part of a Horizon 2020 project launched at the end of 2018, Sotiris Manitsaris has tackled the development of ergonomic gesture recognition technologies and the communication of this information to robots. To do this, the gestures are first recorded with the help of communicating objects (smart watch, smartphone, etc.) which the operator wears. The gestures are then learned by artificial intelligence. These new models of collaboration, which are centered around humans and their actions, are designed so they can be implemented on any robotic model. Once the movement is recognized, the question is what information to communicate to the robot so that it can adapt its behavior without affecting its own performance or that of its human collaborator.

Rhythmic Collaboration

Understanding movements and implementing them in robots is also central to the work conducted by Patrick Hénaff. His latest work uses an approach inspired by neurobiology and is based on knowledge of animal motor systems. “We can consider artificial intelligence as being made up of a high-level structure, the brain, and of lower-level intelligence which can be dedicated to movement control without needing to receive higher-level information constantly,” Hénaff explains. More particularly, this research deals with rhythmic gestures, in other words automatic movements which are not ordered by our brain. Instead, these gestures are commanded by neural networks located in our spinal cord. Examples include walking or wiping a surface with a sponge.

Once the action is initiated by the brain, a rhythmic movement occurs naturally and at a pace which is dictated by our morphology. However, it has been demonstrated that for some of these gestures, the human body is able to synchronize naturally with external (visual or aural) signals which are equally rhythmic.  For example, this happens when two people walk together. “In our algorithms, we try to determine which external signals we need to integrate into our equations. This is so that a machine synchronizes with either humans or its environment when it carries out a rhythmic gesture,” describes Patrick Hénaff.
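
A minimal way to picture this entrainment (a sketch only, using textbook Kuramoto-style phase coupling rather than the bio-inspired controller developed at Mines Nancy; frequencies and the coupling gain are invented) is a phase oscillator whose own rhythm is gently pulled towards an external rhythmic signal:

# Sketch of rhythmic entrainment: an internal oscillator (the robot's "gesture
# generator") locks onto an external rhythm (the human's movement). This is a
# textbook phase-coupling model, not the researchers' actual controller.
import math

def entrain(own_freq, external_freq, coupling=1.5, dt=0.01, steps=5000):
    phase, ext_phase = 0.0, 0.0
    for _ in range(steps):
        # The oscillator advances at its own frequency, nudged toward the
        # external phase: d(phase)/dt = own_freq + K * sin(ext_phase - phase)
        phase += dt * (own_freq + coupling * math.sin(ext_phase - phase))
        ext_phase += dt * external_freq
    return (ext_phase - phase) % (2 * math.pi)   # residual phase gap

# The robot's gesture naturally runs at 2.0 rad/s, the human saws at 2.4 rad/s:
print(round(entrain(2.0, 2.4), 2))   # small, constant gap -> the rhythms lock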

From the Laboratory to the Factory: Only One Step

In the laboratory, researchers have demonstrated that robots can carry out rhythmic tasks without physical contact. With the help of a camera, a robot observes the hand gesture of a person waving hello and can then reproduce it at the same pace, synchronizing itself with the gesture. The experiments were also carried out on an interaction with contact: a handshake. The robot learns how to hold a human hand and synchronizes its movement with the person opposite it.

In an industrial setting, an operator carries out numerous rhythmic gestures, such as sawing a pipe, scrubbing the bottom of a tank, or even polishing a surface. To carry out tasks in cooperation with an operator, the robot has to be able to reproduce the operator’s movements. For example, if a robot saws a pipe with a human, the machine must adapt its rhythm so that it does not impose a pace that could cause musculoskeletal disorders. “We have just launched a partnership with a factory in order to carry out a proof of concept. This will demonstrate that new-generation robots can carry out, in a professional environment, a rhythmic task which doesn’t need a precise trajectory but in which the final result is correct,” describes Patrick Hénaff. Now, researchers want to tackle dangerous environments and the most arduous tasks for operators, not with the aim of replacing them, but of helping them work “hand-in-hand”.

Article written for I’MTech by Anaïs Culot


MAGIC: the wonders of infrared camouflage

The MAGIC project aims to develop a camouflage technique against infrared cameras. Mines Saint-Étienne is using its expertise in the optical properties of materials to achieve the project’s objective. Funded by the DGA and supported by the ANR, MAGIC primarily focuses on military applications. Jenny Faucheu, a researcher on the project at Mines Saint-Étienne, explains the scientific approach used.

 

Infrared detection is particularly known for its application in thermal cameras. How do these cameras work?

Jenny Faucheu: They are based on thermography, which is used in thermal diagnosis, for example. The technique produces colorful images that indicate thermal radiation. The principle of cameras that produce this kind of image is based on the capture of far-infrared wavelengths: these are wavelengths of light that are longer than those of visible light, and correspond to the electromagnetic radiation of an object whose temperature is in the region of ten to several hundred degrees. The image displayed reflects the amount of radiation received at these wavelengths.
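
To give an order of magnitude (a standard textbook estimate, not a figure from the interview), Wien’s displacement law links an object’s temperature to the wavelength at which it radiates most strongly:

\[ \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K} \]

\[ T \approx 300\ \mathrm{K}\ (\approx 27\,^{\circ}\mathrm{C}) \;\Rightarrow\; \lambda_{\max} \approx \frac{2898}{300} \approx 9.7\ \mu\mathrm{m}, \qquad T \approx 600\ \mathrm{K} \;\Rightarrow\; \lambda_{\max} \approx 4.8\ \mu\mathrm{m} \]

Both values lie far beyond the 0.4 to 0.7 µm range of visible light, which is why thermal imaging needs dedicated far-infrared sensors.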

The ANR MAGIC project aims to develop a camouflage technique against this type of detection. What is this exactly?

JF: We use a material based on vanadium dioxide. It has thermochromic properties, meaning that its ability to emit infrared rays changes according to the temperature. More precisely, we use a polymorph of this vanadium oxide, a particular crystalline form. When heated to above 70°C, its crystalline form changes and the material goes from emitting 80% of its thermal energy to 40%, making it appear colder than it actually is on thermal cameras. Emitting 40% at 75°C still corresponds to less radiation than emitting 80% at 65°C. This is one of the two camouflage properties we aim to develop.
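
A rough check with the Stefan-Boltzmann law supports this comparison (a grey-body estimate over all wavelengths, ignoring the camera’s actual spectral band; the temperatures are those quoted above):

\[ P = \varepsilon\,\sigma T^{4}, \qquad \sigma = 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \]

\[ 0.4\,\sigma\,(348\ \mathrm{K})^{4} \approx 333\ \mathrm{W/m^{2}} \;<\; 0.8\,\sigma\,(338\ \mathrm{K})^{4} \approx 592\ \mathrm{W/m^{2}} \]

In other words, the switched material at about 75°C radiates noticeably less than an untreated surface at about 65°C, which is why it reads as colder on a thermal camera.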

What is the other camouflage property you are working on?

JF: Thermographic cameras that produce multicolor images are not the only cameras based on infrared emission. The other detection mechanism is the one used by cameras that produce grayscale night images. These cameras amplify near-visible infrared wavelengths and display them in white in the image. Things that emit no infrared radiation are displayed in black. If there is not enough energy to amplify on the image, the camera emits a beam and records what is reflected back to it, a bit like a sonar. In this case, even if the vanadium oxide material emits less radiation, it will still be detected because it will reflect the camera beam.

How can you ensure discretion faced with this second type of camera?

JF: We need to work on the surface texture of the materials and their structure. The approach we use consists in laser texturing the vanadium oxide material. We shape the surface to disperse the infrared rays emitted by the camera in different directions. To do this, we are working with Manutech-USD, which has a laser texturing platform capable of working on large and complex parts. Since the beam is not reflected back towards the camera, it is as though it had passed straight through the object. As far as the camera is concerned, if it receives no reflection there is nothing in front of it. Objects that should be displayed in white in the image without camouflage will instead be displayed in black.

What applications do you foresee for this work?

JF: MAGIC is a response to the ASTRID call, whose projects are funded by the Directorate General for Armaments (DGA). The planned applications are therefore essentially military. We are working with Hexadrone to build a surveillance drone like those found in stores… a stealthy one. We also want to show that it is possible to reduce the thermal signature of engines and infantrymen. By adding a few tungsten atoms to the vanadium oxide material, the temperature for crystalline form change can be decreased from about 70°C to about 35°C. This is very practical for potential human applications. A normally dressed person would appear at 37°C on a camera, but a suit made of this special material could make them undetectable by making them appear much colder.

 


KM3NeT: Searching the Depths of the Sea for Elusive Neutrinos

The sun alone produces more than 64 billion neutrinos per second and per cm2 that pass right through the Earth. These elementary particles of matter are everywhere, yet they remain almost entirely elusive. The key word is almost… The European infrastructure KM3NeT, currently being installed in the depths of the Mediterranean Sea, has been designed to detect the extremely faint light generated by neutrino interactions in the water. Researcher Richard Dallier from IMT Atlantique offers insight on the major scientific and technical challenge of searching for neutrinos. 

 

These “little neutral particles” are among the most mysterious in the universe. “Neutrinos have no electric charge, very low mass and move at a speed close to that of light. They are hard to study because they are extremely difficult to detect,” explains Richard Dallier, member of the KM3NeT team from the Neutrino group at Subatech laboratory[1]. “They interact so little with matter that only one particle out of 100 billion encounters an atom!”

Although their existence was first postulated in the 1930s by physicist Wolfgang Pauli, it was not confirmed experimentally until 1956, by American physicists Frederick Reines and Clyde Cowan; Reines was awarded the Nobel Prize in Physics for this discovery in 1995. This was a small revolution for particle physics. “It could help explain the excess of matter that enabled our existence. The Big Bang created as much matter as it did antimatter, but they mutually annihilated each other very quickly. So, there should not be any left! We hope that studying neutrinos will help us understand this imbalance,” Richard Dallier explains.

The Neutrino Saga

While there is still much to discover about these bashful particles, we do know that neutrinos exist in three forms or “flavors”: the electron neutrino, the muon neutrino and the tau neutrino. The neutrino is certainly an unusual particle, capable of transforming over the course of its journey. This phenomenon is called oscillation: “The neutrino, which can be generated from different sources, including the Sun, nuclear power plants and cosmic rays, is born as a certain type, takes on a hybrid form combining all three flavors as it travels and can then appear as a different flavor when it is detected,” Richard Dallier explains.

The oscillation of neutrinos was first revealed in 1998 by the Super-Kamiokande experiment, a Japanese neutrino observatory whose discovery was also rewarded with the Nobel Prize in Physics in 2015. This change in identity is key: it provides indirect evidence that neutrinos indeed have a mass, albeit an extremely low one. However, another mystery remains: what is the mass hierarchy of these three flavors? The answer to this question would further clarify our understanding of the Standard Model of particle physics.

The singularity of neutrinos is a fascinating area of study. An increasing number of observatories and detectors dedicated to the subject are being installed at great depths, where the combination of darkness and concentration of matter is ideal. Russia has installed a detector at the bottom of Lake Baikal and the United States at the South Pole. Europe, on the other hand, is working in the depths of the Mediterranean Sea. This fishing for neutrinos began in 2008 with the Antares experiment, a unique type of telescope that can detect even the faintest light crossing the depths of the sea. Antares has since made way for KM3NeT, whose sensitivity is improved by orders of magnitude. This experiment has brought together nearly 250 researchers from around 50 laboratories and institutes, including four French laboratories. In addition to studying the fundamental properties of neutrinos, the collaboration aims to discover and study the astrophysical sources of cosmic neutrinos.

Staring into the Universe

KM3NeT is in fact composed of two gigantic neutrino telescopes currently being installed at the bottom of the Mediterranean Sea. The first, called ORCA (Oscillation Research with Cosmics in the Abyss), is located off the coast of Toulon in France. Submerged at a depth of nearly 2,500 meters, it will eventually be composed of 115 strings attached to the seabed. “Optical detectors are placed along each of these 200-meter flexible strings, which are spaced 20 meters apart: 18 spheres measuring 45 centimeters, spaced 9 meters apart, each contain 31 light sensors,” explains Richard Dallier, who is participating in the construction and installation of these modules. “This unprecedented density of detectors is required in order to study the properties of the neutrinos: their nature, their oscillations and thus their masses and classification. The sources of neutrinos ORCA will focus on are the Sun and the terrestrial atmosphere, where they are generated in large numbers by the cosmic rays that bombard the Earth.”


Each of KM3NeT’s optical modules contains 31 photomultipliers to detect the light produced by interactions between neutrinos and matter. These spheres, with a diameter of 47 centimeters (including a covering of nearly 2 cm!), were designed to withstand pressures of 350 bar.

The second KM3NeT telescope is ARCA (Astroparticles Research with Cosmics in the Abyss). It will be located 3,500 meters under the sea off the coast of Sicily. There will be twice as many strings, which will be longer (700 meters) and spaced further apart (90 meters), but which carry the same number of sensors. With a volume of over one km3 (hence the name KM3NeT, for km3 Neutrino Telescope), ARCA will be dedicated to searching for and observing the astrophysical sources of neutrinos, which are much rarer. A total of over 6,000 optical modules containing over 200,000 light sensors will be installed by 2022. These numbers make KM3NeT the largest detector in the world, on a par with its cousin, IceCube, in Antarctica.

Both ORCA and ARCA operate on the same principle, based on the indirect detection of neutrinos. When a neutrino encounters an atom of matter, whether in the air, in water or in the Earth itself (which they easily travel right through), the neutrino can “deposit” its energy there. This energy is instantly transformed into the particle corresponding to the neutrino’s flavor: an electron, a muon or a tau. This “daughter” particle then continues its journey on the same path as the initial neutrino and at the same speed, emitting light in the medium it is passing through, or itself interacting with atoms in the environment and disintegrating into other particles, which will also radiate blue light.

“Since this is all happening at the speed of light, an extremely short light pulse of a few nanoseconds occurs. If the environment the neutrino is passing through is transparent, which is the case for the water of the Mediterranean Sea, and the path goes through the volume occupied by ORCA or ARCA, the light sensors will detect this extremely faint flash,” Richard Dallier explains. Therefore, if several sensors are hit, the direction of the trajectory can be reconstructed and the energy and nature of the original neutrino determined. But regardless of the source, the probability of neutrino interactions remains extremely low: with a volume of 1 km3, ARCA expects to detect only a few neutrinos originating from the universe.

Neutrinos: New Messengers Revealing a Violent Universe

Seen as cosmic messengers, these phantom particles open a window onto a violent universe. “Among other things, the study of neutrinos will provide a better understanding and knowledge of cosmic cataclysms,” says Richard Dallier. The collisions of black holes and neutron stars, supernovae, and even massive stars that collapse produce bursts of neutrinos that bombard us without being absorbed or deflected on their journey. This means that light is no longer the only messenger from the objects in the universe.

Neutrinos have therefore strengthened the arsenal of “multi-messenger” astronomy, involving the cooperation of a maximum number of observatories and instruments throughout the world. Each wavelength and particle contributes to the study of various processes and additional aspects of astrophysical objects and phenomena. “The more observers and objects observed, the greater the chances of finding something,” Richard Dallier explains. And in these extraterrestrial particles lies the possibility of tracing our own origins with greater precision.

[1] SUBATECH is a research laboratory co-operated by IMT Atlantique, the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) of CNRS, and the Université de Nantes.

Article written for I’MTech (in French) by Anne-Sophie Boutaud


Chronic disease: what does the Internet really change in patients’ lives?

For the first time, a study has assessed the impact of digital technology on the lives of patients with chronic diseases. It was conducted by the ICA patient association collective, in partnership with researchers from the Smart Objects and Social Networks chair at Institut Mines-Télécom Business School. The study provides a portrait of the benefits and limitations perceived by chronically ill people for three technologies: the Internet, mobile applications and smart objects. Multiple factors were evaluated, such as the quality of the relationship with the physician, the degree of expertise, the patient’s level of incapacitation and their quality of life.

 

Internet research has become an automatic reflex to learn about any disease. From the common cold to the rarest diseases, patients find information about their cases through more or less specialized sites. Scientific publications have already shown that social networks and health forums are especially used by patients when they are diagnosed. However, the true usefulness of the Internet, apps or smart objects for patients remains unclear. To gain a better understanding of how new technology helps patients, the Impatients, Chroniques & Associés collective (ICA) contacted the Smart Objects and Social Networks Chair at Institut Mines-Télécom Business School. The study, conducted between February and July 2018, focused on people living with chronic disease and their use of digital technology. The results were presented on February 20, 2019 at the Cité des Sciences et de l’Industrie in Paris.

In all, 1,013 patients completed the questionnaire designed by the researchers. The data collected on technology usage shows that, overall, patients are not very attracted by smart objects. 71.8% of respondents reported that they used only the Internet, 1 to 3 times a month. 19.3% said they used both the Internet and mobile applications on a weekly basis. Only 8.9% use smart objects in addition to the Internet and apps.

Read on I’MTech: Healthcare: what makes some connected objects a success and others a flop?

The study therefore shows that uses vary widely and that a certain proportion of patients fall into a “multi-technology” category. However, “contrary to what we might think, the third group comprising the most connected respondents is not necessarily made up of the youngest people,” indicates Christine Balagué, holder of the Smart Objects and Social Networks chair. In the 25-34 age group, the study found “almost no difference between the three technology use groups (20% of each use group is in this age group)“. The desire for digital health solutions is therefore not a generational issue.

Digital technology: a relative benefit for patients?

What sets the study apart is that it cross-references the use of digital technology (Internet, mobile applications and smart objects) with the standard variables used in the literature to characterize patients’ behavior towards their health. This comparison revealed a new result: the patients who use technology the most are on average no more knowledgeable about their disease than patients who are not very connected. Nor are they any better at adopting preventive behavior related to their disease.

“On the other hand, the more connected patients are, the greater their ability to take action in the management of their disease,” says Christine Balagué. Patients in the most connected category believe they are better able to make preventive decisions and to reassure themselves about their condition. However, technology has little impact on the patient-doctor relationship. “The benefit is relative: there is a difference between the benefit perceived by the patient and the reality of what digital tools provide,” concludes Christine Balagué.

Some of the criteria measured by the researchers nevertheless correlate with the degree of technology use and the number of technologies used. This is the case, for example, with patient empowerment. Notably, the most connected patients are also those who most frequently take the initiative to ask their doctor for information or to give their opinion about treatment. These patients also report being the most involved by their doctor in their medical care. On this point, the study concludes that:

“The use of technology[…] does not change the self-perception of chronically ill patients, who all feel equally knowledgeable about their disease regardless of their use of digital technology. On the other hand, access to this information may subtly change their position in their interactions with the medical and nursing teams, leading to a more positive perception of their ability to play a role in decisions concerning their health.”

The flip side of the coin

Although information found on the Internet offers genuine benefits in the relationship with the medical profession, the use of technologies also has some negative effects, according to patient feedback. 45% believe that the use of digital technology has negative emotional consequences. “Patients find that the Internet reminds them of the disease on a daily basis, and that this increases stress and anxiety,” says Christine Balagué. This result may be linked to the type of use among the chronically ill. The vast majority of them generally search for stories from other people with similar pathologies, which frequently exposes them to the experiences of other patients and their relatives.

Personal stories are considered the most reliable source of information by patients, ahead of content provided by health professionals and patient associations, a fact explained by the sheer amount of information available and its uneven quality. Three quarters of respondents indicated that it is difficult to identify and choose reliable information. This sense of mistrust is underlined by other data collected by the researchers through the questionnaire: “71% believe that the Internet is likely to induce self-diagnosis errors.” In addition, a certain proportion of patients (48%) also express mistrust regarding the way certain websites and mobile applications handle privacy. This point highlights the challenge for applications and websites of becoming more transparent about the use of personal data and respect for privacy, in order to gain users’ trust.

Read on I’MTech: Ethical algorithms in health: a technological and societal challenge

The future development of dedicated web services and patient usage is an issue that researchers want to address. “We want to continue this work of collecting experiences to evaluate changes in use over time,” says Christine Balagué. The continuation of this work will also integrate other developing uses, such as telemedicine and its impact on patients’ quality of life. Finally, the researchers are also considering taking an interest in the other side: the doctors’ side. How do practitioners use digital technologies in their practice? What are the benefits in the relationship with the patient? By combining the results from patient and physician studies, the aim will be to obtain the most accurate portrait possible of patient-physician relationships and of treatment processes in the era of hyperconnectivity.

 

 


What is cyber sovereignty?

Sovereignty is a concept that is historically linked to the idea of a physical territory, whereas the digital world is profoundly dematerialized and virtual. So what does the notion of cyber sovereignty mean? It combines the economic strength of online platforms, digital technologies and regulations based on new societal values. Francis Jutand, Deputy CEO of IMT and member of the Scientific Council of the Institut de la Souveraineté Numérique (Institute of Cyber Sovereignty), presents his view on the foundations of this concept.

 

What does it mean to be “sovereign”?

Francis Jutand: The notion of sovereignty can apply to individuals, companies or nations. To be sovereign is to be able to choose. This means being able to both understand and act. Sovereignty is therefore based on a number of components for taking action: technological development, economic and financial autonomy (and therefore power), and the ability to influence regulatory mechanisms. In addition to these three conditions, there is security, in the sense that being sovereign also means being in a space where you can protect yourself from the potential hostility of others. The fifth and final parameter of sovereignty for large geographical areas, such as nations or economic spaces, is the people’s ability to make their voices heard.

How does this notion of sovereignty apply in the case of digital technology?

FJ: The five components of the ability to act transpose naturally into this field. Being sovereign in a digital world means having our own technology and being independent from major economic players in the sector, such as Google, and their huge financial capacity. It also means developing specific regulations on digital technology and being able to protect against cyber-attacks. As far as the general public is concerned, sovereignty consists in training citizens to understand and use digital technology in an informed way. Based on these criteria, three main zones of cyber sovereignty can be defined around three geographical regions: the United States, Europe and China.

What makes these zones of sovereignty so distinct?

FJ: The American zone is based on economic superpowers and a powerful national policy on security and technology operated by government agencies. On the other hand, its regulation of the cyber field is relatively weak. China relies on an omnipresent state with strict regulation and major investments. After lagging behind scientifically and industrially in this area, China has caught up over the past few years. Lastly, Europe has good technological skills in both industry and academia, but is not in a leading position. In its favor, the European zone of sovereignty has strong market power and pioneering regulations based on certain values, such as the protection of personal data. Its biggest weakness is its lack of the economic leadership that could give rise to global digital players.

How is the concept of sovereignty embodied in concrete terms in Europe?

FJ: Europe and its member countries are already investing at a high level in the digital field, through the European Framework Programmes as well as national programs and ongoing academic research. On the other hand, the small number of world-class companies in this field weakens the potential for research and for fruitful collaborations between the academic and industrial worlds. The European Data Protection Board, which is composed of the national data protection authorities of the European Union member states, is another illustration of sovereignty work in the European zone. However, when it comes to competition law and financial regulation, Europe is still lagging behind in developing rules and is unassertive in interpreting them. This makes it vulnerable to lobbies, as shown by the debates on the European directive on copyright.

How does the notion of cyber sovereignty affect citizens?

FJ: Citizens are consumers and users of cyber services. They play a major role in this field, as most of their activities generate personal data. They are a driving force of the digital economy, which, we must remember, is one of the five pillars of sovereignty. This data, which directly concerns users’ identity, is also governed by regulations. Citizens’ expression is therefore very important in the constitution of an area of sovereignty.

Why is the academic world concerned by this issue of cyber sovereignty?

FJ: Researchers, whether from IMT or other institutions, have insights to provide on cyber sovereignty. They are at the forefront of the development and control of new technology, which is also one of the conditions of sovereignty. They train students and work with companies to disseminate this technology. IMT and its schools are active in all these areas. We therefore also have a role to play, notably by using our neutrality to inform our parliamentarians. We have made a first attempt in this direction with an initial event for deputies and senators on the theme of technological and regulatory sovereignty. Our researchers discussed the potential impacts of technology on citizens, businesses and the economy in general.

 


Indoor Air: under-estimated pollutants

While some sources of indoor air pollution are well known, there are others that researchers do not yet fully understand. This is the case for cleaning products and essential oils. The volatile organic compounds (VOCs) they release and their dynamics within buildings are being studied by chemists at IMT Lille Douai.

When it comes to air quality, staying indoors does not keep us safe from pollution. “In addition to outdoor pollutants, which enter buildings, there are the added pollutants from the indoor environment! A wide variety of volatile organic compounds are emitted by building materials, paint and even furniture,” explains Marie Verriele Duncianu, researcher in atmospheric chemistry at IMT Lille Douai. Compressed wood combined with resin, which is often used to make indoor furniture, is one of the leading sources of formaldehyde. In fact, indoor air is generally more polluted than outdoor air. This observation is not new; it has been the focus of numerous information campaigns by environmental agencies, including ADEME and the OQAI, the monitoring center for indoor air quality. However, the recent results of much academic research tend to show that the sources of indoor pollutants are still underestimated and their emissions poorly understood.

“In addition to sources from construction and interior design, many compounds are emitted by the occupants’ activities,” the researcher explains. Little research has been conducted on sources of volatile organic compounds such as cleaning products, cooking activities, and hygiene and personal care products. Unlike their counterparts produced by furniture and building materials, these pollutants originating from residents’ products are much more dynamic. While a wall constantly emits small quantities of VOCs, a cleaning product spontaneously emits a quantity up to ten times more concentrated. This rapid emission makes the task of measuring the concentrations and identifying the sources much more complex.

Since they are not as well known, these pollutants linked to users are also less regulated. “They are not taken into account in regulations at all,” explains Marie Verriele Duncianu. “The only legislation related to this issue is legislation for nursery schools and schools, and legislation requiring a label for construction materials.” Since 1 January 2018, establishments that host children and young people have been required to monitor the concentrations of formaldehyde and benzene in their indoor air. However, no measures have been imposed regarding the sources of these pollutants. Meanwhile, ADEME has issued a series of recommendations advocating the use of green cleaning products for cleaning floors and buildings.

The green product paradox

These recommendations come at a time when consumers are becoming increasingly responsible in terms of their purchases, including cleaning products. Certain cleaning products benefit from an Ecolabel, for example, guaranteeing a smaller environmental footprint. However, the pollutant emissions of these environmentally friendly products have not been studied any more than those of their label-free counterparts. Supported by marketing arguments alone, products featuring essential oils are being hailed as beneficial, without any evidence to back them up. Simply put, researchers do not yet have a good understanding of the indoor pollution caused by either traditional cleaning products or those presented as green. However, it is fairly easy to find false information claiming the opposite.

In fact, it was upon observing misconceptions and “miracle” properties on consumer websites that Marie Verriele Duncianu decided to start a new project called ESSENTIEL. “My fellow researchers and I saw statements claiming that essential oils purified the indoor air,” the researcher recalls. “On some blogs, we even read consumer testimonials of how essential oils eliminate pollutants. It’s not true: while they do have some ability to sanitize the environment in terms of bacteria, they definitely do not eliminate all air pollutants. On the contrary, they add more!”

In the laboratory, the researchers are studying the behavior of products featuring essential oils. What VOCs do they release? How are they distributed in indoor air?

 

Essential oils are in fact high in terpenes. These molecules are allergenic, particularly for the skin. They can also interact with ozone to form fine particles or formaldehyde. By focusing on essential oils and the molecules they release into the air, the ESSENTIAL project wants to help remedy this lack of knowledge about indoor pollutants. The researchers are therefore pursuing two objectives: understanding how emissions of volatile organic compounds from essential oils behave, and determining the risks related to these emissions.

The initial results show unusual emission dynamics. For floor cleaners, “there is a peak concentration of terpenes during the first half-hour following use,” explains Shadia Angulo Milhem, a PhD student participating in the project with Marie Verriele Duncianu’s team. “Furthermore, the concentration of formaldehyde begins to increase steadily four hours after the cleaning activity.” Formaldehyde is a strictly regulated substance because it is an irritant and is carcinogenic in cases of high and repeated exposure. The concentrations measured up to several hours after the use of cleaning products containing essential oils can be attributed to two factors: first, terpenes react with ozone to create formaldehyde; second, the formaldehyde donors used as preservatives and biocides in the cleaning products decompose and release it.

A move towards regulatory thresholds?

As part of the ESSENTIAL project, the researchers have not only measured emissions from cleaning products containing essential oils; they have also studied essential oil diffusers. The results show characteristic emissions for each device. “Reed diffusers, which are small bottles containing wooden sticks, take several hours to reach full capacity,” Shadia Angulo Milhem explains. “The terpene concentrations then stabilize and remain constant for several days.” Vaporizing devices, on the other hand, which heat the oils, emit more spontaneously, resulting in terpene concentrations that are less persistent in the home.

Beyond measuring concentrations, the dynamics of the released volatile organic compounds are difficult to determine. In some buildings, they can be trapped in porous materials and released later due to changes in humidity and temperature. One of the areas the researchers want to explore next is how these compounds are taken up by indoor surfaces. Understanding the behavior of pollutants is essential for establishing the risks they present: how dangerous a compound is depends on whether it disperses quickly in the air or accumulates for several days in paint or suspended ceilings.

Currently, there are no regulatory thresholds for terpene concentrations in the air, due to a lack of knowledge about the public’s exposure and about short- and long-term toxicity. We must keep in mind that the risk associated with exposure to a pollutant depends on the toxicity of the compound, its concentration in the air and the duration of contact. Upon completion of the ESSENTIEL project, expected in 2020, the project team will provide ADEME with a technical and scientific report. While waiting for legislation to be introduced, the results should at least yield recommendation sheets on the use of products containing essential oils. This will provide consumers with real information on both the benefits and the potentially harmful effects of the products they purchase, a far cry from pseudo-scientific marketing arguments.

New multicast solutions could significantly boost communication between cars.

Effective communication for the environments of the future

Optimizing communication is an essential aspect of preparing for the uses of tomorrow, from new modes of transport to the industries of the future. Reliable communications are a prerequisite for delivering high-quality services. Researchers from EURECOM, in partnership with the Technical University of Munich, are working together to tackle this issue, developing new technology aimed at improving network security and performance.

 

In some scenarios involving wireless communication, particularly for essential public safety services or the management of vehicular networks, there is one vital question: what is the most effective way of conveying the same information to a large number of recipients? The tedious solution would be to repeat the same message over and over again to each individual recipient, using a dedicated channel each time. A much quicker way is what is known as multicast. This is what we use when sending an email to several people at the same time, or when a news anchor reads us the news. The sender provides the information only once; it is then duplicated and sent through communication channels capable of reaching all recipients.
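For readers curious about what this looks like in practice, here is a minimal sketch of IP multicast using only Python's standard socket module. The group address and port are arbitrary example values, and real deployments (TV distribution, vehicular networks) rely on far more elaborate mechanisms.

```python
# Minimal sketch of IP multicast with Python's standard library.
# The group address 224.1.1.1 and port 5007 are arbitrary example values.
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007

def send_once(message: bytes) -> None:
    """Send one datagram to the multicast group; the network duplicates it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(message, (GROUP, PORT))
    sock.close()

def receive_once() -> bytes:
    """Join the multicast group and wait for a single datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    data, _ = sock.recvfrom(4096)
    sock.close()
    return data
```

The point of the example is simply that the sender transmits once to a group address; every receiver that has joined the group gets a copy, without a dedicated channel per recipient.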

In addition to TV news broadcasts, multicast is particularly useful for the networks of machines and connected objects expected to follow the arrival of 5G and its future applications. This is the case, for example, with vehicle networks. “In a scenario where cars are all connected to one another, there is a whole bunch of useful information that could be shared with them using multicast technology”, explains David Gesbert, head of the Communication Systems department at EURECOM. “This could be traffic information, notifications about accidents, weather updates, etc.” The issue here is that, unlike TV sets, which do not move about while we are trying to watch the news, cars are mobile.

The mobile nature of recipients means that reception conditions are not always optimal. When driving through a tunnel, behind a large apartment block, or while taking the car out of the garage, communication will struggle to reach it. Despite these constraints – which affect multiple drivers at the same time – messages must still get through for the information service to operate effectively. “The transmission speed of the multicast has to be slowed down in order for it to be able to function with the car located in the worst reception scenario”, explains David Gesbert. In other words, the data rate must be lowered, or more power deployed, for all users of the network. Just three cars going through a tunnel would be enough to slow down the speed at which potentially thousands of cars receive a message.
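The “worst receiver” constraint can be summed up in one line: a conventional multicast can only run at the rate of its weakest link. The figures below are invented, purely for illustration.

```python
# Illustration only: a conventional multicast must be decodable by every
# receiver, so its rate is the minimum of the per-user rates (invented values, Mbit/s).
good_receivers = [48.0, 52.0, 45.0] * 100      # many cars in good conditions
cars_in_tunnel = [1.5, 0.8, 1.2]               # three cars in poor conditions
multicast_rate = min(good_receivers + cars_in_tunnel)
print(f"Multicast rate limited to {multicast_rate} Mbit/s by the worst receiver")
```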

Communication through cooperation

For networks with thousands of users, it is simply not feasible to restrict the distribution characteristics in this way. To tackle this problem, David Gesbert and his team entered into a partnership with the Technical University of Munich (TUM) within the framework of the German-French Academy for the Industry of the Future. These researchers from France and Germany set themselves the task of devising a multicast solution that would not be constrained by this “worst car” problem. “Our idea was as follows: we restrict ourselves to a small percentage of reception terminals which receive the message, but in order to offset that, we ensure that these same users are able to retransmit the message to their neighbors”, he explains. In other words: in your garage, you might not receive the message from the closest antenna, but the car out on the street 30 feet in front of your house will, and it will then be able to relay the message efficiently over a short distance.

Researchers from EURECOM and TUM were thus able to develop an algorithm capable of identifying the most suitable vehicles to target. The message is first transmitted to everyone. Depending on whether or not reception is successful, the best candidates are selected to pass on the rest of the information. Distribution is then optimized for these vehicles through the use of MIMO (multiple-input multiple-output) techniques, which exploit multipath propagation. These vehicles are then tasked with retransmitting the message to their neighbors through vehicle-to-vehicle communication. Tests carried out on these algorithms indicate a drop in network congestion in certain situations. “The algorithm doesn’t provide much out in the country, where conditions tend mostly to be good for everyone”, outlines David Gesbert. “In towns and cities, on the other hand, the number of users in poor reception conditions is a handicap for conventional multicasts, and it is here that the algorithm really helps boost network performance”.
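The published algorithm is considerably more sophisticated, but the general two-phase idea can be sketched as follows: transmit once, note which vehicles decoded the message, pick a few of them as relays, and let them forward the message over short-range vehicle-to-vehicle links. Every value and selection rule below is an invented placeholder.

```python
# Toy sketch of the two-phase idea (not the published algorithm): a first
# multicast reaches some vehicles, a few of them are chosen as relays, and the
# relays forward the message over short-range vehicle-to-vehicle links.
import random

def first_phase(vehicles, success_prob=0.7):
    """Simulate the initial multicast: each vehicle decodes it with some probability."""
    return {v for v in vehicles if random.random() < success_prob}

def pick_relays(decoded, max_relays=3):
    """Placeholder selection rule: a real scheme would use channel quality and positions."""
    return sorted(decoded)[:max_relays]

def second_phase(vehicles, decoded, relays, positions, d2d_range=50.0):
    """Relays forward the message to any vehicle within short-range distance."""
    reached = set(decoded)
    for v in vehicles:
        if v not in reached and any(abs(positions[v] - positions[r]) <= d2d_range for r in relays):
            reached.add(v)
    return reached

vehicles = list(range(10))
positions = {v: 40.0 * v for v in vehicles}    # toy 1-D road positions, in meters
decoded = first_phase(vehicles)
relays = pick_relays(decoded)
reached = second_phase(vehicles, decoded, relays, positions)
print(f"{len(reached)} of {len(vehicles)} vehicles reached after relaying")
```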

The scope of these results extends beyond car networks, however. One other scenario in which the algorithm could be used is the storage of popular content, such as videos or music. “Some content is used by a large number of users. Rather than fetching it from the core network each time a request is made, this content could be stored directly on users’ mobile terminals”, explains David Gesbert. In this scenario, our smartphones would no longer need to communicate with the operator’s antenna to download a video, but instead with another smartphone in the area that has better reception and has already downloaded the content.
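The caching scenario follows the same logic and can be sketched just as simply: before asking the core network for a popular item, a terminal checks whether a nearby peer already holds it. The peer names and content identifiers below are made up.

```python
# Made-up sketch of peer caching: serve popular content from a nearby terminal
# when possible, and fall back to the core network otherwise.
peer_caches = {
    "phone_A": {"video_42", "song_7"},
    "phone_B": {"video_42"},
}

def fetch(content_id, nearby_peers):
    for peer in nearby_peers:
        if content_id in peer_caches.get(peer, set()):
            return f"{content_id} served by {peer} over a short-range link"
    return f"{content_id} fetched from the core network"

print(fetch("video_42", ["phone_B"]))              # served locally
print(fetch("video_99", ["phone_A", "phone_B"]))   # not cached nearby: core network
```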

More reliable communication for the uses of the future

The work carried out by EURECOM and TUM on multicast technology has its roots in a broader project, SeCIF (Secure Communications for the Industry of the Future). The various industrial sectors set to benefit from the rise in communication between objects need reliable links. Adding machine-to-machine relaying to multicasts is just one of the avenues explored by the researchers. “At the same time, we have also been taking a closer look at what impact machine learning could have on the effectiveness of communication”, stresses David Gesbert.

Machine learning is making its way into communication science, providing researchers with solutions to design problems for wireless networks. “Wireless networks have become highly heterogeneous”, explains the researcher. “It is no longer possible for us to optimize them manually because we have lost all intuition amid this complexity”. Machine learning can analyze and extract value from such complex systems, enabling researchers to answer questions that would otherwise be too difficult to tackle.

For example, the French and German researchers are looking at how 5G networks could optimize themselves autonomously based on network usage data. To do this, data on the quality of the radio channel has to be fed back from the user terminal to the decision center. This operation takes up bandwidth, with negative repercussions for the quality of calls and the transmission of data over the Internet, for example. As a result, a limit has to be placed on the quantity of information being fed back. “Machine learning enables us to study a wide range of network usage scenarios and to identify the most relevant data to feed back using as little bandwidth as possible”, explains David Gesbert. Without machine learning, “there is no mathematical method capable of tackling such a complex optimization problem”.
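As a very rough illustration of limited feedback (and not of the researchers' learning method), one can imagine reporting only the few radio measurements that carry the most information, here approximated crudely by their variance; all figures are invented.

```python
# Crude illustration of limited feedback: report only the k measurement series
# that vary the most. Variance is a stand-in for "relevance"; values are invented.
import statistics

measurements = {                      # made-up radio-quality time series
    "beam_1": [0.90, 0.91, 0.90, 0.89],
    "beam_2": [0.20, 0.80, 0.30, 0.70],
    "beam_3": [0.50, 0.50, 0.50, 0.50],
}
feedback_budget = 1                   # how many series the uplink budget allows

ranked = sorted(measurements, key=lambda k: statistics.pvariance(measurements[k]), reverse=True)
print("Feeding back only:", ranked[:feedback_budget])   # beam_2 varies most
```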

The work carried out by the German-French Academy will be vital when it comes to preparing for the uses of the future. Our cars, our towns, our homes and even our workplaces will be home to a growing number of connected objects, some of which will be mobile and autonomous. The effectiveness of communications is a prerequisite to ensuring that the new services they provide are able to operate effectively.

 

[box type=”success” align=”” class=”” width=””]

The research work by EURECOM and TUM on multicasting mentioned in this article was presented at the International Conference on Communications (ICC), where it received the Best Paper Award in the Wireless Communications category, a highly competitive distinction in this scientific field.

[/box]

domain name

Domain name fraud: is the global internet in danger?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n late February 2019, the Internet Corporation for Assigned Names and Numbers (ICANN), the organization that manages the IP addresses and domain names used on the web, issued a warning on the risks of systemic Internet attacks. Here is what you need to know about what is at stake.

What is the DNS?

The Domain Name System (DNS) links a domain name (for example, ameli.fr for French health insurance) to an IP (Internet Protocol) address (in this case, 31.15.27.86). It is now an essential service, since it lets us memorize the identifiers of digital services without having to remember their addresses. Yet, like many older protocols, it was designed to be robust, but not secure.
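Concretely, a resolution can be observed with a few lines of Python using only the standard library; the addresses returned will vary over time and depending on the resolver used.

```python
# Resolving a domain name to its IP addresses with Python's standard library.
import socket

def resolve(name):
    """Return the IPv4/IPv6 addresses the DNS currently associates with a name."""
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

print(resolve("ameli.fr"))   # the addresses returned will vary
```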

 

DNS defines zones within which an authority is free to create domain names and publish them externally. The benefit of this mechanism is that the association between the IP address and the domain name is closely managed. The disadvantage is that several queries are sometimes required to resolve a name, in other words, to associate it with an address.

Many organizations that offer internet services have one or several domain names, which are registered with providers of this registration service, known as registrars. These registrars are themselves accredited, directly or indirectly, by ICANN, the American organization responsible for coordinating the Internet’s naming system. In France, the reference organization is AFNIC, which manages the “.fr” domain.

We often refer to a fully qualified domain name, or FQDN. In practice, the Internet is divided into top-level domains (TLDs). The initial American domains divided names by type of organization (commercial, university, government, etc.). National domains such as “.fr” quickly followed. More recently, ICANN has authorized the registration of a wide variety of new top-level domains. The information related to these top-level domains is held by a group of 13 servers distributed around the globe to ensure reliable and fast responses.

The DNS protocol establishes communication between the user’s machine and a domain name server. This communication allows the name server to be queried to resolve a domain name, in other words, to obtain the IP address associated with it. It also allows other information to be obtained, such as the domain name associated with an address (a reverse lookup) or the mail server associated with a domain name in order to send an electronic message. For example, when we load a page in our browser, the browser performs a DNS resolution to find the correct address.
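These different kinds of queries can be issued explicitly, for instance with the third-party dnspython library (version 2.x, installed with `pip install dnspython`); the records returned here are examples and will vary.

```python
# The same protocol answers several kinds of questions (dnspython 2.x required).
import dns.resolver
import dns.reversename

# Address record: which IP addresses serve this name?
for rdata in dns.resolver.resolve("example.com", "A"):
    print("A  ", rdata.address)

# Mail exchanger record: where should email for this domain be delivered?
for rdata in dns.resolver.resolve("example.com", "MX"):
    print("MX ", rdata.preference, rdata.exchange)

# Reverse lookup: which name is associated with this address?
reverse_name = dns.reversename.from_address("8.8.8.8")
for rdata in dns.resolver.resolve(reverse_name, "PTR"):
    print("PTR", rdata.target)
```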

Due to the distributed nature of the database, often the first server contacted does not know the association between the domain name and the address. It will then contact other servers to obtain a response, through an iterative or recursive process, until it has queried one of the 13 root servers. These servers form the root level of the DNS system.

To prevent a proliferation of queries, each DNS server locally stores the responses it receives, associating a domain name with an address, for a limited period of time. This cache makes it possible to respond more quickly if the same request is made within a brief interval.
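The principle of this cache can be sketched in a few lines; a real resolver honors the time-to-live (TTL) carried by each DNS record rather than the fixed duration assumed here.

```python
# Minimal sketch of a DNS-style cache: keep an answer until its TTL expires.
import time

_cache = {}   # name -> (address, expiry timestamp)

def cached_resolve(name, resolve, ttl=30.0):
    entry = _cache.get(name)
    now = time.monotonic()
    if entry and entry[1] > now:
        return entry[0]              # answered from the cache, no new query
    address = resolve(name)          # otherwise perform a real resolution
    _cache[name] = (address, now + ttl)
    return address
```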

Vulnerable protocol

DNS traffic is generally allowed through, especially within company networks. An attacker can therefore use it to bypass protection mechanisms and communicate with compromised machines, for example to control botnets (networks of compromised machines). The defense relies on more specific filtering of these communications, for instance by requiring the systematic use of a DNS relay controlled by the victim organization. The domain names contained in DNS queries can then be checked against blocklists and allowlists in order to identify and block abnormal queries.
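The filtering step can be pictured as follows: the DNS relay inspects every queried name against blocklists and allowlists before forwarding it. The domain lists below are invented examples.

```python
# Sketch of DNS query filtering at a relay; the lists are invented examples.
BLOCKLIST = {"botnet-c2.example", "exfiltration.example"}
ALLOWLIST_SUFFIXES = (".ameli.fr", ".example.com")

def allow_query(qname):
    qname = qname.rstrip(".").lower()
    if qname in BLOCKLIST:
        return False                 # known-bad domain: block and raise an alert
    if qname.endswith(ALLOWLIST_SUFFIXES):
        return True                  # explicitly trusted zones
    return True                      # everything else: forward, log, or rate-limit by policy

print(allow_query("www.ameli.fr"))        # True
print(allow_query("botnet-c2.example"))   # False
```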


The DNS protocol also makes denial-of-service attacks possible. Anyone can issue a DNS query while spoofing an IP address as the source. The DNS server will naturally respond to that false address, whose owner is in fact the victim of the attack, since it receives unwanted traffic. The DNS protocol also makes it possible to carry out amplification attacks, in which the volume of traffic sent from the DNS server to the victim is much greater than the traffic sent from the attacker to the DNS server. It therefore becomes easier to saturate the victim’s network link.
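A back-of-the-envelope calculation shows why amplification is attractive to attackers; the packet sizes below are rough, purely illustrative orders of magnitude.

```python
# Rough orders of magnitude, for illustration only.
query_bytes = 60        # a small DNS query sent with a spoofed source address
response_bytes = 3000   # a large response returned to the victim
print(f"Amplification factor: about {response_bytes / query_bytes:.0f}x")
```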

The DNS service itself can also become the victim of a denial-of-service attack, as was the case for the DNS provider Dyn (DynDNS) in 2016. This triggered cascading failures, since certain services rely on the availability of DNS in order to function.

Protection against denial-of-service attacks can take several forms. The most common today is filtering network traffic to eliminate excess traffic. Anycast is also increasingly used to replicate the attacked services when needed.

Cache Poisoning

A third vulnerability, widely exploited in the past, is the attack on the link between the domain name and the IP address. It allows an attacker to usurp a server’s address and attract its traffic. The attacker can therefore “clone” a legitimate service and obtain sensitive information from misled users: usernames, passwords, credit card information, etc. This process is relatively difficult to detect.

As mentioned above, DNS servers have the capacity to store the responses to the queries they have issued for a limited time and to use this information to answer subsequent queries directly. The so-called cache-poisoning attack allows an attacker to falsify the association stored in the cache of a legitimate server. For example, an attacker can trigger queries at an intermediate DNS server and flood it with forged responses; the server accepts the first response that matches its request.

The consequences last only as long as the poisoned entry remains in the cache, but during that time queries made to the compromised server are diverted to an address controlled by the attacker. Since the initial protocol does not include any means of verifying the domain-address association, clients cannot protect themselves against the attack.

This often results in a fragmented internet, with clients communicating with the compromised DNS server being diverted to a malicious site, while clients communicating with other DNS servers are sent to the original site. For the original site, this attack is virtually impossible to detect, apart from a decrease in traffic. This decrease in traffic can have significant financial consequences for the targeted organization.

Security certificates

The purpose of secure DNS (Domain Name System Security Extensions, or DNSSEC) is to prevent this type of attack by allowing the user or an intermediate server to verify the association between the domain name and the address. It is based on cryptographic signatures, comparable to the certificates used to verify the validity of a website (the little padlock that appears in the browser’s address bar). In theory, verifying the signature is all that is needed to detect an attack.
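In practice, an application rarely validates the signatures itself; it usually relies on a validating resolver and checks the AD (Authenticated Data) flag of the response. Here is a sketch with the dnspython library, assuming the upstream resolver validates DNSSEC and the queried zone is signed.

```python
# Checking whether a response was DNSSEC-validated by the upstream resolver.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["9.9.9.9"]         # example of a validating public resolver
resolver.use_edns(0, dns.flags.DO, 1232)   # ask for DNSSEC records
answer = resolver.resolve("afnic.fr", "A")
validated = bool(answer.response.flags & dns.flags.AD)
print("DNSSEC-validated" if validated else "Not validated")
```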

However, this protection is not perfect. The verification process for “domain name-IP address” associations remains incomplete, partly because a number of registries have not implemented the necessary infrastructure. Although the standard itself was published nearly fifteen years ago, we are still waiting for the deployment of the necessary technology and structures. The emergence of services like Let’s Encrypt has helped spread the use of certificates, which are necessary for secure browsing and DNS protection. However, the use of these technologies by registries and service providers remains uneven; some countries are more advanced than others.

Although residual vulnerabilities do exist (such as direct attacks on registries to obtain domains and valid certificates), DNSSEC offers a solution for the type of attacks recently reported by ICANN. These attacks rely on DNS fraud, or more precisely on the falsification of DNS records in registry databases, which means either that these registries were compromised or that they are permeable to the injection of false information. This modification of a registry’s database can be accompanied by the injection of a certificate, if the attacker has planned for it, making it possible to circumvent DNSSEC in the worst-case scenario.

Such modification of DNS data implies fluctuations in the domain-IP address associations. These fluctuations can be observed and can potentially trigger alerts, so it is difficult for an attacker to go completely unnoticed. But since such fluctuations also occur legitimately, for example when a customer changes provider, whoever supervises the system must remain extremely vigilant in order to make the right diagnosis.

Institutions targeted

The attacks reported by ICANN had two significant characteristics. First, they were active for a period of several months, which implies a determined, well-equipped and strategic attacker. Second, they effectively targeted institutional sites, which indicates that the attacker was strongly motivated. It is therefore important to take a close look at these attacks and understand the mechanisms the attackers used in order to fix the vulnerabilities, most likely by reinforcing good practices.

ICANN’s promotion of the DNSSEC protocol raises questions. It clearly must become more widespread. However, there is no guarantee that these attacks would have been blocked by DNSSEC, nor even that they would have been more difficult to carry out. Additional analysis will be required to reassess the security threat to the protocol and the DNS databases.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Networks and Telecommunication Services Department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original article (in French) was published in The Conversation under a Creative Commons license.