Gérard Dray
IMT Mines Alès | Artificial Intelligence, Machine Learning
This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”
Gauthier Picard: A smart home is made up of an assortment of fairly different objects. This is very different from industrial sensor networks, in which devices are designed with similar memory capacities and identical structures. In a house, we cannot put the same computing capacity in a light bulb as in an oven, for example. If the occupant expects a variety of operating scenarios in which the objects coordinate with one another, we must be able to take the objects’ differences into account. A smart home is also a very dynamic environment. You must be able to add things such as an intelligent light bulb, or a Raspberry Pi-type nanocomputer to control the blinds, whenever you want to, without degrading performance for the user.
Olivier Boissier: We use what we call a multi-agent approach. This is central to our discipline of decentralized artificial intelligence. We use the term ‘decentralized’ rather than ‘distributed’ to highlight that making a house ‘smart’ takes more than distributing knowledge among the different devices: the decision-making also needs to be decentralized. We use the term ‘agent’ to describe an object, a service that manages several objects, or a service that itself manages several services. Our aim is to make these agents organize themselves via rules which allow them to exchange information in the best way possible. But not all household objects will become agents because, as Gauthier said, some objects don’t have sufficient computing capacity and are unable to organize themselves. Therefore, one of the biggest questions we ask ourselves is which objects should remain simple objects that perceive or execute things, such as a sensor or a small LED, and which should become agents.
GP: If we again use light bulbs and light as an example, we can imagine a user asking for the light level in their smart home to fall by 40% when they are in their living room after 9pm. The user doesn’t care which object decides or acts to carry out the request; what interests them is having less light. It’s up to the global system to optimize the decisions: which light bulb to turn off, whether the TV also needs to be turned off since it emits light even when it’s not being used, or whether the blinds can stay open because it’s still daylight outside. All of these decisions need to be made collectively, potentially with constraints set by the occupant, who might want to lower the electricity bill, for example. No centralized entity manages all of these decisions; instead, each element reacts depending on what the other elements do. If it is summer, and therefore still light outside, does the house need more lights on? If it does, the agents will first turn on the bulbs which consume the least energy and emit the most light. If this is not enough, other agents will turn on other bulbs.
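To make the coordination idea concrete, here is a minimal sketch of the kind of decision described above: lamp agents are switched on in order of efficiency until the requested light level is reached. This is our own illustration, not the researchers’ system; the agent names, light outputs and the greedy strategy are all assumptions.

```python
# Minimal sketch (not the researchers' system): lamp "agents" advertise their
# light output and power draw, and are activated greedily, most lumens per watt
# first, until the requested light level is met. All values are illustrative.

from dataclasses import dataclass

@dataclass
class LampAgent:
    name: str
    lumens: float   # light output when switched on
    watts: float    # power draw when switched on

    @property
    def efficiency(self) -> float:
        return self.lumens / self.watts

def coordinate_lighting(agents, target_lumens, daylight_lumens=0.0):
    """Return which lamps to switch on and the light level actually provided."""
    plan, provided = [], daylight_lumens
    for agent in sorted(agents, key=lambda a: a.efficiency, reverse=True):
        if provided >= target_lumens:
            break
        plan.append(agent.name)
        provided += agent.lumens
    return plan, provided

if __name__ == "__main__":
    living_room = [
        LampAgent("led_strip", lumens=400, watts=5),
        LampAgent("ceiling_bulb", lumens=800, watts=12),
        LampAgent("halogen_spot", lumens=900, watts=40),
    ]
    # "Lower the light level by 40%": aim for 60% of a nominal 1500 lumens.
    print(coordinate_lighting(living_room, target_lumens=0.6 * 1500))
```

In a real multi-agent deployment this choice would not be computed in one place; each agent would decide locally from what it perceives of the others, which is precisely the decentralization the researchers describe.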
GP: The problem with a centralized solution is that everything depends on a single device. With this approach, it is very likely that all information will be stored in the cloud. If the network is faulty, or if there is too much activity, the system’s performance suffers. The advantage of the multi-agent approach is that it keeps data close to where decisions are made. Since everything is done locally, decisions take less time and data is better protected. The system is therefore more resilient and can better meet users’ privacy requirements.
OB: But we are not saying that centralized solutions are necessarily worse. The multi-agent approach requires efforts to coordinate the objects and services. It should be preferred in complex environments, where it is necessary to have data close to where the decision is being made. If a centralized management algorithm works for a precise and simple action, then that’s fine. The multi-agent approach becomes interesting when there are large quantities of data which need to be processed quickly. This is the case when a smart home includes several users with multiple, sometimes conflicting, functions.
OB: In the case of lighting, a conflicting situation would be two users in the same room with different preferences. The same agents are asked to carry out two incompatible decision-making processes. This situation can be reduced to a conflict over resources. Conflicts like this have a high chance of occurring because we are in a dynamic environment. The agents make action plans to respond to a user’s demands, but if another user enters the room, the plan is disrupted. Therefore, conflicts can’t always be predicted in advance; they often only appear when the plan is being executed. In certain cases, simple rules mean that the problem can be resolved quickly. This happens when priority functions, such as emergency assistance or the security of the building, take precedence over entertainment functions. In other cases, ways to resolve conflicts between agents must be created.
GP: Negotiation is a good example of a technique which solves this problem. Because the conflict is over a resource, each agent can bid for the functions that it wishes to use. If it wins the bid, it accumulates a debt which reduces its chances of winning the next one. Over time, the agents regulate themselves. By adopting an economic approach between agents, we can also try to find a Nash equilibrium, in which each agent maximizes its own outcome given what the other agents choose to do.
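As a toy illustration of the debt mechanism mentioned above (ours, not the researchers’ protocol; the bid values are arbitrary), repeated auctions in which each win handicaps the winner’s next bids end up sharing the resource over time:

```python
# Toy illustration (not the researchers' protocol): agents bid for a shared
# resource; each win adds "debt" that lowers the winner's future bids, so
# access to the resource evens out over repeated rounds.

def run_auctions(agents, rounds):
    """agents: dict mapping agent name -> how much it values the resource."""
    debt = {name: 0.0 for name in agents}
    winners = []
    for _ in range(rounds):
        # Effective bid = value of the resource to the agent minus its debt.
        effective = {name: value - debt[name] for name, value in agents.items()}
        winner = max(effective, key=effective.get)
        winners.append(winner)
        debt[winner] += agents[winner]   # winning costs future priority
    return winners

if __name__ == "__main__":
    # Two user-preference agents competing for the same lighting resource.
    print(run_auctions({"user_A": 1.0, "user_B": 0.8}, rounds=6))
    # Access alternates instead of always going to the highest bidder.
```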
OB: There are several ways that agents can self-organize. It can be done through stigmergy, whereby the agents don’t communicate with each other; they simply act in response to what is happening around them. This can also be in response to information that is placed in their environment by other agents, which allows them to respond to the user’s request. Another method is introducing a global behavior policy for all the agents, such as privacy, and leaving the agents to interpret it in a collective manner. In this case, the user simply gives their preference on what they want to remain confidential and the agents communicate the information accordingly. We try to combine these approaches by adding more coordination protocols, such as the conflict management rules which were mentioned above.
GP: All the agents have access to a definition of their environment. They know the rules and the roles that they can play, and they adapt to this environment. It’s a bit like when you learn the Highway Code so that you know how to act when you approach a crossroads. You know what can happen and what other motorists are supposed to do. If you find yourself in a situation which does not follow the usual rules, for example because there is a traffic jam, or an accident has happened right in the middle of the crossroads, you adapt the rules. Agents should be able to do the same thing. They should react and change the system so that they can organize themselves and respond to the user’s demands.
OB: Currently, there is a lot of research providing solutions that are effective in theory for the problems we have raised. We know how to build protocols which satisfy the organization functions, and we know how to configure behavioral policies amongst agents. However, there is still a lot of work to be done to move from theory to practice. When we have a concrete case of a smart house with large amounts of information arriving at any time, the system must be able to process that data. From a practical point of view, we also need to answer fundamental questions about what a smart home should be for the user. Should they have control over absolutely everything, or can we leave the decisions to agents without user control?
GP: We have to understand that we aren’t dealing here with neural networks which make decisions like black boxes. In the case of the multi-agent approach, there is a history of the decisions of the agent, with the plan that it puts in place to reach that decision, and the reasons for creating the plan. So even if the decision is left to the agent, that doesn’t mean that the user won’t know how it came about. There is still a control mechanism, and the user can change their preferences if they need to. It’s not as if the agent decides without the user having any opportunity to know what it is doing.
OB: It’s an approach to AI which is different from what people imagine artificial intelligence to be. It is not yet as well known as the learning approach. Decentralized AI is still difficult for the general public to grasp, but there are now more and more uses for the technology, which means that it’s becoming increasingly necessary. Twenty years ago, systems often had a centralized solution. Today, notably with the development of the IoT (Internet of Things), decentralization is an obligation, and decentralized AI is recognized as the most logical solution for uses such as smart homes or Smart Cities.
This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”
It’s late Saturday morning and Mrs. Little enters her usual supermarket, eyes fixed on her watch. In front of her, the aisles are overrun with shopping carts overflowing with all different types of products. She plunges into the crowd, weaving her way between the shoppers and dodging the promotional displays which block the middle of the aisles. Somehow, she manages to pick up two packs of water before fighting her way back to the other end of the store to get some dog food. As her cart becomes heavier, it becomes more difficult to maneuver. She leaves it at the end of the aisle and then wanders around the store, shopping list in hand, in search of cream, but without success.
Mrs. Little’s story is the story of every shopper who endures the ordeal of the supermarket every week. Today, consumers want to save time when they are shopping. The large retail industry is increasingly in competition with online businesses, and stores are focusing all their efforts on keeping their customers. What if one of the solutions was an intelligent consumer guidance system? “We could create an app which would automatically calculate the best possible route around the store, based on a customer’s shopping list, to reduce the amount of time that someone has to spend shopping,” suggest Marin Lujak and Arnaud Doniec, experts in artificial intelligence at IMT Lille-Douai.
To provide guidance to customers in real time, the two researchers have devised a multi-agent system. This approach consists in developing distributed artificial intelligence which is made up of several small intelligent devices, called agents, that are distributed throughout the store and interact with each other. A collective intelligence then emerges from the sum of all these interactions.
In this architecture, fixed agents, in the form of proximity sensors or cameras linked to the store network, evaluate the density of customers per square meter in the aisles. Other agents installed on the customers’ smartphones use the client’s shopping list, the location of the items and the current congestion levels to calculate the itinerary for the shortest overall journey around the store. If an aisle is congested, information is sent to the app, which then guides the customer towards another part of the store and brings them back to that aisle later. At the moment, this model is purely theoretical and must be developed further before it can be applied to real cases. Product-related constraints could be added, for example: starting with the heaviest or bulkiest products and finishing with the frozen food.
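A minimal sketch of the routing idea described above, with invented aisle positions and congestion values (in the researchers’ architecture these would come from the in-store sensor agents), could look like this:

```python
# Hypothetical sketch of the routing idea: order a shopping list so the walk
# is short, penalising congested aisles. Aisle coordinates and congestion
# values are made up; a real system would get them from in-store sensors.

import math

def route(shopping_list, item_aisle, aisle_pos, congestion, start=(0, 0)):
    """Greedy nearest-neighbour route with a congestion penalty per aisle."""
    def cost(pos, item):
        aisle = item_aisle[item]
        x, y = aisle_pos[aisle]
        distance = math.hypot(x - pos[0], y - pos[1])
        return distance * (1.0 + congestion.get(aisle, 0.0))

    remaining, pos, order = list(shopping_list), start, []
    while remaining:
        nxt = min(remaining, key=lambda item: cost(pos, item))
        remaining.remove(nxt)
        order.append(nxt)
        pos = aisle_pos[item_aisle[nxt]]
    return order

if __name__ == "__main__":
    item_aisle = {"water": "A1", "dog food": "C4", "cream": "B2"}
    aisle_pos = {"A1": (1, 0), "B2": (3, 2), "C4": (6, 5)}
    congestion = {"B2": 2.0}   # heavy penalty on the crowded dairy aisle
    print(route(["water", "dog food", "cream"], item_aisle, aisle_pos, congestion))
```

Here congestion simply inflates the cost of reaching an aisle, so the crowded dairy aisle is pushed to the end of the route, mirroring the “come back to it later” behavior described above.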
Large retailers are showing a growing interest in the buying experience of their clients. However, even though this solution could improve the buying experience of a customer who is in a rush, it would also mean that the client would spend less time in the store. So, why would a store manager want to invest in this tool? For Marin Lujak and Arnaud Doniec, it’s a question of balance. “Each company can take ownership of the tool and include things such as alerts for promotional offers etc. We can also imagine that companies will be able to make the most of the app by guiding the client towards certain aisles according to their centers of interest.”
Spending less time in a store is good, but it’s even better if the customer finds all the products that they need. Another way to keep consumers is to get to know and adapt to their needs. Since 2010, a researcher at IMT Mines Alès, Jacky Montmain, has been collaborating with the company TRF Retail to develop supervision and diagnostic tools for product performance. “The tool is interesting for large retailers as it allows them to track the performance of a product, or a family of products, in an entire network of stores. It allows them to make comparisons and understand where a product is being sold or not, and why,” explains Montmain. The store manager is then free to adjust the range of products that they offer on their shelves according to this data.
When shops look at their clients, they look first and foremost at the revenue they bring in. But how can you identify and distinguish what people are buying in an intelligent manner when a supermarket has between 100,000 and 150,000 products on sale? Jacky Montmain and PhD student Jocelyn Poncelet answer this question by establishing a product classification system. This tree-structured classification is made up of five levels: the product, the family of products, the department, the sub-category and the category. For example, a fruit yogurt is part of the yogurt family in the fresh products department and the sub-category of dairy products, which itself is in the category of food. “By providing this intelligent breakdown, we can determine the consumption habits of customers and then compare them with each other. If we simply compare customers’ till receipts, then a fruit yogurt is as different from a natural yogurt as it is from a laundry detergent,” explains Jocelyn Poncelet.
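As a rough illustration of why the tree matters (our own example, with invented category paths, not the researchers’ data), a similarity measured on the shared part of the classification path behaves very differently from a flat comparison of receipts:

```python
# Illustrative sketch of taxonomy-based comparison (levels and products are
# invented): two products are closer if they share a longer prefix of the
# category > sub-category > department > family > product path.

def path_similarity(path_a, path_b):
    """Fraction of the hierarchy the two products share, from the root down."""
    shared = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        shared += 1
    return shared / max(len(path_a), len(path_b))

if __name__ == "__main__":
    fruit_yogurt = ["food", "dairy products", "fresh products", "yogurts", "fruit yogurt"]
    plain_yogurt = ["food", "dairy products", "fresh products", "yogurts", "plain yogurt"]
    detergent    = ["household", "laundry", "detergents", "liquid", "brand X"]

    print(path_similarity(fruit_yogurt, plain_yogurt))  # 0.8: same family
    print(path_similarity(fruit_yogurt, detergent))     # 0.0: nothing in common
```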
Customer segmentation has proven its worth thanks to a field experiment conducted in two stores (a small shop and another which specializes in DIY). In time, the product classification, and therefore the algorithm for identifying buying habits, should be integrated into the product performance evaluation tool mentioned above. The aim is better stock management for large retailers and optimized sourcing from suppliers. Finally, this classification should help strike a balance between the range of products each store offers and the real needs of the customers who shop there.
Article written for I’MTech by Anaïs Culot.
This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”
A robotic arm scrubs the bottom of a tank in perfect synchronization with the human hand next to it. They’re moving at the same pace, a rhythm dictated by the way the operator moves. Sometimes fast, sometimes slightly slower, this harmony of movement is achieved through the artificial intelligence installed in the anthropomorphic robot. In the neighboring production area, a self-driving vehicle dances around the factory. It dodges every obstacle in its way until it delivers the parts it is transporting to a manager on the production line. With impeccable timing, the human operator retrieves the parts and then, once they have finished assembling them, leaves them on the transport tray of the small vehicle, which sets off immediately. During its journey, the machine passes several production areas where humans and robots carry out their jobs “hand-in-hand”.
Even though anthropomorphic robots like robotic arms or small self-driving vehicles are already being used in some factories, they are not yet capable of collaborating with humans in this way. Currently, robot manufacturers pre-integrate sensors and algorithms into the device. However, their future interactions with humans are not considered during their development. “At the moment, there are a lot of situations where human operators and robots work side-by-side in factories, but don’t interact a lot together. This is because robots don’t understand humans when they’re in contact with them,” explains Sotiris Manitsaris, a specialized researcher in collaborative robotics at Mines ParisTech.
Human-robot collaboration, or cobotization, is an emerging field in robotics which redefines the role of robots as working “with” rather than “instead of” humans. By offsetting the weaknesses of humans and robots with the strengths of the other, this approach allows factory productivity to increase while retaining jobs. Human workers bring flexibility, dexterity and decision-making, whilst robots bring efficiency, speed and precision. But to be able to collaborate properly, robots have to be flexible, interactive and, above all, intelligent. “Robotics is the tangible aspect of artificial intelligence. It allows AI to act on the outside world with a constant perception-action loop. Without this, the robot would not be able to operate,” says Patrick Hénaff, specialist in bio-inspired artificial intelligence at Mines Nancy. From the automotive industry to fashion and luxury goods, all sectors are interested in integrating collaborative robotics.
Beyond direct interaction between humans and machines, the entire production cycle could become more flexible, depending more on the operator’s pace and the way that they work. “The robot has to respond to the needs of humans but also anticipate their behavior. This allows it to adapt dynamically,” explains Sotiris Manitsaris. For example, on an assembly line in the automotive industry, each task is carried out in a specific time. If the robot anticipates the operator’s movements, then it can also adapt to their speed. This issue has been the focus of work with PSA Peugeot Citroën as part of the chair in Robotics and Virtual Reality at Mines ParisTech. So far, researchers have put in place the first promising human-robot collaborations. In one of them, set up on a workbench, a robot brought parts at a rate matched to the operator’s speed; the operator assembled and screwed them together before handing them back to the robot.
Read on I’MTech: The future of production systems, between customization and sustainable development
Another aim of cobotics is to relieve human operators of difficult tasks. As part of a Horizon 2020 project launched at the end of 2018, Sotiris Manitsaris has tackled the development of ergonomic gesture recognition technologies and the communication of this information to robots. First, the gestures are recorded with the help of communicating objects (smart watch, smartphone, etc.) worn by the operator. The gestures are then learned by artificial intelligence. These new models of collaboration, centered around humans and their actions, are designed so they can be implemented on any robotic model. Once a movement is recognized, the question is what information to communicate to the robot so that it can adapt its behavior without affecting its own performance or that of its human collaborator.
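A toy sketch of the recognition step may help fix ideas. The article does not describe the project’s actual models; the gesture names, features and nearest-template matching below are our own illustrative assumptions.

```python
# Toy sketch (not the project's actual models): accelerometer windows from a
# smartwatch are reduced to simple features and matched to the closest known
# gesture template learned beforehand.

import numpy as np

def features(window):
    """window: (n_samples, 3) accelerometer readings -> small feature vector."""
    w = np.asarray(window)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

def recognise(window, templates):
    """Return the label whose feature template is nearest to the new window."""
    f = features(window)
    return min(templates, key=lambda label: np.linalg.norm(f - templates[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical templates for two work gestures (synthetic data).
    templates = {
        "reach_for_part": features(rng.normal([0.0, 0.5, 9.8], 0.1, (50, 3))),
        "screwing":       features(rng.normal([0.3, 0.0, 9.8], 0.8, (50, 3))),
    }
    new_window = rng.normal([0.3, 0.0, 9.8], 0.8, (50, 3))
    print(recognise(new_window, templates))   # expected: "screwing"
```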
Understanding movements and implementing them in robots is also central to the work conducted by Patrick Hénaff. His latest work uses an approach inspired by neurobiology and based on knowledge of animal motor systems. “We can consider artificial intelligence as being made up of a high-level structure, the brain, and of lower-level intelligence which can be dedicated to movement control without needing to receive higher-level information constantly,” Hénaff explains. More specifically, this research deals with rhythmic gestures, in other words automatic movements which are not ordered by our brain but commanded by neural networks located in our spinal cord, as when walking or wiping a surface with a sponge.
Once the action is initiated by the brain, a rhythmic movement occurs naturally and at a pace which is dictated by our morphology. However, it has been demonstrated that for some of these gestures, the human body is able to synchronize naturally with external (visual or aural) signals which are equally rhythmic. For example, this happens when two people walk together. “In our algorithms, we try to determine which external signals we need to integrate into our equations. This is so that a machine synchronizes with either humans or its environment when it carries out a rhythmic gesture,” describes Patrick Hénaff.
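A minimal sketch of this synchronization idea (ours, not the lab’s controller; the frequencies and coupling strength are arbitrary) uses a single phase oscillator, standing in for a central pattern generator, that is pulled towards an external rhythmic signal:

```python
# Minimal sketch (not the lab's actual controller): a phase oscillator,
# standing in for a central pattern generator, nudges its own phase towards
# an external rhythmic signal until the two lock.

import math

def simulate(own_freq=1.0, external_freq=1.2, coupling=2.0, dt=0.01, steps=3000):
    """Kuramoto-style phase oscillator entrained by an external rhythm."""
    phase, ext_phase = 0.0, 0.0
    for _ in range(steps):
        # The coupling term pulls the oscillator's phase towards the signal's.
        phase += dt * (2 * math.pi * own_freq
                       + coupling * math.sin(ext_phase - phase))
        ext_phase += dt * 2 * math.pi * external_freq
    # Once locked, the phase difference settles to a constant value.
    return (ext_phase - phase) % (2 * math.pi)

if __name__ == "__main__":
    print(f"steady phase lag: {simulate():.2f} rad")
```

When the coupling is strong enough relative to the frequency mismatch, the phase difference stops drifting: the oscillator has locked onto the external rhythm, which is the behavior exploited for gestures such as handshakes or joint sawing.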
In the laboratory, researchers have demonstrated that robots can carry out rhythmic tasks without physical contact. With the help of a camera, a robot observes the hand gestures of a person waving hello and can then reproduce the movement at the same pace, synchronizing itself with the gesture. The experiments were also carried out on an interaction with contact: a handshake. The robot learns how to hold a human hand and synchronizes its movement with the person opposite it.
In an industrial setting, an operator carries out numerous rhythmic gestures, such as sawing a pipe, scrubbing the bottom of a tank or polishing a surface. To carry out tasks in cooperation with an operator, the robot has to be able to reproduce the operator’s movements. For example, if a robot saws a pipe with a human, the machine must adapt its rhythm so that it does not cause musculoskeletal disorders in the operator. “We have just launched a partnership with a factory in order to carry out a proof of concept. This will demonstrate that new-generation robots can carry out, in a professional environment, a rhythmic task which doesn’t need a precise trajectory but in which the final result is correct,” describes Patrick Hénaff. Now, researchers want to tackle dangerous environments and the most arduous tasks for operators, not with the aim of replacing them, but of helping them work “hand-in-hand”.
Article written for I’MTech by Anaïs Culot
Jenny Faucheu: They are based on thermography, which is used in thermal diagnosis, for example. The technique produces colorful images that indicate thermal radiation. Cameras that produce this kind of image work by capturing far-infrared wavelengths: wavelengths longer than those of visible light, corresponding to the electromagnetic radiation of an object whose temperature is in the region of ten to several hundred degrees. The image displayed reflects the amount of this radiation received.
JF: We use a material based on vanadium dioxide. It has thermochromic properties, meaning that its ability to emit infrared rays will change according to the temperature. More precisely, we use a polymorph of this vanadium oxide – a particular crystalline form. When heated to above 70°C, its crystalline form changes and the material passes from 80% energy radiation to 40%, making it appear colder than it actually is on thermal cameras. 40% radiation from an object at 75°C will still correspond to less radiation than 80% of an object at 65°C. This is one of the two camouflage properties we aim to develop.
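A back-of-the-envelope check with the Stefan-Boltzmann law (our own illustration, not the researchers’ figures) confirms that ordering: the power radiated per unit area is the emissivity times \( \sigma T^4 \), so

\[
0.4 \times \sigma \,(348\ \mathrm{K})^4 \approx 333\ \mathrm{W/m^2}
\;<\;
0.8 \times \sigma \,(338\ \mathrm{K})^4 \approx 592\ \mathrm{W/m^2},
\]

with \( \sigma \approx 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \). The hotter object in its low-emissivity state radiates less than the cooler one at full emissivity, so it appears colder to a thermal camera.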
JF: Thermographic cameras that produce multicolor images are not the only cameras based on infrared emission. The other detection mechanism is the one used by cameras that produce grayscale night images. These cameras amplify near-infrared wavelengths, just beyond visible light, and display them in white in the image. Things that emit no infrared radiation are displayed in black. If there is not enough energy to amplify in the image, the camera emits a beam and records what is reflected back to it, a bit like sonar. In this case, even if the vanadium oxide material emits less radiation, it will still be detected because it will reflect the camera’s beam.
JF: We need to work on the surface texture of the materials and their structure. The approach we use consists in laser texturing the vanadium oxide material. We shape the surface to disperse the infrared rays emitted by the camera in different directions. To do this, we are working with Manutech-USD, which has a laser texturing platform capable of working on large and complex parts. Since the beam is not reflected back towards the camera, it is as though it had passed straight through the object. As far as the camera is concerned, if it receives no reflection there is nothing in front of it. Objects that should be displayed in white in the image without camouflage will instead be displayed in black.
JF: MAGIC is a response to the ASTRID call, whose projects are funded by the Directorate General for Armaments (DGA). The planned applications are therefore essentially military. We are working with Hexadrone to build a surveillance drone like those found in stores… a stealthy one. We also want to show that it is possible to reduce the thermal signature of engines and infantrymen. By adding a few tungsten atoms to the vanadium oxide material, the temperature for crystalline form change can be decreased from about 70°C to about 35°C. This is very practical for potential human applications. A normally dressed person would appear at 37°C on a camera, but a suit made of this special material could make them undetectable by making them appear much colder.
Institut Mines-Télécom Business School | Digital intelligence, Digital transformation, Data intelligence
Imed Boughzala is Professor of Information Systems and Director of the TIM (Technology, Information & Management, formerly DSI) department at IMT-BS. He holds a PhD in Computer Science from Pierre and Marie Curie University in Paris and is HDR-accredited in both Computer Science and Management Science. Imed has deep and rich international experience acquired through his research, lecturing and collaboration on major projects. He recently completed an Executive MBA at IMT-BS and the program “Management and Leadership in Higher Education” at the Harvard Graduate School of Education.
His research interests focus on Digital Intelligence and Digital Transformation. He is the founder of the SMART BIS (Smart Business Information Systems) research team and currently Director of the IS (Innovation Support) Lab, which brings together scholars from different areas working on the next generation of information systems. Since September 2018, he has co-headed the observatory of digital transformation in business schools and has been a member of the FNEGE’s labeling committees for pedagogical initiatives.
These “little neutral particles” are among the most mysterious in the universe. “Neutrinos have no electric charge, very low mass and move at a speed close to that of light. They are hard to study because they are extremely difficult to detect,” explains Richard Dallier, member of the KM3NeT team from the Neutrino group at Subatech laboratory[1]. “They interact so little with matter that only one particle out of 100 billion encounters an atom!”
Although their existence was first postulated in the 1930s by physicist Wolfgang Pauli, it was not confirmed experimentally until 1956, by American physicists Frederick Reines and Clyde Cowan, a discovery for which Reines was awarded the Nobel Prize in Physics in 1995. This was a small revolution for particle physics. “It could explain the excess of matter that enabled our existence. The Big Bang created as much matter as antimatter, but they mutually annihilated each other very quickly. So, there should not be any left! We hope that studying neutrinos will help us understand this imbalance,” Richard Dallier explains.
While there is still much to discover about these bashful particles, we do know that neutrinos exist in three forms or “flavors”: the electron neutrino, the muon neutrino and the tau neutrino. The neutrino is certainly an unusual particle, capable of transforming over the course of its journey. This phenomenon is called oscillation: “The neutrino, which can be generated from different sources, including the Sun, nuclear power plants and cosmic rays, is born as a certain type, takes on a hybrid form combining all three flavors as it travels and can then appear as a different flavor when it is detected,” Richard Dallier explains.
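In the standard two-flavor approximation (a textbook formula added here for illustration; it is not quoted in the article), the probability that a neutrino born as flavor \( \alpha \) is detected as flavor \( \beta \) after travelling a distance \( L \) with energy \( E \) is

\[
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

where \( \theta \) is the mixing angle and \( \Delta m^2 \) the difference of the squared masses. The dependence on \( \Delta m^2 \) is why observing oscillations implies that neutrinos have mass.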
The oscillation of neutrinos was first revealed in 1998 with the Super-Kamiokande experiment, a Japanese neutrino observatory, which also received the Nobel Prize in Physics, in 2015. This change in identity is key: it provides indirect evidence that neutrinos indeed have mass, albeit an extremely low one. However, another mystery remains: what is the mass hierarchy of these three flavors? The answer to this question would further clarify our understanding of the Standard Model of particle physics.
The singularity of neutrinos makes them a fascinating area of study. An increasing number of observatories and detectors dedicated to the subject are being installed at great depths, where the combination of darkness and concentration of matter is ideal. Russia has installed a detector at the bottom of Lake Baikal, and the United States at the South Pole. Europe, for its part, is working in the depths of the Mediterranean Sea. This fishing for neutrinos began in 2008 with the Antares experiment, a unique type of telescope that can detect even the faintest light crossing the depths of the sea. Antares then made way for KM3NeT, whose sensitivity is improved by several orders of magnitude. This experiment brings together nearly 250 researchers from around 50 laboratories and institutes, including four French laboratories. In addition to studying the fundamental properties of neutrinos, the collaboration aims to discover and study the astrophysical sources of cosmic neutrinos.
KM3NeT actually comprises two gigantic neutrino telescopes currently being installed at the bottom of the Mediterranean Sea. The first, called ORCA (Oscillation Research with Cosmics in the Abyss), is located off the coast of Toulon in France. Submerged at a depth of nearly 2,500 meters, it will eventually be composed of 115 strings attached to the seabed. “Each of these 200-meter flexible strings, spaced 20 meters apart, carries 18 optical modules: 45-centimeter spheres, spaced 9 meters apart, each containing 31 light sensors,” explains Richard Dallier, who is participating in the construction and installation of these modules. “This unprecedented density of detectors is required in order to study the properties of the neutrinos: their nature, their oscillations and thus their masses and ordering. The sources of neutrinos ORCA will focus on are the Sun and the terrestrial atmosphere, where neutrinos are generated in large numbers by the cosmic rays that bombard the Earth.”
The second KM3NeT telescope is ARCA (Astroparticle Research with Cosmics in the Abyss). It will be located 3,500 meters under the sea off the coast of Sicily. There will be twice as many strings, which will be longer (700 meters) and spaced further apart (90 meters), but with the same number of sensors per string. With a volume of over one cubic kilometer (hence the name KM3NeT, for Cubic Kilometre Neutrino Telescope), ARCA will be dedicated to searching for and observing the astrophysical sources of neutrinos, which are much rarer. A total of over 6,000 optical modules containing over 200,000 light sensors will be installed by 2022. These numbers make KM3NeT the largest detector in the world, on a par with its cousin IceCube in Antarctica.
Both ORCA and ARCA operate on the same principle, based on the indirect detection of neutrinos. When a neutrino encounters an atom of matter, whether in the air, the water or the Earth itself (which they easily travel right through), it can “deposit” its energy there. This energy is instantly transformed into the charged particle corresponding to the neutrino’s flavor: an electron, a muon or a tau. This “daughter” particle then continues its journey on the same path as the initial neutrino and at the same speed, emitting light in the medium it passes through, or itself interacting with atoms in the environment and disintegrating into other particles, which also radiate blue light.
“Since this is all happening at the speed of light, an extremely short light pulse of a few nanoseconds occurs. If the medium the neutrino is passing through is transparent, which is the case for the water of the Mediterranean Sea, and the path goes through the volume occupied by ORCA or ARCA, the light sensors will detect this extremely faint flash,” Richard Dallier explains. If several sensors are hit, the direction of the trajectory can be reconstructed and the energy and nature of the original neutrino determined. But regardless of the source, the probability of neutrino interactions remains extremely low: with a volume of 1 km3, ARCA only expects to detect a few neutrinos originating from the universe.
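To give a feel for the reconstruction step (a deliberately crude sketch of ours; the real KM3NeT analysis accounts for the Cherenkov emission angle, timing resolution and background noise), a straight-line track can be fitted to the positions and times of the modules that saw the flash:

```python
# Very simplified sketch (not the real KM3NeT reconstruction): if several
# optical modules record a flash, fitting hit position against hit time gives
# a straight-line track, i.e. the direction the daughter particle, and hence
# the neutrino, came from.

import numpy as np

def fit_track(hit_times, hit_positions):
    """Least-squares fit of position = origin + velocity * time, per axis."""
    t = np.asarray(hit_times)
    xyz = np.asarray(hit_positions)            # shape (n_hits, 3)
    A = np.column_stack([np.ones_like(t), t])
    coeffs, *_ = np.linalg.lstsq(A, xyz, rcond=None)
    origin, velocity = coeffs[0], coeffs[1]
    direction = velocity / np.linalg.norm(velocity)
    return origin, direction

if __name__ == "__main__":
    # Fake hits along a downward-going track, a few nanoseconds apart.
    times = [0.0, 5.0, 10.0, 15.0]                                   # ns
    positions = [[0, 0, 100], [1, 0, 98.8], [2, 0, 97.6], [3, 0, 96.4]]  # m
    print(fit_track(times, positions)[1])      # unit vector of the track
```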
Seen as cosmic messengers, these phantom particles open a window onto a violent universe. “Among other things, the study of neutrinos will provide a better understanding and knowledge of cosmic cataclysms,” says Richard Dallier. Collisions of black holes and neutron stars, supernovae, and massive stars that collapse all produce bursts of neutrinos that bombard us without being absorbed or deflected along the way. Light is therefore no longer the only messenger of the objects in the universe.
Neutrinos have therefore strengthened the arsenal of “multi-messenger” astronomy, involving the cooperation of a maximum number of observatories and instruments throughout the world. Each wavelength and particle contributes to the study of various processes and additional aspects of astrophysical objects and phenomena. “The more observers and objects observed, the greater the chances of finding something,” Richard Dallier explains. And in these extraterrestrial particles lies the possibility of tracing our own origins with greater precision.
[1] SUBATECH is a research laboratory jointly operated by IMT Atlantique, the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) of the CNRS, and the Université de Nantes.
Article written for I’MTech (in French) by Anne-Sophie Boutaud
Searching the Internet has become an automatic reflex for learning about any disease. From the common cold to the rarest diseases, patients find information about their condition through more or less specialized sites. Scientific publications have already shown that social networks and health forums are used by patients especially when they are diagnosed. However, the true usefulness of the Internet, apps or smart objects for patients remains unclear. To gain a better understanding of how new technology helps patients, the Impatients, Chroniques & Associés (ICA) collective contacted the Smart Objects and Social Networks Chair at Institut Mines-Télécom Business School. The study, conducted between February and July 2018, focused on people living with chronic disease and their use of digital technology. The results were presented on February 20, 2019 at the Cité des Sciences et de l’Industrie in Paris.
In all, 1,013 patients completed the questionnaire designed by the researchers. The data collected on technology usage shows that, overall, patients are not very attracted to smart objects: 71.8% of respondents reported using only the Internet, one to three times a month; 19.3% said they used both the Internet and mobile applications on a weekly basis; and only 8.9% used smart objects in addition to the Internet and apps.
Read on I’MTech Healthcare: what makes some connected objects a success and others a flop?
The study therefore shows that uses vary widely and that a certain proportion of patients fall into a “multi-technology” category. However, “contrary to what we might think, the third group, comprising the most connected respondents, is not necessarily made up of the youngest people,” indicates Christine Balagué, holder of the Smart Objects and Social Networks chair. In the 25-34 age group, the study found “almost no difference between the three technology use groups (20% of each use group is in this age group)“. The desire for digital health solutions is therefore not a generational issue.
The specificity of the study is that it cross-references the use of digital technology (Internet, mobile applications and smart objects) with variables commonly used in the literature to characterize patients’ behavior towards their health. This comparison revealed a new result: the patients who use technology the most are, on average, no more knowledgeable about their disease than patients who are not very connected. Nor are they any better at adopting preventive behavior related to their disease.
“On the other hand, the more connected patients are, the greater their ability is to take action in the management of their disease,” says Christine Balagué. Patients in the most connected category believe they are better able to make preventive decisions and to reassure themselves about their condition. However, technology has little impact on the patient-doctor relationship. “The benefit is relative: there is a difference between the benefit perceived by the patient and the reality of what digital tools provide,” concludes Christine Balagué.
Some of the criteria measured by the researchers nevertheless show a correlation with the degree of technology use and the number of technologies used. This is the case, for example, with patient empowerment. Notably, the most connected patients are also those who most frequently take the initiative to ask their doctor for information or to give their opinion about treatment. These patients also report being the most involved by the doctor in their medical care. On this point, the study concludes that:
“The use of technology[…] does not change the self-perception of chronically ill patients, who all feel equally knowledgeable about their disease regardless of their use of digital technology. On the other hand, access to this information may subtly change their position in their interactions with the medical and nursing teams, leading to a more positive perception of their ability to play a role in decisions concerning their health.”
Although information found on the Internet offers genuine benefits in the relationship with the medical profession, the use of technologies also has some negative effects, according to patient feedback. 45% believe that the use of digital technology has negative emotional consequences. “Patients find that the Internet reminds them of the disease on a daily basis, and that this increases stress and anxiety,” says Christine Balagué. This result may be linked to the type of use among the chronically ill. The vast majority of them generally search for stories from other people with similar pathologies, which frequently exposes them to the experiences of other patients and their relatives.
Personal stories are considered the most reliable source of information by patients, ahead of content provided by health professionals and patient associations, a fact attributable to the sheer volume of available information and its uneven quality. Three quarters of respondents indicated that it is difficult to identify and choose reliable information. This sense of mistrust is underlined by other data collected through the questionnaire: “71% believe that the Internet is likely to induce self-diagnosis errors.” In addition, a significant proportion of patients (48%) also express mistrust about how certain websites and mobile applications handle their privacy. This highlights the challenge for applications and websites of becoming more transparent about the use of personal data and respect for privacy, in order to gain users’ trust.
Read on I’MTech Ethical algorithms in health: a technological and societal challenge
The future development of dedicated web services and patient usage is an issue that researchers want to address. “We want to continue this work of collecting experiences to evaluate changes in use over time,” says Christine Balagué. The continuation of this work will also integrate other developing uses, such as telemedicine and its impact on patients’ quality of life. Finally, the researchers are also considering taking an interest in the other side: the doctors’ side. How do practitioners use digital technologies in their practice? What are the benefits in the relationship with the patient? By combining the results from patient and physician studies, the aim will be to obtain the most accurate portrait possible of patient-physician relationships and of treatment processes in the era of hyperconnectivity.
Francis Jutand: The notion of sovereignty can apply to individuals, companies or nations. To be sovereign is to be able to choose. This means being able to both understand and act. Sovereignty is therefore based on a number of components for taking action: technological development, economic and financial autonomy (and therefore power), and the ability to influence regulatory mechanisms. In addition to these three conditions, there is security, in the sense that being sovereign also means being in a space where you can protect yourself from the potential hostility of others. The fifth and final parameter of sovereignty for large geographical areas, such as nations or economic spaces, is the people’s ability to make their voices heard.
FJ: The five components of the ability to act transpose naturally into this field. Being sovereign in a digital world means having our own technology and being independent from major economic players in the sector, such as Google, and their huge financial capacity. It also means developing specific regulations on digital technology and being able to protect against cyber-attacks. As far as the general public is concerned, sovereignty consists in training citizens to understand and use digital technology in an informed way. Based on these criteria, three main zones of cyber sovereignty can be defined around three geographical regions: the United States, Europe and China.
FJ: The American zone is based on economic superpowers and a powerful national policy on security and technology operated by government agencies. On the other hand, its regulation of the cyber field is relatively weak. China relies on an omnipresent state with strict regulation and major investments. After lagging behind scientifically and industrially in this area, China has caught up over the past few years. Lastly, Europe has good technological skills in both industry and academia, but is not in a leading position. In its favor, the European zone of sovereignty has strong market power and pioneering regulations based on certain values, such as the protection of personal data. Its biggest weakness is a lack of economic leadership that could give rise to global digital players.
FJ: Europe and its member countries are already investing at a high level in the digital field, through the European Framework Programmes, as well as national programs and ongoing academic research. On the other hand, the small number of world-class companies in this field weakens the potential for research and fruitful collaborations between the academic and industrial worlds. The European Data Protection Board, which is composed of the national data protection authorities of the European Union member states, is another illustration of sovereignty work in the European zone. However, from the point of view of regulations concerning competition law and financial regulation, Europe is still lagging behind in the development of laws and is unassertive in their interpretation. This makes it vulnerable to lobbies as shown by the debates on the European directive on copyright.
FJ: Citizens are consumers and users of cyber services. They play a major role in this field, as most of their activities generate personal data. They are a driving force of the digital economy, which, we must remember, is one of the five pillars of sovereignty. This data, which directly concerns users’ identity, is also governed by regulations. Citizens’ expression is therefore very important in the constitution of an area of sovereignty.
FJ: Researchers, whether from IMT or other institutions, have insights to offer on cyber sovereignty. They are at the forefront of the development and control of new technology, which is also one of the conditions of sovereignty. They train students and work with companies to disseminate this technology. IMT and its schools are active in all these areas. We therefore also have a role to play, notably by using our neutrality to inform our parliamentarians. We made a first attempt in this direction with an event for deputies and senators on the theme of technological and regulatory sovereignty, where our researchers discussed the potential impacts of technology on citizens, businesses and the economy in general.