Infographic representing the interconnection between different systems

Machines surfing the web

There has been constant development in the interconnection of objects via the internet, a trend set to continue in the years to come. One solution for machines to communicate with each other is the Semantic Web. Here are some explanations of this concept.

“The Semantic Web gives machines similar web access to that of humans,” indicates Maxime Lefrançois, Artificial Intelligence researcher at Mines Saint-Étienne. Companies currently use this area of the web to gather and share information, in particular about users: it makes it possible to adapt product offers to consumer profiles, for example. At present, the Semantic Web occupies an important position in research on the Internet of Things, i.e. the interconnection of machines and objects connected via the internet.

By making machines work together, the Internet of Things can be a means of developing new applications, serving both individuals and professional sectors such as intelligent buildings or digital agriculture. These last two examples are also the subject of the CoSWoT[1] project, funded by the French National Research Agency (ANR). This initiative, in which Maxime Lefrançois is participating, aims to provide new knowledge on the use of the Semantic Web by devices.

To do so, the project’s researchers installed sensors and actuators in the INSA Lyon buildings on the LyonTech-la Doua campus, the Espace Fauriel building of Mines Saint-Étienne, and the INRAE experimental farm in Montoldre. These sensors record information such as the opening of a window or the temperature and CO2 levels in a room. Thanks to a digital representation of a building or block, scientists can build applications that use the information provided by the sensors, enrich it, and make decisions that are transmitted to actuators.

Such applications can measure the CO2 concentration in a room, and according to a pre-set threshold, open the windows automatically for fresh air. This could be useful in the current pandemic context, to reduce the viral load in the air and thereby reduce the risk of infection. Beyond the pandemic, the same sensors and actuators can be used in other cases for other purposes, such as to prevent the build-up of pollutants in indoor air.
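
In code, such an application boils down to a simple rule linking a sensor reading to an actuator command. The Python sketch below is purely illustrative: the threshold value and the read_co2/open_windows/close_windows functions are hypothetical stand-ins for whatever device interfaces a real deployment would expose.

```python
# Illustrative sketch only: the threshold and the device functions
# (read_co2, open_windows, close_windows) are hypothetical stand-ins.

CO2_THRESHOLD_PPM = 800  # example pre-set threshold, in parts per million

def ventilation_rule(read_co2, open_windows, close_windows):
    """Open the windows when CO2 exceeds the threshold; close them otherwise."""
    level = read_co2()  # latest reading from the room's CO2 sensor
    if level > CO2_THRESHOLD_PPM:
        open_windows()   # decision transmitted to the window actuators
    else:
        close_windows()
```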

A dialog with maps

The main characteristic of the Semantic Web is that it registers information in knowledge graphs: kinds of maps made up of nodes representing objects, machines or concepts, and arcs connecting them to one another, representing their relationships. Each node and arc is identified by an Internationalized Resource Identifier (IRI): a code that makes it possible for machines to recognize one another and to identify and control objects such as a window, or concepts such as temperature.
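
As a concrete illustration, here is how such a graph could be built in Python with the rdflib library. The IRIs below are invented for the example; a real project would typically reuse published vocabularies, such as the W3C SOSA/SSN ontology for sensors, identified by their own IRIs.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Invented namespaces, for illustration only.
EX = Namespace("http://example.org/building/")
VOC = Namespace("http://example.org/vocab#")

g = Graph()
g.add((EX.room12, RDF.type, VOC.Room))            # a node for a room
g.add((EX.sensor7, RDF.type, VOC.CO2Sensor))      # a node for a sensor
g.add((EX.sensor7, VOC.locatedIn, EX.room12))     # an arc: their relationship
g.add((EX.sensor7, VOC.lastReading,
       Literal(842, datatype=XSD.integer)))       # an arc to a measured value

print(g.serialize(format="turtle"))  # human-readable view of the graph
```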

Depending on the number of knowledge graphs built up and the amount of information they contain, a device will be able to identify objects and items of interest with varying degrees of precision. A graph that recognizes a temperature identifier will indicate, depending on its level of detail, the unit used to measure it. “By combining multiple knowledge graphs, you obtain a graph that is more complete, but also more complex,” declares Lefrançois. “The more complex the graph, the longer it will take for the machine to decrypt,” adds the researcher.

Means to optimize communication

The objective of the CoSWoT project is to simplify dialog between autonomous devices. It is a question of ‘integrating’ the complex processing linked with the Semantic Web into objects with low computing capabilities, and of limiting the amount of data exchanged in wireless communication to preserve their batteries. This represents a challenge for Semantic Web research. “It needs to be possible to integrate and send a small knowledge graph in a tiny amount of data,” explains Lefrançois. This optimization improves the speed of data exchanges and of the related decision-making, and contributes to greater energy efficiency.

With this in mind, the researcher is interested in what he calls ‘semantic interoperability’, with the aim of “ensuring that all kinds of machines understand the content of the messages that they exchange,” he states. Typically, a connected window produced by one company and a CO2 sensor developed by another must be able to understand each other’s messages. There are two approaches to achieving this objective. “The first is that machines use the same dictionary to understand their messages,” specifies Lefrançois. “The second involves ensuring that machines solve a sort of treasure hunt to find out how to understand the messages that they receive,” he continues. In this way, devices are not limited by language.

IRIs in service of language

Solving these treasure hunts is made possible by IRIs and the use of the web. “When a machine receives an IRI, it does not automatically need to know how to use it,” declares Lefrançois. “If it receives an IRI that it does not know how to use, it can find information on the Semantic Web to learn how,” he adds. This is analogous to how humans may look up expressions that they do not understand online, or learn how to say a word in a foreign language that they do not know.
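
In practice, ‘learning how’ often amounts to dereferencing the IRI: fetching the document it points to in a machine-readable RDF format. Below is a minimal sketch using Python’s requests library; the IRI in the usage comment is a placeholder, not a real vocabulary.

```python
import requests

def learn_about(iri: str) -> str:
    """Fetch an RDF description of an unknown IRI (a Linked Data lookup)."""
    response = requests.get(
        iri,
        headers={"Accept": "text/turtle"},  # ask for RDF rather than an HTML page
        timeout=10,
    )
    response.raise_for_status()
    return response.text  # Turtle text describing the resource, to be parsed

# Hypothetical usage:
# description = learn_about("http://example.org/vocab#CO2Sensor")
```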

However, for now, there are compatibility problems between various devices, due precisely to the fact that they are designed by different manufacturers. “In the medium term, the CoSWoT project could influence the standardization of device communication protocols, in order to ensure compatibility between machines produced by different manufacturers,” the researcher considers. It will be a necessary stage in the widespread roll-out of connected objects in our everyday lives and in companies.

While research firms are competing to best estimate the position that the Internet of Things will hold in the future, all agree that the world market for this sector will represent hundreds of billions of dollars within five years. As for the number of objects connected to the internet, there could be as many as 20 to 30 billion by 2030, i.e. far more than the number of humans. And with these objects likely to use the internet more than we do, optimizing their traffic is clearly a key challenge.

[1] The CoSWoT project is a collaboration between the LIMOS laboratory (UMR CNRS 6158, which includes Mines Saint-Étienne), LIRIS (UMR CNRS 5205), the Hubert Curien laboratory (UMR CNRS 5516), INRAE, and the company Mondeca.

Rémy Fauvel


AI-4-Child “Chaire” research consortium: innovative tools to fight against childhood cerebral palsy

In conjunction with the GIS BeAChild, the AI-4-Child team is using artificial intelligence to analyze images related to cerebral palsy in children. This could lead to better diagnoses, innovative therapies and progress in patient rehabilitation. But also a real breakthrough in medical imaging.

The original version of this article was published on the IMT Atlantique website, in the News section.

Cerebral palsy is the leading cause of motor disability in children, affecting nearly two out of every 1,000 newborns. And it is irreversible. The AI-4-Child chaire (French research consortium), managed by IMT Atlantique and the Brest University Hospital, is dedicated to fighting this dreaded disease, using artificial intelligence and deep learning, which could eventually revolutionize the field of medical imaging.

“Cerebral palsy is the result of a brain lesion that occurs around birth,” explains François Rousseau, head of the consortium, professor at IMT Atlantique and a researcher at the Medical Information Processing Laboratory (LaTIM, INSERM unit). “There are many possible causes – prematurity or a stroke in utero, for example. This lesion, of variable importance, is not progressive. The resulting disability can be more or less severe: some children have to use a wheelchair, while others can retain a certain degree of independence.”

Created in 2020, AI-4-Child brings together engineers and physicians. The result of a call for ‘artificial intelligence’ projects from the French National Research Agency (ANR), it operates in partnership with the company Philips and the Ildys Foundation for the Disabled, and benefits from various forms of support (Brittany Region, Brest Metropolis, etc.). In total, the research program has a budget of around €1 million for a period of five years.

François Rousseau, professor at IMT Atlantique and head of the AI-4-Child chaire (research consortium)

Hundreds of children being studied in Brest

AI-4-Child works closely with BeAChild*, the first French Scientific Interest Group (GIS) dedicated to pediatric rehabilitation, headed by Sylvain Brochard, professor of physical medicine and rehabilitation (MPR). Both structures are linked to the LaTIM lab (INSERM UMR 1101), housed within the Brest CHRU teaching hospital. The BeAChild team is also highly interdisciplinary, bringing together engineers, doctors, pediatricians and physiotherapists, as well as psychologists.

Hundreds of children from all over France and even from several European countries are being followed at the CHRU and at Ty Yann (Ildys Foundation). By bringing together all the ‘stakeholders’ – patients and families, health professionals and imaging specialists – on the same site, Brest offers a highly innovative approach, which has made it a reference center for the evaluation and treatment of cerebral palsy. This has enabled the development of new therapies to improve children’s autonomy and made it possible to design specific applications dedicated to their rehabilitation.

“In this context, the mission of the chaire consists of analyzing, via artificial intelligence, the imagery and signals obtained by MRI, movement analysis or electroencephalograms,” says Rousseau. These observations can be made from the fetal stage or during the first years of a child’s life. The research team is working on images of the brain (location of the lesion, possible compensation by the other hemisphere, link with the disability observed, etc.), but also on images of the neuro-musculo-skeletal system, obtained using dynamic MRI, which help to understand what is happening inside the joints.

‘Reconstructing’ faulty images with AI

But this imaging work is complex. The main pitfall is the poor quality of the images collected, due to motion blur or artifacts during acquisition. So AI-4-Child is trying to ‘reconstruct’ them, using artificial intelligence and deep learning. “We are relying in particular on good-quality views from other databases to achieve satisfactory resolution,” explains the researcher. Eventually, these methods should be applicable to routine images.

Significant progress has already been made. A doctoral student is studying images of the ankle obtained with dynamic MRI and ‘enriched’ by other images using AI: static images, but in very high resolution. “Despite a rather poor initial quality, we can obtain decent pictures,” notes Rousseau. Significant differences in the shape of the ankle bone structure were observed between patients and are being interpreted with the clinicians. The aim will then be to better understand the origin of these deformations and to propose adjustments to the treatments under consideration (surgery, toxin injections, etc.).

The second area of work for AI-4-Child is rehabilitation. Here again, imaging plays an important role: during rehabilitation courses, patients’ gait is filmed using infrared cameras and a system of sensors and force plates in the movement laboratory at the Brest University Hospital. The ‘walking signals’ collected in this way are then analyzed using AI. For the moment, the team is in the data acquisition phase.

Several areas of progress

The problem, however, is that a patient often does not walk in the same way during the course and when they leave the hospital. “This creates a very strong bias in the analysis,” notes Rousseau. “We must therefore check the relevance of the data collected in the hospital environment… and focus on improving the quality of life of patients, rather than the shape of their bones.”

Another difficulty is that the data sets available to the researchers are limited to a few dozen images – whereas some AI applications require several million, not to mention the fact that this data is not homogeneous, and that there are also losses. “We have therefore become accustomed to working with little data,” says Rousseau. “We have to make sure that the quality of the data is as good as possible.” Nevertheless, significant progress has already been made in rehabilitation. Some children are able to ride a bike, tie their shoes, or eat independently.

In the future, the AI-4-Child team plans to make progress in three directions: improving images of the brain, observing bones and joints, and analyzing movement itself. The team also hopes to have access to more data, thanks to a European data collection project. Rousseau is optimistic: “Thanks to data processing, we may be able to better characterize the pathology, improve diagnosis and even identify predictive factors for the disease.”

* BeAChild brings together the Brest University Hospital Centre, IMT Atlantique, the Ildys Foundation and the University of Western Brittany (UBO). Created in 2020 and formalized in 2022 (see the French press release), the GIS is the culmination of a collaboration that began some fifteen years ago on the theme of childhood disability.

Planning for the worst with artificial intelligence

Given that catastrophic events are rare by nature, it is difficult to prepare for them. However, artificial intelligence offers high-performing modeling and simulation tools, making it an excellent way to design, test and optimize the response to such events. At IMT Mines Alès, Satya Lancel and Cyril Orengo are both undertaking research on emergency evacuations, in the event of a dam breaking or a terrorist attack in a supermarket, for example.

“Supermarkets are highly complex environments in which individuals are saturated with information,” explains Satya Lancel, PhD student in Risk Science at Université Montpellier III and IMT Mines Alès. Her thesis, which she started over two years ago, is on the subject of affordance, a psychological concept that states that an object or element in the environment is able to suggest its own use. With this research, she wishes to study the link between the cognitive processes involved in decision-making and the functionality of objects in their environment.

In her thesis, Lancel specifically focuses on affordance in the case of an armed attack within a supermarket. She investigates, for example, how to optimize instructions to encourage customers to head towards emergency evacuation exits. “The results of my research could act as a foundation for future research and be used by supermarket brands to improve signage or staff training, in order to improve evacuation procedures”, she explains.

Lancel and her team obtained funding from the retail brand U to perform their experiments. This agreement allowed them to study the situational and cognitive factors involved in customer decision-making in several U stores. “One thing we did in the first part of my research plan was to observe customer behavior when we added or removed flashing lights at the emergency exits,” she describes. “We noticed that when there was active signage, customers were more likely to decide to head towards the emergency exits than when there was not,” says the scientist. This result suggests that signage plays a significant role in guiding people’s decision-making, even if they do not know the evacuation procedure in advance.

“Given that it is forbidden to perform simulations of armed attacks with real people, we opted for a multi-agent digital simulation,” explains Lancel. What is unique about this kind of simulation is that each agent involved is conceptualized as an autonomous entity with its own characteristics and behavioral model. In these simulations, the agents interact and influence each other with their behavior. “These models are now used more and more in risk science, as they are proving to be extremely useful for analyzing group behavior,” she declares.

To develop this simulation, Lancel collaborated with Vincent Chapurlat, digital systems modeling researcher at IMT Mines Alès. “The model we designed is a three-dimensional representation of the supermarket we are working on,” she indicates. In the simulation, aisles are represented by parallelepipeds, while customers and staff are represented by agents defined by points. By observing how agents gather and how the clusters they form move around, interact and organize themselves, it is possible to determine which group behaviors are most common in the event of an armed attack, no matter the characteristics of the individuals.
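
The sketch below gives an idea of what one step of such a multi-agent simulation can look like in Python. The exit coordinates, speeds and the move-toward-nearest-exit rule are invented for illustration; they are not the behavioral model used in the study.

```python
import random

EXITS = [(0.0, 5.0), (20.0, 5.0)]  # hypothetical emergency-exit positions

def step(agents, speed=1.0, noise=0.2):
    """Advance every agent one tick: head for the nearest exit, with noise.

    Each agent is an autonomous entity carrying its own state; group
    behavior (clusters, queues at exits) emerges from these local rules.
    """
    for agent in agents:
        x, y = agent["pos"]
        ex, ey = min(EXITS, key=lambda e: (e[0] - x) ** 2 + (e[1] - y) ** 2)
        dx, dy = ex - x, ey - y
        norm = max((dx ** 2 + dy ** 2) ** 0.5, 1e-9)
        agent["pos"] = (
            x + speed * dx / norm + random.uniform(-noise, noise),
            y + speed * dy / norm + random.uniform(-noise, noise),
        )

agents = [{"pos": (random.uniform(0, 20), random.uniform(0, 10))}
          for _ in range(50)]
step(agents)  # one simulated tick
```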

Representing the complexity of reality

Outside of supermarkets, Cyril Orengo, PhD student in Crisis Management at the Risk Science Laboratory at IMT Mines Alès, is studying population evacuation in the event of a dam failure. The case study chosen by Orengo is the Sainte-Cécile-d’Andorge dam, near the city of Alès. Based on digital modeling of the Alès urban area and its inhabitants, he plans to compare evacuation times for a range of scenarios and map the city roads that are likely to be blocked. “One of the aims of this work is to build a knowledge base that could be used in the future by researchers working on preventive evacuations,” indicates the doctoral student.

He, too, has chosen a multi-agent system to simulate evacuations, as this method makes it possible to assign individual parameters to agents and thereby produce situations close to a credible reality. “Among the variables selected in my model are certain socio-economic characteristics of the simulated population,” he specifies. “In a real-life situation, an elderly person may take longer to go somewhere than a young person: the multi-agent system makes it possible to reproduce this,” explains the researcher.

To generate a credible simulation, “you first need to understand the preventive evacuation process,” underlines Orengo, specifying the need “to identify the actors involved, such as citizens and groups, as well as the infrastructure, such as buildings and traffic routes, in order to produce a model to act as a foundation to develop the digital simulation”. As part of his work, the PhD student analyzed INSEE databases to try to reproduce the socioeconomic characteristics of the Alès population. Orengo used a specialized platform for building agent simulations to create his own. “This platform allows researchers without computer programming training to create models, controlling various parameters that they define themselves,” explains the doctoral student.

One of the limitations of this kind of simulation is computing power, which means only a certain number of variables can be taken into account. According to Orengo, his model still needs many improvements, including “integrating individual vehicles, public transport, decision-making processes relating to risk management and more detailed reproduction of human behaviors”, he specifies.

For Lancel, virtual reality could be an important addition, increasing participants’ immersion in the study: “By placing a participant in a virtual crowd, researchers could observe how they react to certain agents and their environment, which would allow them to refine their research,” she concludes.

Rémy Fauvel


“En route” to more equitable urban mobility, thanks to artificial intelligence

Individual cars represent a major source of pollution. But how can you transition away from using your own car when you live far from the city center, in an area with little access to public transport? Andrea Araldo, a researcher at Télécom SudParis, is undertaking a research project that aims to redesign city accessibility, to benefit those excluded from urban mobility.

The transport sector is responsible for 30% of greenhouse gas emissions in France. And when we look more closely, the main culprit appears clearly: individual cars, responsible for over half of the CO2 discharged into the atmosphere by all modes of transport.

To protect the environment, car drivers are therefore thoroughly encouraged to avoid using their car, instead opting for a means of transport that pollutes less. However, this shift is impeded by the uneven distribution of public transport in urban areas. Because while city centers are generally well connected, accessibility proves to be worse on the whole in the suburbs (where walking and waiting times are much longer). This means that personal cars appear to be the only viable option in these areas.

The MuTAS (Multimodal Transit for Accessibility and Sustainability) project, selected by the French National Research Agency (ANR) as part of the 2021 general call for projects, aims to reduce these accessibility inequalities at the scale of large cities. The idea is to provide the keys to offering a comprehensive, equitable and multimodal range of mobility options, combining public transport with fixed routes and schedules with on-demand transport services, such as chauffeured cars or rideshares. These modes of transport could pick up where buses and trains leave off in less-connected zones. “In this way, it is a matter of improving accessibility of the suburbs, which would allow residents to leave their personal car in the garage and take public transport, thereby contributing to reducing pollution and congestion on the roads,” says Andrea Araldo, researcher at Télécom SudParis, head of the MuTAS project, and formerly a driving school owner and instructor!

Improving accessibility without sending costs sky-high

But how can on-demand mobility be combined with the range of public transport, without leading to overblown costs for local authorities? The budget issue remains a central challenge for MuTAS. The idea is not to deploy thousands of vehicles on-demand to improve accessibility, but rather to make public transport more equitable within urban areas, for an equivalent cost (or with a limited increase).

This means that many questions must be answered, while respecting this constraint. In which zones should on-demand mobility services be added? How many vehicles need to be deployed? How can these services be adapted to different times throughout the day? And there are also questions regarding public transport. How can bus and train lines be optimized, to efficiently coordinate with on-demand mobility? Which are the best routes to take? Which stations can be eliminated, definitively or only at certain times?

To resolve this complex optimization issue, Araldo and his teams have put forward a strategy using artificial intelligence, in three phases.

Optimizing a graph…

The first involves modeling the problem in the form of a graph. In this graph, the points correspond to bus stops or train stations, with each line represented by a series of arcs, each with a certain trip time. “What must be noted here is that we are only using real-life, public data,” emphasizes Araldo. “Other research has been undertaken around these issues, but at a more abstract level. As part of MuTAS, we are using openly available, standardized data, provided by several cities around the world, including routes, schedules, trip times etc., but also population density statistics. This means we are modeling real public transport systems.” On-demand mobility is also added to the graph in the form of arcs, connecting less accessible areas to points in the network. This translates the idea of allowing residents far from the city center to get to a bus or train station using chauffeured cars or rideshares.

To optimize travel in a certain area, researchers start by modeling public transport lines with a graph.
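
In code, such a model is simply a weighted directed graph. Below is a toy version using the networkx library; the stop names and trip times are invented for illustration.

```python
import networkx as nx

# Toy model: nodes are stops or stations, arcs carry trip times in minutes.
G = nx.DiGraph()
G.add_edge("SuburbA", "StationB", time=12)  # on-demand feeder (car/rideshare)
G.add_edge("StationB", "Center", time=9)    # train line
G.add_edge("Center", "StationC", time=7)    # bus line
G.add_edge("StationC", "SuburbD", time=11)  # on-demand feeder

# Fastest route from a poorly served suburb to the center:
print(nx.shortest_path(G, "SuburbA", "Center", weight="time"))
```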

…using artificial intelligence

This modeled graph acts as the starting point for the second phase. In this phase, a reinforcement learning algorithm is introduced, a method from the field of machine learning. After several iterations, this is what will determine what improvements need to be made to the network, for example, deactivating stations, eliminating lines, adding on-demand mobility services, etc. “Moreover, the system must be capable of adapting its structure dynamically, according to shifts in demand throughout the day,” adds the researcher. “The traditional transport network needs to be dense and extended during peak hours, but it can contract significantly in off-peak hours, with on-demand mobility taking over for the last kilometers, which is more efficient for lower numbers of passengers.”

And that is not the only complex part. Various decisions influence each other: for example, if a bus line is removed from a certain place, more rideshares or chauffeured car services will be needed to replace it. So, the algorithm applies to both public transport and on-demand mobility. The objective will therefore be to reach an optimal situation in terms of equitable distribution of accessibility.

But how can this accessibility be evaluated? There are multiple possible methods, but the researchers have chosen two measures suited to graph optimization. The first is a ‘velocity score’, corresponding to the maximum distance that can be traveled from a departure point in a limited time (30 minutes, for example). The second is a ‘sociality score’, representing the number of people one can meet from a specific area, also within a limited time.
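
Both scores can be computed on a graph like the one above with a shortest-path search. The sketch below is one plausible reading of the two metrics (counting reachable stops as a stand-in for distance covered, and summing a per-stop population), not the project’s exact definitions.

```python
import networkx as nx

def velocity_score(G, origin, budget_min=30):
    """Proxy for the 'velocity score': stops reachable within the time budget."""
    times = nx.single_source_dijkstra_path_length(
        G, origin, cutoff=budget_min, weight="time")
    return len(times) - 1  # exclude the origin itself

def sociality_score(G, origin, population, budget_min=30):
    """'Sociality score': people reachable within the time budget."""
    times = nx.single_source_dijkstra_path_length(
        G, origin, cutoff=budget_min, weight="time")
    return sum(population.get(stop, 0) for stop in times)
```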

In concrete terms, the algorithm takes as its reference indicator the measured accessibility of the least accessible place in the area. Since the goal is to make transport options as equitable as possible, it will optimize this indicator (‘max-min’ optimization), while respecting certain constraints such as cost. To achieve this, it makes a series of decisions concerning the network, initially in a random way. Then, at the end of each iteration, by analyzing the flow of passengers, it calculates the associated ‘reward’: the improvement in the reference indicator. The algorithm stops when the optimum is reached, or else after a pre-determined period.
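
Schematically, the optimization loop looks like the sketch below. It replaces the reinforcement learning machinery with a bare accept-if-better loop, purely to show the max-min logic and the cost constraint; all three callbacks are hypothetical.

```python
def optimize_network(propose_change, min_accessibility, within_budget,
                     iterations=1000):
    """Keep the network whose *worst-served* place scores best ('max-min').

    propose_change()      -> a candidate network (close a station, add an
                             on-demand zone, ...); hypothetical callback.
    min_accessibility(n)  -> accessibility of the least accessible place,
                             i.e. the reference indicator / 'reward'.
    within_budget(n)      -> whether the candidate respects the cost limit.
    """
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = propose_change()      # initially (partly) random decisions
        if not within_budget(candidate):  # the cost constraint comes first
            continue
        score = min_accessibility(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```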

This approach will allow it to establish knowledge of its environment, associating each network structure (according to the decisions made) with the expected reward. “The advantage of such an approach is that once the algorithm is trained, the knowledge base can be used for another network,” explains Araldo. “For example, I can use the optimization performed for Paris as a starting point for a similar project in Berlin. This represents a valuable time-saver compared to traditional methods used to structure transport networks, in which each new project has to start from scratch.”

Testing results on (virtual) users in Ile-de-France

The final phase aims to validate the results obtained using a detailed model. While the models from the first phase aim to reproduce reality, they only represent a simplified version. This simplification is deliberate: the models are used over many iterations as part of the reinforcement learning process, and with a very high level of detail the algorithm would require a huge amount of computing power, or far too much processing time.

The third phase therefore involves first modeling the transport network of an urban area in fine detail (in this case, the Ile-de-France region), still using real-life data, but more detailed this time. To integrate all this information, the researchers use a simulator called SimMobility, developed at MIT in a project to which Araldo contributed. The tool makes it possible to simulate the behavior of populations at an individual level, with each person represented by an ‘agent’ with their own characteristics and preferences (activities planned during the day, trips to take, desire to reduce walking time or minimize the number of changes, etc.). It is based on the work of Daniel McFadden (Nobel Prize in Economics in 2000) and Moshe Ben-Akiva on ‘discrete choice models’, which make it possible to predict choices between multiple modes of transport.

With the help of this simulator and public databases (socio-demographic studies, road networks, numbers of passengers, etc.), Araldo and his team, in collaboration with MIT, will generate a synthetic population, representing Ile-de-France users, with a calibration phase. Once the model faithfully reproduces reality, it will be possible to submit it to the new optimized transport system and simulate user reactions. “It is important to always remember that it’s only a simulation,” reminds the researcher. “While our approach allows us to realistically predict user behavior, it certainly does not correspond 100% to reality. To get closer, more detailed analysis and deeper collaborations with transport management bodies will be needed.”

Nevertheless, results obtained could serve to support more equitable urban mobility and in time, reduce its environmental footprint. Especially since the rise of electric vehicles and automation could increase the environmental benefits. However, according to Araldo, “electric, self-driving cars do not represent a miracle solution to save the planet. They will only prove to be a truly eco-friendly option as part of a multimodal public transport network.”

Bastien Contreras


Sovereignty and digital technology: controlling our own destiny

Annie Blandin-Obernesser, IMT Atlantique – Institut Mines-Télécom

Facebook has an Oversight Board, a kind of “Supreme Court” that rules on content moderation disputes. Digital giants like Google are investing in the submarine telecommunications cable market. France has had to backpedal after choosing Microsoft to host the Health Data Hub.

These are just a few examples demonstrating that the way in which digital technology is developing poses a threat to the economic independence and cultural identity of the European Union and France. Sovereignty itself is being questioned: threatened by the digital world, but also finding its own form of expression there.

What is most striking is that major non-European digital platforms are appropriating aspects of sovereignty: a transnational territory, i.e. their market and site where they pronounce norms, a population of internet users, a language, virtual currencies, optimized taxation, and the power to issue rules and regulations. The aspect that is unique to the digital context is based on the production and use of data and control over information access. This represents a form of competition with countries or the EU.

Sovereignty in all its forms being questioned

The concept of digital sovereignty has matured since it was formalized around ten years ago as an objective to “control our own destinies online”. The current context is different to when it emerged. Now, it is sovereignty in general that is seeing a resurgence of interest, or even souverainism (an approach that prioritizes protecting sovereignty).

This topic has never been so politicized. Public debate is structured around themes such as state sovereignty regarding the EU and EU law, economic independence, or even strategic autonomy with regards to the world, citizenship and democracy.

In reality, digital sovereignty is built on the basis of digital regulation, controlling its material elements and creating a democratic space. It is necessary to take real action, or else risk seeing digital sovereignty fall hostage to overly theoretical debates. This means there are many initiatives that claim to be an integral part of sovereignty.

Regulation serving digital sovereignty

The legal framework of the online world is based on values that shape Europe’s path, specifically, protecting personal data and privacy, and promoting general interest, for example in data governance.

The text that best represents the European approach is the General Data Protection Regulation (GDPR), adopted in 2016, which aims to allow citizens to control their own data, similar to a form of individual sovereignty. This regulation is often presented as a success and a model to be followed, even if it has to be put in perspective.

New European digital legislation for 2022

The current situation is marked by proposed new digital legislation with two regulations, to be adopted in 2022.

This legislation aims to regulate platforms that connect service providers and users, or that rank or optimize content, goods or services offered or uploaded online by third parties: Google, Meta (Facebook), Apple, Amazon, and many others besides.

The question of sovereignty is also present in this reform, as shown by the debate around the need to focus on GAFAM (Google, Amazon, Facebook, Apple and Microsoft).

On the one hand, the Digital Markets Act (the forthcoming European legislation) includes strengthened obligations for “gatekeeper” platforms, on which intermediate and end users rely. This targets GAFAM first and foremost, even if other companies may also be concerned, like Booking.com or Airbnb. It all depends on what comes out of the current discussions.

And on the other hand, the Digital Services Act is a regulation for digital services that will structure the responsibility of platforms, specifically in terms of the illegal content that they may contain.

Online space, site of confrontation

Having legal regulations is not enough.

“The United States have GAFA (Google, Amazon, Facebook and Apple), China has BATX (Baidu, Alibaba, Tencent and Xiaomi). And in Europe, we have the GDPR. It is time to no longer depend solely on American or Chinese solutions!” declared French President Emmanuel Macron during an interview on December 8 2020.

Interview between Emmanuel Macron and Niklas Zennström (CEO of Atomico). Source: Atomico on Medium.

The international space is a site of confrontation between different kinds of sovereignty. Every individual wants to truly control their own digital destiny, but we have to reckon with the ambition of countries that demand the general right to control or monitor their online space, such as the United States or China.

The EU and/or its member states, such as France, must therefore take action and promote sovereign solutions, or else risk becoming a “digital colony”.

Controlling infrastructure and strategic resources

With all the focus on intermediary services, there is not enough emphasis placed on the industrial dimension of this topic.

And yet, the most important challenge resides in controlling vital infrastructure and telecommunications networks. The question of submarine cables, used to transfer 98% of the world’s digital data, receives far less media attention than the issue of 5G equipment and the controversy surrounding Huawei. However, it demonstrates the need to promote our cable industry in the face of the hegemony of foreign companies and the arrival of giants such as Google or Facebook in the sector.

The adjective “sovereign” is also applied to other strategic resources. For example, the EU wants to secure its supply of semiconductors, as it currently depends significantly on Asia. This is the purpose of the European Chips Act, which aims to create a European ecosystem for these components. For Ursula von der Leyen, “it is not only a question of competitiveness, but also of digital sovereignty.”

There is also the question of a “sovereign” cloud, which has been difficult to implement. There are many conditions required to establish sovereignty, including the territorialization of the cloud, trust and data protection. But with this objective in mind, France has created the label SecNumCloud and announced substantial funding.

Additionally, the adjective “sovereign” is used to describe certain kinds of data, for which states should not depend on anyone for their access, such as geographic data. In a general way, a consensus has been reached around the need to control data and access to information, particularly in areas where the challenge of sovereignty is greatest, such as health, agriculture, food and the environment. Development of artificial intelligence is closely connected to the status of this data.

Time for alternatives

Does all that mean facilitating the emergence of major European or national actors and/or strategic actors, start-ups and SMEs? Certainly, such actors will still need to show good intentions, compared to those that shamelessly exploit personal data, for example.

A purely European alternative is difficult to bring about. This is why partnerships are developing, even though they remain highly criticized, to offer cloud hosting for example, like the collaboration between Thales and OVHcloud in October 2021.

On the other hand, there is reason to hope. Open-source software is a good example of a credible alternative to American private technology firms. It needs to be better promoted, particularly in France.

Lastly, cybersecurity and cyberdefense are critical issues for sovereignty. The situation is critical, with attacks coming from Russia and China in particular. Cybersecurity is one of the major sectors in which France is greatly investing at present and positioning itself as a leader.

Sovereignty of the people

To conclude, it should be noted that challenges relating to digital sovereignty are present in all human activities. One of the major revelations occurred in 2005, in the area of culture, when Jean-Noël Jeanneney observed that Google had defied Europe by creating Google Books and digitizing the continent’s cultural heritage.

The recent period reconnects with this vision, with cultural and democratic issues clearly essential in this time of online misinformation and its multitude of negative consequences, particularly for elections. This means placing citizens at the center of mechanisms and democratizing the digital world, by freeing individuals from the clutches of internet giants, whose control is not limited to economics and sovereignty. The fabric of major platforms is woven from the human cognitive system, attention and freedom. Which means that, in this case, the sovereignty of the people is synonymous with resistance.

Annie Blandin-Obernesser, Law professor, IMT Atlantique – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).


Privacy as a business model

The original version of this article (in French) was published in quarterly newsletter no. 22 (October 2021) of the Chair “Values and Policies of Personal Information”.

The usual approach

The GDPR is the most visible text on this topic. It is not the oldest, but it is at the forefront for a simple reason: it includes huge sanctions (up to 4% of consolidated international group turnover for companies). Consequently, this regulation is often treated as a threat. We seek to protect ourselves from legal risk.

The approach is always the same: list all data processed, then find a legal framework that allows you to keep to the same old habits. This is what produces the long, dry texts that the end-user is asked to agree to with a click, most often without reading. And abracadabra, a legal magic trick – you’ve got the user’s consent, you can continue as before.

This way of doing things poses various problems.

  1. It implies that privacy is a costly position, a risk, that it is undesirable. Communication around the topic can create a disastrous impression. The message on screen says one thing (in general, “we value your privacy”), while reality says the opposite (“sign the 73-page-long contract now, without reading it”). The user knows very well when signing that everyone is lying. No, they haven’t read it. And no, nobody is respecting their privacy. It is a phony contract signed between liars.
  2. The user is positioned as an enemy. Someone who you need to get to sign a document, more or less forced, in which they undertake not to sue, is an enemy. It creates a relationship of distrust with the user.

But we could see these texts with a completely different perspective if we just decided to change our point of view.

Placing the user at the center

The first approach means satisfying the legal team (avoiding lawsuits) and the IT department (a few banners and buttons to add, but in reality nothing changes). What about trying to satisfy the end user?

Let us consider that privacy is desirable, preferable. Imagine that we are there to serve users, rather than trying to protect ourselves from them.

We are providing a service to users, and in so doing, we process their personal data. Not everything that is available to us, but only what is needed for said service. Needed to satisfy the user, not to satisfy the service provider.

And since we have data about the user, we may as well show it to them, and allow them to take action. By displaying things in an understandable way, we create a phenomenon of trust. By giving power back to the user (to delete and correct, for example) we give them a more comfortable position.

You can guess what is coming: by placing the user back in the center, we fall naturally and logically back in line with GDPR obligations.

And yet, this part of the legislation is far too often misunderstood. The GDPR allows for a certain number of cases under which it is authorized to manipulate personal user data. Firstly, upon their request, to provide the service that is being sought. Secondly, for a whole range of legal obligations. Thirdly, for a few well-defined exceptions (research, police, law, absolute emergency, etc.). And finally, if there really is no good reason, you have to ask explicit consent from the user.

If we are asking the user’s consent, it is because we are in the process of damaging their privacy in a way that is not serving them. Consent is not the first condition of all personal data processing. On the contrary, it is the last. If there really is no legitimate motive, permission must be asked before processing the data.

Once this point has been raised, the key objection remains: the entire economic model of the digital world involves pillaging people’s private lives, to model and profile them, sell targeted advertising for as much money as possible, and predict user behavior. In short, if you want to exist online, you have to follow the American model.

Protectionism

Let us try another approach. Consider that the GDPR is a text that protects Europeans, imposing our values (like respect of privacy) in a world that ignores them. The legislation tells us that companies that do not respect these values are not welcome in the European Single Market. From this point of view, the GDPR has a clear protectionist effect: European companies respect the GDPR, while others do not. A European digital ecosystem can come into being with protected access to the most profitable market in the world.

From this perspective, privacy is seen as a positive thing for both companies and users. A bit like how a restaurant owner handles hygiene standards: a meticulous, serious approach is needed, but it is important to do so to protect customers, and it is in their interest to have an exemplary reputation. Furthermore, it is better if it is mandatory, so that the bottom-feeders who don’t respect the most basic rules disappear from the market.

And here, it is exactly the same mechanism. Consider that users are allies and put them back in the center of the game. If we have data on them, we may as well tell them, show them, and so on.

Here, a key element enters in play. Because, as long as Europe’s digital industry remains stuck on the American model and rejects the GDPR, it is in the opposite position. The business world does not like to comply with standards when it does not understand their utility. It debates with inspecting authorities to request softer rules, delays, adjustments, exceptions, etc. And so, it asks that the weapon created to protect European companies be disarmed and left on standby.

It is a Nash equilibrium: it is in the interest of all European companies to use the GDPR’s protectionist aspect to their advantage, but each believes that if it moves first, it will lose out to those who do not respect the standards. Normally, getting out of this kind of toxic equilibrium takes a market regulation initiative; ideally, a concerted effort to stimulate movement in the right direction. For now, the closest thing to a regulatory initiative is the increasingly high sanctions being handed out all over Europe.

Standing out from the crowd

Of course, the digital reality of today is often not that simple. Data travels and changes hands, collected in one place but exploited in another. To successfully show users the processing of their data, many things often need to be reworked. The process needs to be focused on the end user rather than on the activity.

And even so, there are some cases where this kind of transparent approach is impossible. For example, the data that is collected to be used for targeted ad profiling. This data is nearly always transmitted to third parties, to be used in ways that are not in direct connection with the service that the user subscribed to. This is the typical use-case for which we try to obtain user consent (without which the processing is illegal) but where it is clear that transparency is impossible and informed consent is unlikely.

Two major categories are taking shape. The first includes digital services that can place the user at the center, and present themselves as allies, demonstrating a very high level of transparency. And the second represents digital services that are incapable of presenting themselves as allies.

So clearly, a company’s position on the question of privacy can be a positive feature that sets them apart. By aiming to defend user interests, we improve compliance with regulation, instead of trying to comply without understanding. We form an alliance with the user. And that is precisely what changes everything.

Benjamin Bayart

David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since completing his studies in 1997, Gesbert has been working on MIMO, a telecommunications technique developed in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.

In 2001, Gesbert moved to Norway, where he taught for two years as an adjunct professor in the IT department at the University of Oslo. One year later, he published an article showing that complex propagation environments favor the functioning of MIMO. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment with many buildings, cars and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.
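
The gain the researcher describes can be quantified with the textbook MIMO capacity formula, C = log2 det(I + (SNR/Nt) · H Hᴴ), where H is the channel matrix between the Nt transmit and Nr receive antennas. The Python sketch below evaluates it for random Rayleigh channels, a common model for rich-scattering environments; it is a generic illustration, not a reproduction of Gesbert’s models.

```python
import numpy as np

def avg_mimo_capacity(n_tx, n_rx, snr_linear, trials=2000, seed=0):
    """Average capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO channel."""
    rng = np.random.default_rng(seed)
    caps = []
    for _ in range(trials):
        # Complex Gaussian channel matrix, unit average power per entry.
        H = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        M = np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(M).real))
    return float(np.mean(caps))

# At 20 dB SNR (snr_linear = 100), capacity grows roughly with min(n_tx, n_rx):
print(avg_mimo_capacity(1, 1, 100.0))  # single antenna pair
print(avg_mimo_capacity(4, 4, 100.0))  # roughly four times higher
```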

In 2003, he joined EURECOM, where he became a professor and five years later, head of the Mobile Communications department. There, he has continued his work aiming to improve MIMO. His research has shown him that base stations — also known as relay antennas — could be useful to improve the performance of this mechanism. By using antennas from multiple relay stations far apart from each other, it would be possible to make them work together and produce a giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still being performed at present to make this mechanism usable.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, which takes its name from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to position themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has destroyed the network infrastructure.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes of the receivers or base stations that are intended to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct flaws in receivers with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that to cooperate, agents like radio receivers or relay stations make decisions based on their own data, which sometimes has to be set aside so that they can be guided by the decisions of agents with access to better information than them. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient by reducing the number of times that transmissions take place, by restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.

Rémy Fauvel


Governments, banks, and hospitals: all victims of cyber-attacks

Hervé Debar, Télécom SudParis – Institut Mines-Télécom

Cyber-attacks are not a new phenomenon. The first computer worm distributed via the Internet, known as the “Morris worm” after its creator, infected 10% of the 60,000 computers connected to the Internet at the time, back in 1988.

Published in 1989, the book The Cuckoo’s Egg recounted a true story of computer espionage. Since then, there have been any number of malicious events, whose multiple causes have evolved over time. The initial motivation of many hackers was their curiosity about this new technology, which was largely out of the reach of ordinary people at the time. This curiosity was replaced by the lure of financial reward, leading first to messaging campaigns encouraging people to buy products online, and subsequently to denial-of-service attacks.

Over the past few years, there have been three main motivations:

  • Direct financial gain, most notably through the use of ransomware, which has claimed many victims.
  • Espionage and information-gathering, mostly state-sponsored, but also in the private sphere.
  • Data collection and manipulation (normally personal data) for propaganda or control purposes.

These motivations have been associated with two types of attack: targeted attacks, where hackers select their targets and find ways to penetrate their systems, and large-scale attacks, where the attacker’s aim is to claim as many victims as possible over an extended period of time, as their gains are directly proportional to their number of victims.

The era of ransomware

Ransomware is a type of malware which gains access to a victim’s computer through a back door before encrypting their files. A message is then displayed demanding a ransom in exchange for decrypting these files.

The Kaseya attack

In July 2021, an attack was launched through Kaseya’s IT management software, which is used by service providers working for several store chains. It affected the cloud-based part of the service and shut down the payment systems of several retail chains.

The Colonial Pipeline attack

One recent example is the attack on the Colonial Pipeline, an oil pipeline which supplies the eastern United States. The attack took down the software used to control the flow of oil through the pipeline, leading to fuel shortages at petrol stations and airports.

This is a striking example because it affected a visible infrastructure and had a significant economic impact. However, infrastructure in banks, factories and hospitals regularly falls victim to this phenomenon as well. It should also be noted that these attacks are very often destructive, and that paying the ransom is not always sufficient to guarantee the recovery of one’s files.

Unfortunately, such attacks look set to continue, at least in the short term, given the financial rewards for the perpetrators: some victims pay the ransom despite the legal and ethical questions this raises. Insurance mechanisms protecting against cyber-crime may have a detrimental effect, as the payment of ransoms only encourages hackers to continue. Governments have also introduced controls on the cryptocurrencies often used to pay these ransoms, in order to make payments more difficult. Paradoxically, however, payments made using cryptocurrency can be traced in a way that would be impossible with traditional methods of payment. We can therefore hope that this type of attack will become less profitable and riskier for hackers, leading to a decline in the phenomenon.

Targeted, state-sponsored attacks

Infrastructure, including sovereign infrastructure (economy, finance, justice, etc.), is frequently controlled by digital systems. As a result, we have seen the development of new practices, sponsored either by governments or extremely powerful players, which implement sophisticated methods over an extended time frame in order to attain their objectives. Documented examples include the Stuxnet/Flame attack on Iran’s nuclear program, and the SolarWinds software hack.

SolarWinds

The attack targeting SolarWinds and its Orion software is a textbook example of the degree of complexity that certain perpetrators can bring to an attack. As a network management tool, Orion plays a pivotal role in the running of computer systems and is used by many major companies as well as the American government.

The initial attack took place between January and September 2019, targeting SolarWinds’ compilation environment. Between the fall of 2019 and February 2020, the attacker interacted with this environment, adding functionality to it. In February 2020, this access enabled the introduction of a Trojan horse called “Sunburst”, which was subsequently incorporated into Orion updates. In this way, it became embedded in the systems of SolarWinds’ clients, infecting as many as 18,000 organizations. The exploitation phase began in late 2020, when Sunburst downloaded and ran further malicious code, eventually allowing the attacker to breach the Office365 cloud environments used by the compromised companies. The malicious activity was first detected in December 2020, with the theft of software tools from the security company FireEye.

The fallout continued throughout 2021 and has had significant impacts, underlining both the complexity and the longevity of certain attacks. American intelligence agencies believe the attack to be the work of the SVR, Russia’s foreign intelligence service, which denies the accusation. The strategic importance of certain targets is likely to drive further attacks of this deep, targeted kind. The vital role played by digital tools in the running of our critical infrastructure will doubtless encourage states to develop cyber-weapons, a phenomenon likely to grow in the coming years.

Social control

Revelations surrounding the Pegasus software developed by NSO have shown that certain countries can benefit significantly from compromising their adversaries’ IT equipment (including smartphones).

The example of Tetris

Tetris is the name given to a tool used (reportedly by the Chinese government) to infiltrate online chat rooms and reveal the identities of possible political opponents. It has been deployed on 58 sites and uses relatively complex methods to steal visitors’ identities.

“Zero-click” attacks

The Pegasus revelations shed light on what are known as “zero-click” attacks. Many attacks on messaging clients or browsers assume that the victim will click a link, and that this click will trigger the infection. With zero-click attacks, targets are infected without any action on their part. One recent example is ForcedEntry (CVE-2021-30860), a vulnerability affecting the iMessage app on iPhones.

Like many others, this application accepts data in a wide range of formats and must carry out complex operations in order to present it elegantly to users on a small screen. This complexity broadens the attack surface. An attacker who knows a victim’s phone number can send them a malicious message, which triggers the infection as the phone processes it. Certain vulnerabilities even make it possible to delete any visible traces of the message having been received, so as not to alert the target.

Despite the efforts to make IT platforms harder to hack, it is likely that certain states and private companies will remain capable of hacking into IT systems and connected objects, either directly (via smartphones, for example) or via the cloud services to which they are connected (e.g. voice assistants). This takes us into the world of politics, and indeed geopolitics.

The biggest problem with cyber-attacks remains identifying their origin and the person or organization behind them. This is made all the more difficult by attackers covering their tracks, something the Internet gives them multiple opportunities to do.

How can you prevent an attack?

The best way of preventing an attack is to install the latest updates for systems and applications, and ideally to ensure that they are installed automatically. Most computers, phones and tablets can be updated monthly, or even more frequently. Activating existing means of protection, such as firewalls and anti-virus software, will also eliminate most threats.

Backing up your work regularly is essential, whether onto a hard drive or into the cloud, as is disconnecting from these back-ups once they have been completed. Back-up copies are only useful if they are kept separate from your computer; otherwise, ransomware will attack your back-up drive as well as your main drive. Backing up twice, or keeping key information such as the passwords to your main applications (messenger accounts, online banking, etc.) on paper, is another must.
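As a concrete illustration, here is a minimal backup sketch in Rust; the source and destination paths are hypothetical placeholders, and a real backup tool would also verify the copies and handle errors in more detail. It copies a folder tree to an external drive, which should then be disconnected:

    use std::fs;
    use std::io;
    use std::path::Path;

    // Recursively copy every file under `src` into `dst`, preserving layout.
    fn copy_tree(src: &Path, dst: &Path) -> io::Result<()> {
        fs::create_dir_all(dst)?;
        for entry in fs::read_dir(src)? {
            let entry = entry?;
            let target = dst.join(entry.file_name());
            if entry.file_type()?.is_dir() {
                copy_tree(&entry.path(), &target)?; // descend into subfolders
            } else {
                fs::copy(entry.path(), &target)?; // overwrite stale copies
            }
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        // Hypothetical paths: a documents folder and the mount point of an
        // external drive. Disconnect the drive once the copy has finished.
        copy_tree(Path::new("/home/user/documents"),
                  Path::new("/mnt/backup/documents"))
    }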

Digital tools should also be used with caution. A simple rule of thumb applies: if something seems too good to be true in the real world, the same almost certainly holds in the virtual world. By paying attention to the messages that appear on our screens and looking out for spelling mistakes or odd turns of phrase, we can often spot unusual behavior on the part of our computers and tablets and check their status.

Lastly, users must be aware that certain activities are risky. Unofficial app stores and downloads of executables to obtain unlicensed software often conceal malware. VPN applications, widely used to watch channels from other regions, are also popular attack vectors.

What should you do if your data is compromised?

Being compromised or hacked is highly stressful, and hackers deliberately add to that stress by putting pressure on their victims and sending them alarming messages. It is crucial to keep a cool head and to use a second device, such as another computer or a phone, to look up tools and advice for dealing with the compromised machine.

It is essential to return the compromised machine to a healthy state. This means a full system recovery, without trying to retrieve anything from the previous installation, in order to prevent reinfection. Before recovery, you must analyze your back-up to make sure that no malicious code has been transferred to it, which is why it is useful to know where the infection came from in the first place.

Unfortunately, the loss of a few hours of work has to be accepted; the priority is to find the quickest and safest way of getting up and running again. Paying a ransom is often pointless, given that many ransomware programs are incapable of decrypting files. When decryption is possible, a free tool to do it can often be found, provided by security software developers. The lesson is to back up our work more frequently and more extensively.

Finally, if you lack in-house cybersecurity expertise, it is highly beneficial to obtain assistance in developing an approach that includes risk analyses, protective mechanisms, the exclusive use of certified cloud services, and regular audits carried out by certified professionals capable of detecting and handling cybersecurity incidents.

Hervé Debar, Director of Research and PhDs, Deputy Director of Télécom SudParis.

This article has been republished from The Conversation under a Creative Commons licence. Read the original article (in French).

Positive technology and stress

Can technology combat chronic stress?

People can be exposed to stressors on a regular basis, especially in uncertain contexts such as the current health situation. To prevent a state of stress from becoming chronic and causing mental health problems, approaches involving positive technologies could help people improve their resilience. Anuragini Shirish, a researcher at Institut Mines-Télécom Business School, describes her work on this subject.

Why is it important to reduce stress in people in general?

Anuragini Shirish: According to the latest estimates, in 2017 some 792 million people worldwide were living with mental health disorders, including 284 million with anxiety and 264 million with depression. The physiological state of chronic stress is a major risk factor for the development of these conditions. Avoiding – or at least limiting – chronic stress could therefore significantly reduce the risk of developing them and improve people’s living conditions in general.

How do people develop a state of chronic stress?

AS: We have made great strides in our understanding of the mechanisms that induce stress. Stress was formerly thought to be caused by repeated exposure to stressors, but now – especially in light of evolutionary neurobiology theories – stress is generally considered to be a default response to dangerous situations, which is inhibited by the prefrontal cortex when people perceive a sense of security. The recent “Generalized Uncertainty Theory of Stress” states that stress originates from a feeling of permanent insecurity in individuals.

How has the COVID-19 pandemic influenced individual and collective situations of chronic stress?

AS: The COVID-19 pandemic has triggered a general feeling of insecurity in many aspects, including one’s own health and that of one’s loved ones, financial stability and job security. Many people have been affected by situations of chronic stress, which has led to a significant increase in mental illnesses. Uncertainty and stress drive people to seek out responses. However, the information they find is sometimes inadequate and may even be dangerous at individual and collective levels. It is therefore important to consider how to guide these responses, especially in the context of the pandemic.

Are you suggesting the use of technology to reduce stress in a holistic way?

AS: “Positive” technology sets out to improve individual and collective living conditions. In this case, such technology can be designed to improve people’s mental states. There are several types of positive technology, many of which now consist of mobile applications, which means that they can be made available to a large portion of the population.

In concrete terms, what technological tools could help to reduce stress?

AS: This is precisely the purpose of the analysis we are seeking to provide. We have defined three types of stress-response behaviors. Certain behaviors may be favored, depending on the individuals concerned and the context.

“Hedonic” behavior seeks to reduce stress through an immediate distraction. The aim is to enjoy a brief moment of pleasure. Positive hedonic technologies provide a very rapid response to stress. Examples include video games and television series. However, their stress-reducing effects are generally short-lived. Such solutions are of fleeting benefit and generally teach people very little about how to limit their future stress.

“Social” behavior reduces stress through social interaction. Its effects last longer than hedonic behavior because people can share their emotions, help and advise each other with regard to common goals. However, the benefits remain temporary. During lockdowns, meetings of friends or family by videoconference were examples of how social positive technology facilitated responses to individual and group stress.

“Eudaimonic” behavior is related to the search for meaning and purpose. It is based on the principle of personal growth and development and helps to develop a better response to stress over time. This type of behavior is also the most difficult to master, as it requires a more substantial investment in terms of time and energy, and we would like to see positive technology increasingly used in this area. Facilitating access to eudaimonic behaviors could promote better ways to combat stress and mental health problems on the societal level.

How does a positive eudaimonic technology work?

AS: Positive eudaimonic technologies may be based on different approaches. For example, many current applications provide support for meditation, whose mental health benefits are now widely accepted. Applications built around a learning process involving personal achievement can also be considered eudaimonic technologies. Technologies can likewise be designed for initially hedonic or social purposes, to make them easier to adopt, and then guide users toward eudaimonic uses in a second phase. The recent Heartintune application is an example of this type of approach.

What are the prospects for the development of positive technologies at the societal level?

AS: Various types of positive technologies already exist, and our next challenge is to promote their development and widespread use in order to boost resilience. We believe that the best way to do this is to use technology to promote more eudaimonic behaviors.

This could be a particularly important issue to raise at the World Health Summit in Berlin at the end of October 2021, which will focus on issues including the potential contributions of innovations and technology to the resolution of health problems.

Antonin Counillon

zero-click attacks

Zero-click attacks: spying in the smartphone era

Zero-click attacks exploit security breaches in smartphones in order to hack into a target’s device without the target having to do anything. They are now a threat to everyone, from governments to medium-sized companies.

“Zero-click attacks are not a new phenomenon”, says Hervé Debar, a researcher in cybersecurity at Télécom SudParis. “In 1988 the first computer worm, named the “Morris worm” after its creator, infected 6,000 computers in the USA (10% of the internet at the time) without any human intervention, causing damage estimated at several million dollars.” By connecting to mail servers that had to remain openly accessible, the program exploited weaknesses in their software to infect them. It can be argued that this was one of the very first zero-click attacks, a type of attack which exploits security breaches in target devices without the victim having to do anything.

There are two reasons why this type of attack is now so easy to carry out on smartphones. Firstly, the protective mechanisms on these devices are not as effective as those on computers. Secondly, presenting videos and images requires more complex processing, meaning that the code used to display such content is often more complex than its desktop equivalent. This makes it easier for attackers to find and exploit security breaches in order to plant malware. As Hervé Debar explains, “attackers must, however, know certain information about their target – such as their mobile number or their IP address – in order to identify their phone. This is a targeted type of attack which is difficult to deploy on a larger scale as this would require collecting data on many users.”

Zero-click attacks tend to follow the same pattern: the attacker sends their target a message containing specific content, which is received in an app. This may be a sound file, an image, a video, a GIF or a PDF file containing malware. Once the message has been received, the recipient’s phone processes it using apps that display the content without the user having to click on anything. While these applications are running, the attacker exploits breaches in their code to run programs that install spy software on the target device, without the victim knowing.
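To make this pattern concrete, here is a toy sketch in Rust, not modeled on any real messaging app: the incoming message carries an attacker-controlled length field, and a parser that trusted it would read past the end of the buffer, whereas a checked read rejects the malformed attachment.

    // First byte of the message is an attacker-controlled "length" field.
    fn parse_attachment(msg: &[u8]) -> Option<&[u8]> {
        let claimed_len = *msg.first()? as usize;
        // A careless parser would read claimed_len bytes regardless; the
        // checked slice returns None instead of over-reading the buffer.
        msg.get(1..1 + claimed_len)
    }

    fn main() {
        let malicious = [200u8, 1, 2, 3]; // claims 200 payload bytes, carries 3
        match parse_attachment(&malicious) {
            Some(body) => println!("attachment accepted: {body:?}"),
            None => println!("malformed attachment rejected"),
        }
    }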

Zero-days: vulnerabilities with economic and political impact

Breaches exploited in zero-click attacks are known as “zero-days”: vulnerabilities which are unknown to the manufacturer or which have yet to be corrected. There is now a global market for the detection of these vulnerabilities: the zero-day market, made up of companies that commission hackers to identify breaches. Once a breach has been identified, the hacker produces a document explaining it in detail, and the company that commissioned the document often pays several thousand dollars to get its hands on it. In some cases the manufacturer itself might buy such a document in an attempt to fix the breach. But it may also be bought by another company looking to sell the breach to its clients – often governments – for espionage purposes. According to Hervé Debar, between 100 and 1,000 vulnerabilities are detected on devices each year.

Zero-click attacks are regularly carried out for theft or espionage purposes. For theft, the aim may be to validate a payment made by the victim in order to divert their money. For espionage, the goal might be to recover sensitive data about a specific individual. The most recent example is the Pegasus affair, which involved around 50,000 potential victims, including politicians and media figures. “These attacks may be a way of uncovering secret information about industrial, economic or political projects. Whoever is responsible is able to conceal themselves and to make it difficult to identify the origin of the attack, which is why they’re so dangerous”, stresses Hervé Debar. But it is not only governments and multinationals that are affected by this sort of attack – small and medium-sized companies are too. They are particularly vulnerable because, owing to a lack of financial resources, they often have no IT professionals running their systems, unlike major organisations.


More secure computer languages

But there are things that can be done to limit the risk of such attacks. According to Hervé Debar, “the first thing to do is use your common sense. Too many people fall into the trap of opening suspicious messages.” Personal phones should also be kept separate from work phones, as this prevents attackers from gaining access to all of a victim’s data at once. Another useful habit is to back up your files onto an external hard drive. “By transferring your data onto an external hard drive, it won’t only be available on the network. In the event of an attack, you will safely be able to recover your data, provided you disconnected the disc after backing up.” To protect against attacks, organisations may also set up intrusion detection systems (IDS) or intrusion prevention systems (IPS) to monitor flows of data and access to information.
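As a rough flavour of what such a system does, the toy Rust sketch below counts the events generated by each source address in a hypothetical sensor log and raises an alert above an arbitrary threshold; real IDS/IPS products inspect live traffic and apply far richer rules.

    use std::collections::HashMap;

    fn main() {
        // Hypothetical log of source addresses seen by a network sensor.
        let events = ["10.0.0.5", "10.0.0.5", "10.0.0.7",
                      "10.0.0.5", "10.0.0.5", "10.0.0.5"];
        let threshold = 4; // arbitrary alert threshold for this sketch

        // Count the events attributed to each source address.
        let mut counts: HashMap<&str, u32> = HashMap::new();
        for ip in events {
            *counts.entry(ip).or_insert(0) += 1;
        }

        // Flag any source whose activity exceeds the threshold, much as a
        // simple intrusion detection rule would.
        for (ip, n) in &counts {
            if *n >= threshold {
                println!("alert: {ip} generated {n} events");
            }
        }
    }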

In the fight against cyber-attacks, researchers have also developed safer programming languages. Ada, a language dating back to the 1980s, is now used in aeronautics, railways and aviation-safety systems. For the past ten years or so, the Rust programming language has been used to solve buffer-memory management problems often encountered with C and C++, languages widely used in the development of operating systems. “These new languages are better controlled than traditional programming languages. They feature automatic protective mechanisms to prevent errors committed by programmers, eliminating certain breaches and certain types of attack.” However, “writing programs takes time, requiring significant financial investment on the part of companies, which they aren’t always willing to provide. This can result in programming errors leading to breaches which can be exploited by malicious individuals or organisations.”
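A minimal, illustrative sketch of the difference: in C, reading past the end of a buffer is undefined behaviour that an attacker can exploit, whereas the equivalent access in Rust is checked.

    fn main() {
        let buffer = [0u8; 4];
        let index = 7; // deliberately out of bounds

        // In C, buffer[7] would silently read adjacent memory. In Rust,
        // buffer[index] compiles but panics at runtime with an explicit
        // "index out of bounds" error instead of leaking data. The checked
        // accessor below makes the failure explicit and recoverable:
        match buffer.get(index) {
            Some(byte) => println!("read byte {byte}"),
            None => println!("out-of-bounds access rejected (index {index})"),
        }
    }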

Rémy Fauvel