Anne-Sophie Taillandier: New member of the Academy of Technologies

Director of Teralab, IMT’s Big Data and AI platform, since 2015, Anne-Sophie Taillandier was elected a member of the Academy of Technologies in March 2022. This election recognizes her work developing projects on data and artificial intelligence at the national and European levels.

Newly elected to the Academy of Technologies, Anne-Sophie Taillandier has for seven years been Director of Teralab, a platform created by IMT in 2012 that specializes in Big Data and Artificial Intelligence. She was drawn to a scientific career because she “always found mathematics enjoyable,” she says. “This led me to study science, first in an engineering school, at CentraleSupélec, and then to complete a doctoral thesis in Applied Mathematics at the ENS, which I defended in 1998,” she adds.

Once her thesis in Artificial Intelligence was completed, she joined Dassault Systèmes. “After my thesis, I wanted to see an immediate application of the things I had learned, so I joined Dassault Systèmes, where I held various positions,” says Anne-Sophie Taillandier. During the ten years she spent at the well-known company, she contributed to the development of modeling software, worked in Human Resources, and led the Research & Development department of the Simulia brand. In 2008, she moved to an IT security company, and in 2012 she became Director of Technology at LTU Technologies, an image recognition software company, a position she held until 2015, when she took over the management of Teralab at IMT.

“It was the opportunity to work in a wide variety of fields while focusing on data, machine learning, and its applications that prompted me to join Teralab,” says Anne-Sophie Taillandier. Working with diverse companies requires “understanding a profession to grasp the meaning of the data that we are manipulating”. For the Director of Teralab, this experience mirrored that of her thesis, during which she had to understand the meaning of data provided by automotive engineers in order to manipulate it appropriately.

Communicating and explaining

In the course of her career, Anne-Sophie Taillandier realized “that there were language barriers, that there were sometimes difficulties in understanding each other”. She has taken a particular interest in these problems. “I’ve always found it interesting to take an educational approach to explain our work, to try to hide the computational and mathematical complexity behind simple language,” says the Teralab director. “Since its inception, Teralab has aimed to facilitate the use of sophisticated technology, and to understand the professions of the people who hold the data,” she says.

Teralab positions itself as an intermediary between companies and researchers so that they may understand each other and cooperate. In this project, it is necessary to make different disciplines work together. A technology watch is also important to remain up to date with the latest innovations, which can be better suited to a client’s needs. In addition, Teralab has seen new issues arise during its eight years of existence.

“We realized that the users who came to us in the beginning wanted to work on their own data, whereas today they want to work in an ecosystem that allows the circulation of their data. This raises issues of control over the use of their data, as well as of architecture and exchange standards,” points out Anne-Sophie Taillandier. The pooling of data held by different companies raises issues of confidentiality, as they may be in competition on certain points.  

European recognition

“At Teralab, we asked ourselves about data sharing between companies, which led us to the Gaia-X initiative.” In this European association, Teralab and other companies participate in the development of services to create a ‘cloud federation’. This is essential as a basis for enabling the flow of data and interoperability, and for avoiding locking companies into particular cloud solutions. Europe’s technological independence depends on these types of regulations and standards. Not only would companies be able to protect their digital assets and make informed choices, but they would also be able to share information with each other, under conditions suited to the sensitivity of their data.

In the development of Gaia-X federated services and the creation of data spaces, Teralab provides its technological and human resources to validate architecture, to prototype new services on sector-specific data spaces, and to build the open-source software layer that is essential to this development. “If EDF or another critical infrastructure, like banking, wants to be able to move sensitive data into these data spaces, they will need both technical and legal guarantees.”

Since the end of the public funding it received until 2018, Teralab has not stopped growing, especially at the European level. “We currently have a European project on health data relating to cardiovascular diseases,” says the Teralab director. The goal is for researchers in European countries who need data on these diseases to be able to conduct research via a DataHub, a space for sharing data. In the future, Teralab’s goal is to continue its work on cloud federation and to “become a leading platform for the creation of digital ecosystems,” says Anne-Sophie Taillandier.

Rémy Fauvel

All aboard for security

With France’s rail transport market opening up to competition, the SNCF’s security work is also becoming a service. This transformation raises questions about how security as an activity is organized. Florent Castagnino, sociology researcher at IMT Atlantique, has studied how this service can adapt.

2021 saw private train companies newly authorized to operate on French rails, previously monopolized by the SNCF. As well as opening its rail system to the competition, the state-owned company plans to offer security services to other companies. In the railway sector, “security is defined as the prevention of antisocial acts such as theft, fraud, attacks and assaults, whereas safety relates to the prevention of accidents such as issues with points or signals,” indicates Florent Castagnino, sociology researcher at IMT Atlantique.

While security is a preventive activity, it is also a commercial one. With the market opening up to competition, security services are sure to become a profitable venture as well. This raises the question of whether a company prepared to pay more than another could obtain better security provision for its journeys and routes. Furthermore, with security guards in train stations, a company will not only curb malicious acts but also reassure passengers. “Even if the trains are secure, a company may wish for agents to patrol the platform or on the train to enhance its brand image,” states Castagnino.

However, the sale of security services to competing companies poses a challenge for the distribution of agents across France. While certain stations or regional train lines may wish to purchase such services, they might not be able to obtain them if demand from other companies or other regions is too high. Even though the SNCF security department is one of the departments that increases its workforce most regularly, “the question arises of how decisions will be made,” explains the researcher.

Representing complex problems

The distribution of personnel across the country represents a challenge due to the limited number of agents, but it also shows that delinquency is handled purely geographically. Analysis of databases of police reports and calls to emergency services from railway patrols and workers reveals the frequency of malicious acts, their nature and the place in which they occur. Using this information, patrols are sent to stations with the highest number of criminal acts reported.

Typically, if more malicious acts are reported on line A than on line B, more agents will be sent to line A. In this approach, database analysis automatically ignores the multiple, complex origins of delinquency, focusing only on the phenomenon’s geographic aspect. As the causes of delinquency are considered external to railway companies, they cannot take action on them as easily as on safety issues, whose causes are generally considered internal and are therefore simpler to identify and resolve.
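To make this logic concrete, here is a minimal sketch (with made-up line names and incident counts, not SNCF data) of a purely proportional allocation of patrols:

```python
# Hypothetical illustration of a purely geographic allocation of patrols:
# agents are distributed in proportion to the number of incidents reported
# per line, ignoring any other explanatory factor.

incident_reports = {"line A": 120, "line B": 45, "line C": 15}  # made-up counts
total_agents = 30

total_incidents = sum(incident_reports.values())
allocation = {
    line: round(total_agents * count / total_incidents)
    for line, count in incident_reports.items()
}

print(allocation)  # e.g. {'line A': 20, 'line B': 8, 'line C': 2}
```

Such a rule captures where incidents are reported but, as the researcher points out, nothing about their causes.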

For Castagnino, making use of “databases for delinquency prevention means we imitate the way we handle accidental problems”. From the 1990s, “there was a desire to make security more concrete, partly by using a model for the way in which we manage accidents,” continues the researcher. “This can be explained by the decision to apply the methods that work in one area, here safety, to another, in this case, security,” he adds. In the case of safety, if a technical fault such as a signaling failure is regularly reported on a certain kind of equipment, maintenance agents will be sent to repair the signals on the lines concerned, and a general servicing of the equipment may be ordered. For security, if many incidents are reported at a particular station, agents from the security department may be sent to the site to address the delinquency problem.

Regulatory surveillance

Most of the time, agents perform rounds to control delinquency. Their presence alone is generally enough to calm potentially disruptive groups. In train stations, “security guards self-regulate and expect social groups to do the same,” explains Castagnino. In a way, they serve as an anchor for groups to control themselves. If that does not work, the security forces can intervene to regulate them. The young researcher calls this process ‘regulatory surveillance’.

If, for example, one or several individuals from a group start to bother someone in a station, the other members of the group will often call them to order, in particular to maintain their collective right to remain in the station, which is seen as an important place for social ties. Regulatory surveillance also concerns military security forces. They sometimes operate regularly in the same stations, which means they get to know the groups that hang around inside. If a new agent is tempted to act aggressively without a clear reason, their colleagues, who know the place, can dissuade them, explaining that such a response would be disproportionate to the group’s actions. This kind of relationship makes it possible to preserve good relations between agents and civilians.

In recent decades, several terrorist attacks on trains (Madrid in 2004, London in 2005, the Thalys in 2015) have raised the question of introducing airport-style security measures to the rail system. In particular, the SNCF has brought in certain practices used in airports, such as metal detectors in certain stations (particularly for Thalys trains). France’s national railway company is continually working to objectify security threats and to take advantage of the possibilities offered by new technological tools.

Rémy Fauvel


Improving surveillance through automatic recognition?

Projects are currently underway to equip surveillance cameras with automatic image processing software. This would allow them to recognize suspicious behavior. “For now, these techniques are sub-optimal,” points out Castagnino. Certain cameras “don’t work or don’t have good video quality, specifically because they are too old,” indicates the researcher. To implement this technology, the camera fleet will therefore need to be updated.



Challenges to the SNCF in the 21st century

Castagnino’s research on rail systems is published in “La SNCF à l’épreuve du XXIe siècle”, a collective work that discusses shifts in the French railway system and its recent evolution from a historical perspective.


IN-TRACKS: tracking energy consumption

Nowadays, companies are encouraged to track their energy consumption, specifically to limit their greenhouse gas emissions. IN-TRACKS, a start-up created last November, has developed a smart dashboard to visualize energy usage.

“With rising energy costs, these days it is useful for companies and individuals to identify their excessive uses of energy,” says Léo-Paul Keyser, co-founder of In-Tracks. In November 2021, the IMT Nord Europe graduate created the start-up with Stéphane Flandre, a former Enedis executive. The young company, incubated at IMT Nord Europe, provides companies with monitoring solutions to track their energy consumption via a digital dashboard. The software, which can be accessed via computer, smartphone or tablet, displays a range of information pertaining to energy usage.

Typically, graphics show consumption according to peak and off-peak times, depending on the surface area of a room, house or building. “A company may have several premises for which it wishes to track energy consumption,” says Keyser. “The dashboard provides maps and filters that make it possible to select regions, sites, or kinds of equipment for which the customer wishes to compare energy performance,” he adds.

For advice and clarification, “clients can request a videocall so we can help them interpret and understand their data, with our engineers’ expertise,” continues the start-up’s co-founder. The young company also wishes to develop a system of recommendations based on artificial intelligence, analyzing consumption data and sending advice on the dashboard.

The load curve: a tool for analysis

The specific features of clients’ energy use are identified from load curves: graphs generated by smart electricity and gas meters, which show how energy consumption evolves over a set period. To gain access to these curves, In-Tracks has signed an agreement with EDF and Enedis. “With the data from the meters received every hour or every five minutes, we can try to track the client’s consumption in real time,” explains the young entrepreneur.

The shorter the intervals at which data is received from the meter, typically every few seconds rather than every few minutes, the more detailed the load curve will be and the more relevant the analysis in diagnosing energy use. By comparing a place’s load curve values to its expected energy consumption threshold, excessive uses can be identified.
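As a minimal illustration of that comparison (a sketch with made-up readings and threshold, not In-Tracks’ actual processing), excess consumption can be flagged by checking each point of a load curve against an expected ceiling:

```python
# Minimal sketch: flag time slots where measured consumption exceeds an
# expected threshold. The readings and threshold below are made-up values.

hourly_load_kwh = [2.1, 2.3, 2.0, 5.8, 6.1, 2.2, 2.4]  # one reading per hour
expected_threshold_kwh = 4.0  # expected ceiling for this site at this period

excess_slots = [
    (hour, load)
    for hour, load in enumerate(hourly_load_kwh)
    if load > expected_threshold_kwh
]

for hour, load in excess_slots:
    overshoot = load - expected_threshold_kwh
    print(f"hour {hour}: {load:.1f} kWh, {overshoot:.1f} kWh above threshold")
```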

Thanks to the dashboard, clients can identify sub-optimal zones and equipment. They can therefore reduce not only their energy bills but also their carbon footprint, i.e. their greenhouse gas emissions, which are also evaluated by the start-up. To do so, the company relies on the quantity of fossil fuels used, such as natural gas or biodiesel. “We plan to use programs that make it possible to track variations in the energy mix in real time, in order to get our results as close as possible to reality,” Keyser remarks.

An application accessible to individuals

The young company plans to make an application available to individuals, allowing them to visualize consumption data from smart meters. The software will display the same data as for companies, specifically, energy consumption over time, according to the surface area. In-Tracks also wishes to add a fun side to its application.

“For example, we would like to set up challenges between friends and family members, for everyone to reduce their energy consumption,” explains Keyser. “The aim is to make the subject of energy consumption something that’s fun rather than a source of restrictions,” he adds. To develop this aspect, the start-up is working with students from IMT Nord Europe. The young company is also undertaking research into the Internet of Things, in order to create data analysis methods that make it possible to identify energy issues even more specifically.

Rémy Fauvel

Preparing the networks of the future

Whether in the medical, agricultural or academic sectors, the Intelligence of Things will become part of many areas of society. However, at this point, this sector, sometimes known as Web 3.0, faces many challenges. How can we make objects communicate with each other, no matter how different they might be? With the Semantic Web, a window and a thermometer can be connected, or a CO2 sensor and a computer. Research in this area of the web also aims to open up the technological borders that separate networks, as does research on Open RAN. This network architecture, based on free software, could put an end to the domination of the small number of equipment manufacturers that telecommunications operators rely on.

However, the number of devices accessing networks is constantly rising. This trend risks making the movement of data more complicated and generating interference, that is, signal-jamming phenomena caused in particular by the sheer number of connected devices. By exploring the nature of different kinds of interference and their unique features, we can deal with them and limit them more efficiently.

Furthermore, interference also occurs in alternative telecommunications systems, like Non-Orthogonal Multiple Access (NOMA). While this system makes it possible to host more users on networks, as frequency sub-bands are shared better, interference still presents an intrinsic problem. All these challenges must be overcome for networks to be able to interconnect efficiently and facilitate data-sharing between consumers, data processors, storage centers and authorities in the future.


Better network-sharing with NOMA

The rise in the number of connected devices will lead to increased congestion of frequencies available for data circulation. Non-Orthogonal Multiple Access (NOMA) is one of the techniques currently being studied to improve the hosting capacity of networks and avoid their saturation.

To access the internet, a mobile phone must exchange information with base stations, devices commonly known as relay antennas. These data exchanges operate on frequency sub-bands, channels specific to each base station. To serve multiple connected users, a channel is assigned to each one. With the rise in the number of connected objects, there will not be enough sub-bands available to host them all.

To mitigate this problem, Catherine Douillard and Charbel Abdel Nour, telecommunications researchers at IMT Atlantique, have been working on NOMA: a system that places multiple users on the same channel, unlike the current system. “Rather than allocating a frequency band to each user, device signals are superposed on the same frequency band,” explains Douillard.

Sharing resources

“The essential idea of NOMA involves making a single antenna work to serve multiple users at the same time,” says Abdel Nour. And to go even further, the researchers are working on Power-Domain NOMA, “an approach that aims to separate users sharing the same frequency on one or more antennas according to their transmitting power,” continues Douillard. This system provides more equitable access to spectrum resources and available antennas across users. Typically, when a device encounters difficulties in accessing the network, it may try to access a resource already occupied by another user. In that case, the antenna’s transmit power is adapted so that the information sent by the device successfully arrives at its destination, while limiting ‘disturbance’ for the other user.

Superposing multiple users on the same resource, however, creates problems in accessing it. For communication to work, the signals sent by the devices need to reach the antenna at sufficiently different strengths so that it can tell them apart. If the signal strengths are similar, the antenna will mix them up. This can cause interference, in other words, information-jamming phenomena, which can hinder the smooth playing of a video or a round of an online game.

Interference: an intrinsic problem for NOMA

To avoid interference, receivers are fitted with decoders that differentiate between signals according to their reception quality. When the antenna receives the superposed signals, it identifies the one with the best reception quality, decodes it, and removes it from the received signal. It then recovers the lower-quality signal. Once the signals are separated, the base station gives each one access to the network. “This means of handling interference is quite simple to implement in the case of two signals, but much less so when there are many,” states Douillard.
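To make the principle concrete, here is a rough sketch (arbitrary power split and BPSK symbols, not the researchers’ implementation) of two users superposed on the same resource and then separated by successive decoding:

```python
import numpy as np

# Sketch of power-domain NOMA with two users on the same resource.
# User 1 (weaker channel) gets more power; user 2 gets less.
rng = np.random.default_rng(0)
n = 10_000
bits1 = rng.integers(0, 2, n)
bits2 = rng.integers(0, 2, n)
s1 = 2 * bits1 - 1          # BPSK symbols for user 1
s2 = 2 * bits2 - 1          # BPSK symbols for user 2

p1, p2 = 0.8, 0.2           # power split: must differ enough to be separable
x = np.sqrt(p1) * s1 + np.sqrt(p2) * s2   # superposed downlink signal
y = x + 0.05 * rng.standard_normal(n)     # additive receiver noise

# Successive decoding at the receiver:
# 1) decode the strongest signal, treating the other as noise
s1_hat = np.sign(y)
# 2) remove its contribution, then decode the weaker signal
residual = y - np.sqrt(p1) * s1_hat
s2_hat = np.sign(residual)

print("user 1 bit errors:", np.sum(s1_hat != s1))
print("user 2 bit errors:", np.sum(s2_hat != s2))
```

If the two powers are chosen too close together, the first decoding step starts making mistakes and both users suffer, which is the separability condition described above.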

 “To handle interference, there are two main possibilities,” explains Abdel Nour. “One involves canceling the interference, or in other words, the device receivers detect which signals are not intended for them and eliminate them, keeping only those sent to them,” adds the researcher. This approach can be facilitated by interference models, namely those studied at IMT Nord Europe. The second solution involves making the antennas work together. By exchanging information about the quality of connections, they can implement algorithms to determine which devices should be served by NOMA, while avoiding interference from their signals.

Intelligent allocation

“We are trying to ensure that resource allocation techniques adapt to user needs, while adjusting the power they need, with no excess,” states Abdel Nour. Depending on the number of users and the applications being used, the number of antennas in play will vary. If a lot of machines are trying to access the network, multiple antennas can be used at the same time. In the opposite situation, a single antenna may be enough.

Thanks to algorithms, the base stations learn to recognize the different characteristics of devices, such as the kinds of applications being used when the device is connected. This allows the strength of the signal emitted by the antennas to be adapted, in order to serve users appropriately. For example, a streaming service needs a higher bit rate, and therefore a higher transmit power, than a messaging application.
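To give a rough sense of that relationship (a simple textbook calculation using the Shannon capacity formula, with illustrative figures rather than values from the researchers’ work), the target bit rate determines the signal-to-noise ratio, and hence the transmit power, that the antenna must provide:

```python
import math

# Shannon capacity: rate = bandwidth * log2(1 + SNR).
# Inverting it gives the SNR (and thus, for a fixed noise level, the
# transmit power) needed to reach a target bit rate. Values are illustrative.

bandwidth_hz = 20e6          # 20 MHz channel

def required_snr(target_rate_bps: float) -> float:
    """Linear SNR needed to sustain target_rate_bps over bandwidth_hz."""
    return 2 ** (target_rate_bps / bandwidth_hz) - 1

for service, rate in [("messaging", 0.1e6), ("video streaming", 25e6)]:
    snr = required_snr(rate)
    print(f"{service}: {10 * math.log10(snr):.1f} dB SNR required")
```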

“One of the challenges is to design high-performing algorithms that are energy-efficient,” explains Abdel Nour. By reducing energy consumption, the objective is to generate lower operating costs than the current network architecture, while allowing for a significant rise in the number of connected users. NOMA and other research into interference are part of an overall approach to increase network hosting capability. With the developments in the Internet of Things in particular, this work will prove to be necessary to avoid information traffic jams.

Rémy Fauvel


Interference: a source of telecommunications problems

The growing number of connected objects is set to cause a concurrent increase in interference, a phenomenon which has remained an issue since the birth of telecommunications. In the past decade, more and more research has been undertaken in this area, leading us to revisit the way in which devices handle interference.

“Throughout the history of telecommunications, we have observed an increase in the quantities of information being exchanged,” states Laurent Clavier, telecommunications researcher at IMT Nord Europe. “This phenomenon can be explained by network densification in particular,” adds the researcher. The increase in the amount of data circulating is paired with a rise in interference, which represents a problem for network operations.

To understand what interference is, first, we need to understand what a receiver is. In the field of telecommunications, a receiver is a device that converts a signal into usable information — like an electromagnetic wave into a voice. Sometimes, undesired signals disrupt the functioning of a receiver and damage the communication between several devices. This phenomenon is known as interference and the undesired signal, noise. It can cause voice distortion during a telephone call, for example.

Interference occurs when multiple machines use the same frequency band at the same time. To avoid interference, receivers choose which signals they pick up and which they drop. While telephone networks are organized to avoid two smartphones interfering with each other, this is not the case for the Internet of Things, where interference is becoming critical.

Read on I’MTech: Better network-sharing with NOMA

Different kinds of noise causing interference

With the boom in the number of connected devices, the amount of interference will increase and cause the network to deteriorate. By improving machine receivers, it appears possible to mitigate this damage. Most connected devices are equipped with receivers adapted for Gaussian noise. These receivers make the best decisions possible as long as the signal received is powerful enough.

By studying how interference occurs, scientists have understood that it does not follow a Gaussian model, but rather an impulsive one. “Generally, only a few objects transmit at the same time as ours in the vicinity of our receiver,” explains Clavier. “Distant devices generate weak interference, whereas closer devices generate strong interference: this is what characterizes impulsive interference,” he specifies.

Reception strategies implemented for Gaussian noise do not account for the presence of these strong noise values. They are therefore easily misled by impulsive noise, with receivers no longer able to recover the useful information. “By designing receivers capable of processing the different kinds of interference that occur in real life, the network will be more robust and able to host more devices,” adds the researcher.
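The difference can be illustrated with a toy model (assuming a simple Bernoulli-Gaussian mixture as a stand-in for impulsive noise, which is only one of several possible models):

```python
import numpy as np

# Toy comparison of Gaussian noise and impulsive noise.
# Impulsive noise is modelled here as a Bernoulli-Gaussian mixture:
# a weak background plus rare, strong impulses from nearby devices.
rng = np.random.default_rng(1)
n = 100_000

gaussian_noise = rng.standard_normal(n)

impulse_prob = 0.01          # 1% of samples come from a nearby interferer
impulse_scale = 20.0         # impulses are much stronger than the background
is_impulse = rng.random(n) < impulse_prob
impulsive_noise = rng.standard_normal(n) * np.where(is_impulse, impulse_scale, 1.0)

# Large deviations are far more frequent under the impulsive model,
# which is what misleads a receiver designed for Gaussian noise.
threshold = 4.0
print("P(|noise| > 4), Gaussian :", np.mean(np.abs(gaussian_noise) > threshold))
print("P(|noise| > 4), impulsive:", np.mean(np.abs(impulsive_noise) > threshold))
```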

Adaptable receivers

For a receiver to be able to understand Gaussian and non-Gaussian noise, it needs to be able to identify its environment. If a device receives a signal that it wishes to decode while the signal of another nearby device is generating interference, it will use an impulsive model to deal with the interference and decode the useful signal properly. If it is in an environment in which the devices are all relatively far away, it will analyze the interference with a Gaussian model.

To correctly decode a message, the receiver must adapt its decision-making rule to the context. To do so, Clavier indicates that a “receiver may be equipped with mechanisms that allow it to calculate the level of trust in the data it receives in a way that is adapted to the properties of the noise. It will therefore be capable of adapting to both Gaussian and impulsive noise.” This method, used by the researcher to design receivers, means that the machine does not have to know its environment in advance.
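One way to picture such a ‘level of trust’ (a minimal sketch using textbook noise models, not Clavier’s actual receiver design) is the log-likelihood ratio of a binary symbol: under a Gaussian assumption it grows without bound with the received value, so a strong impulse produces an over-confident decision, whereas under a heavy-tailed assumption such as a Cauchy distribution its influence stays bounded:

```python
import math

# Log-likelihood ratio (LLR) of a BPSK symbol (+1 or -1) observed in noise,
# under two different noise assumptions. Parameters are illustrative.

def llr_gaussian(y: float, sigma: float = 1.0) -> float:
    """LLR assuming Gaussian noise: linear in y, unbounded."""
    return 2.0 * y / sigma**2

def llr_cauchy(y: float, gamma: float = 1.0) -> float:
    """LLR assuming heavy-tailed Cauchy noise: bounded for large |y|."""
    return math.log((gamma**2 + (y + 1) ** 2) / (gamma**2 + (y - 1) ** 2))

for y in [0.5, 2.0, 20.0]:   # 20.0 mimics a strong impulse
    print(f"y = {y:5.1f}  Gaussian LLR = {llr_gaussian(y):7.1f}  "
          f"Cauchy LLR = {llr_cauchy(y):5.2f}")
```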

Currently, industrial actors are not particularly concerned with the nature of interference. However, they are interested in the means available to avoid it. In other words, they do not see the usefulness of questioning the Gaussian model and undertaking research into the way in which interference is produced. For Clavier, this lack of interest will be temporary, and “in time, we will realize that we will need to use this kind of receiver in devices,” he notes. “From then on, engineers will probably start to include these devices more and more in the tools they develop,” the researcher hopes.

Rémy Fauvel

Photo: a 5G cell tower

Open RAN: opening up mobile networks

With the objective of standardizing equipment in base stations, EURECOM is working on Open RAN. This project aims to open the equipment manufacturing market to new companies, to encourage the design of innovative hardware for telecommunications networks.

Base stations, often called relay antennas, are systems that allow telephones and computers to connect to the network. They are owned by telecommunications operators, and the equipment used is provided by a small number of specialized companies. The components manufactured by some are incompatible with those designed by others, which prevents operators from building antennas with the elements of their choice. The roll-out of networks such as 5G depends on this proprietary technology.

To allow new companies to introduce innovation to networks without being caught up in the games between the various parties, EURECOM is working on the Open RAN (Open Radio Access Network) project. It aims to standardize the operation of base station components to make them compatible, no matter their manufacturer, using a new network architecture that gets around each manufacturer’s specific component technology. For this, EURECOM is using the OpenAirInterface platform, which allows industrial and academic actors to develop and test new software solutions and architectures for 4G and 5G networks. This work is performed in an open-source framework, which allows all actors to find common ground for collaboration on interoperability, eliminating questions of the components’ origin.

Read on I’MTech: OpenAirInterface: An open platform for establishing the 5G system of the future

“Open RAN can be broken down into three key blocks: the radio antenna, the distributed unit and the centralized unit,” describes Florian Kaltenberger, computer science researcher at EURECOM. The role of the antenna is to receive and send signals to and from telephones, while the other two elements serve to give the radio signal access to the network, so that users can watch videos or send messages, for example. Unlike radio units, which require specific equipment, “distributed units and centralized units can function with conventional IT hardware, like servers and PCs,” explains the researcher. There is no longer a need to rely on specially developed, proprietary equipment. Servers and PCs already know how to interact together, independently of their components.

RIC: the key to adaptability

This standardization would allow users of one network to use antennas from another, in the event that their operator’s antennas are too far away. To make the Open RAN function, researchers have developed the RAN Intelligent Controller (RIC), software that represents the heart of this architecture, in a way. The RIC functions thanks to artificial intelligence, which provides indications about a network’s status and guides the activity of base stations to adapt to various scenarios.

“For example, if we want to set up a network in a university, we would not approach it in the same way as if we wanted to set one up in a factory, as the issues to be resolved are not the same,” explains Kaltenberger. “In a factory, the interface makes it possible to connect machines to the network in order to make them work together and receive information,” he adds. The RIC is also capable of locating the position of users and adjusting the antenna configurations, which optimizes the network’s operations by allowing for more equitable access between users. For industry, the Open RAN represents an interesting alternative to the telecommunications networks of major operators, due to its low cost and ability to manage energy consumption in a more considered way, evaluating the transmission strength required by users. This system can therefore provide the power users need, and no more.

Simple tools in service of free software

According to Kaltenberger, “the Open RAN architecture would allow for complete control of networks, which could contribute to greater sovereignty.” For the researcher, the fact that this system is controlled by an open-source program ensures a certain level of transparency. The companies involved in developing the software are not the only ones to have access to it. Users can also improve and check the code. Furthermore, if the companies in charge of the Open RAN were to shut down, the system would remain functional, as it was created to exist independently of industrial actors.

“At present, multiple research projects around the world have shown that Open RAN works, but it is not yet ready to be deployed,” explains Kaltenberger. One of the reasons is the reluctance of equipment manufacturers to standardize their equipment, as this would open the market to new competitors and thereby put an end to their commercial dominance. Kaltenberger believes that it will be necessary “to wait for perhaps five more years before standardized systems come on the market”.

Rémy Fauvel

Infographic: the interconnection between different systems

Machines surfing the web

There has been constant development in the area of object interconnection via the internet. And this trend is set to continue in years to come. One of the solutions for machines to communicate with each other is the Semantic Web. Here are some explanations of this concept.

 “The Semantic Web gives machines similar web access to that of humans,” indicates Maxime Lefrançois, Artificial Intelligence researcher at Mines Saint-Etienne. This area of the web is currently being used by companies to gather and share information, in particular for users. It makes it possible to adapt product offers to consumer profiles, for example. At present, the Semantic Web occupies an important position in research undertaken around the Internet of Things, i.e. the interconnection between machines and objects connected via the internet.

By making machines work together, the Internet of Things can be a means of developing new applications. This would serve both individuals and professional sectors, such as intelligent buildings or digital agriculture. The last two examples are also the subject of the CoSWoT [1] project, funded by the French National Research Agency (ANR). This initiative, in which Maxime Lefrançois is participating, aims to provide new knowledge around the use of the Semantic Web by devices.

To do so, the project’s researchers installed sensors and actuators in the INSA Lyon buildings on the LyonTech-la Doua campus, the Espace Fauriel building of Mines Saint-Etienne, and the INRAE experimental farm in Montoldre. These sensors record information, like the opening of a window or the temperature and CO2 levels in a room. Thanks to a digital representation of a building or block, scientists can construct applications that use the information provided by sensors, enrich it and make decisions that are transmitted to actuators.

Such applications can measure the CO2 concentration in a room and, when a pre-set threshold is exceeded, open the windows automatically to let in fresh air. This could be useful in the current pandemic context, to reduce the viral load in the air and thereby reduce the risk of infection. Beyond the pandemic, the same sensors and actuators can be used for other purposes, such as preventing the build-up of pollutants in indoor air.

A dialog with maps

The main characteristic of the Semantic Web is that it registers information in knowledge graphs: kinds of maps made up of nodes representing objects, machines or concepts, and arcs that connect them to one another, representing their relationships. Each node and arc is registered with an Internationalized Resource Identifier (IRI): a code that makes it possible for machines to recognize each other and to identify and control objects such as a window, or concepts such as temperature.

Depending on the number of knowledge graphs combined and the amount of information they contain, a device will be able to identify objects and items of interest with varying degrees of precision. A graph that recognizes a temperature identifier will indicate, depending on its level of precision, the unit used to measure it. “By combining multiple knowledge graphs, you obtain a graph that is more complete, but also more complex,” declares Lefrançois. “The more complex the graph, the longer it will take for the machine to decrypt,” adds the researcher.
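As a small illustration (a sketch using the rdflib Python library with made-up IRIs, not the CoSWoT vocabularies), two knowledge graphs describing a sensor and a building can be merged into a more complete, and more complex, one:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical namespace and IRIs, for illustration only.
EX = Namespace("http://example.org/building/")

# Graph published by the sensor: its type and latest reading.
sensor_graph = Graph()
sensor_graph.add((EX.co2sensor42, RDF.type, EX.CO2Sensor))
sensor_graph.add((EX.co2sensor42, EX.hasValue, Literal(950)))   # ppm
sensor_graph.add((EX.co2sensor42, EX.unit, EX.PartsPerMillion))

# Graph describing the building: where the sensor is, which window serves it.
building_graph = Graph()
building_graph.add((EX.co2sensor42, EX.locatedIn, EX.room101))
building_graph.add((EX.window7, EX.ventilates, EX.room101))

# Merging the two graphs yields a more complete (and more complex) graph.
merged = Graph()
for triple in sensor_graph:
    merged.add(triple)
for triple in building_graph:
    merged.add(triple)

print(merged.serialize(format="turtle"))
```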

Means to optimize communication

The objective of the CoSWoT project is to simplify dialog between autonomous devices. It is a question of ‘integrating’ the complex processing linked with the Semantic Web into objects with limited computing capacity, and of limiting the amount of data exchanged in wireless communication to preserve their batteries. This represents a challenge for Semantic Web research. “It needs to be possible to integrate and send a small knowledge graph in a tiny amount of data,” explains Lefrançois. This optimization makes it possible to improve the speed of data exchanges and related decision-making, as well as to improve energy efficiency.

With this in mind, the researcher is interested in what he calls ‘semantic interoperability’, with the aim of “ensuring that all kinds of machines understand the content of messages that they exchange,” he states. Typically, a connected window produced by one company must be able to be understood by a CO2 sensor developed by another company, which itself must be understood by the connected window. There are two approaches to achieve this objective. “The first is that machines use the same dictionary to understand their messages,” specifies Lefrançois. “The second involves ensuring that machines solve a sort of treasure hunt to find how to understand the messages that they receive,” he continues. In this way, devices are not limited by language.

IRIs in service of language

Furthermore, solving these treasure hunts is made possible by IRIs and the use of the web. “When a machine receives an IRI, it does not necessarily need to know in advance how to use it,” declares Lefrançois. “If it receives an IRI that it does not know how to use, it can find information on the Semantic Web to learn how,” he adds. This is analogous to how humans may search online for expressions that they do not understand, or learn how to say a word in a foreign language that they do not know.
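In practice, this can be as simple as dereferencing the unknown IRI and loading whatever RDF it returns into the local graph (a sketch with rdflib and a hypothetical IRI; a real deployment would handle content negotiation and errors more carefully):

```python
from rdflib import Graph

# Hypothetical IRI received in a message; the device does not know it yet.
unknown_iri = "http://example.org/vocabulary/CO2Sensor"

# Dereference the IRI: rdflib fetches the document it points to and parses
# the RDF it finds, enriching the device's local graph with new knowledge.
local_graph = Graph()
try:
    local_graph.parse(unknown_iri)
except Exception as error:
    print(f"Could not dereference {unknown_iri}: {error}")

# Any triples retrieved now describe the previously unknown term.
for subject, predicate, obj in local_graph:
    print(subject, predicate, obj)
```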

However, for now, there are compatibility problems between various devices, due precisely to the fact that they are designed by different manufacturers. “In the medium term, the CoSWoT project could influence the standardization of device communication protocols, in order to ensure compatibility between machines produced by different manufacturers,” the researcher considers. It will be a necessary stage in the widespread roll-out of connected objects in our everyday lives and in companies.

While research firms are fighting to best estimate the position that the Internet of Things will hold in the future, all agree that the world market for this sector will represent hundreds of billions of dollars in five years’ time. As for the number of objects connected to the internet, there could be as many as 20 to 30 billion by 2030, i.e. far more than the number of humans. And with the objects likely to use the internet more than us, optimizing their traffic is clearly a key challenge.

[1] The CoSWoT project is a collaboration between the LIMOS laboratory (UMR CNRS 6158, which includes Mines Saint-Étienne), LIRIS (UMR CNRS 5205), the Hubert Curien laboratory (UMR CNRS 5516), INRAE, and the company Mondeca.

Rémy Fauvel


France’s elderly care system on the brink of crisis

In his book Les Fossoyeurs (The Gravediggers), independent journalist Victor Castanet challenges the management of the private elderly care facilities of world leader Orpea, reopening the debate around the economic model – whether for-profit or not – of these structures. Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have investigated the consequences of public (de)regulation in the elderly care sector. Here, we decode a system that is currently on the brink of crisis.

The Orpea scandal has been at the center of public debate for several weeks. Rationing medications and food, a system of kickbacks, cutting corners on tasks and detrimental working conditions are all practices that the Orpea group is accused of by Victor Castanet in his book, “Les Fossoyeurs”. While this abuse in private facilities is currently casting aspersions on the entire sector, professionals, families, NGOs, journalists and researchers have been denouncing such dysfunction for several years.

Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have been studying public regulation in the elderly care sector since 2015. During their inquiries, the two researchers have met policy makers, directors and employees of these structures. They came to the same conclusion for all the various kinds of establishments: “in this sector, the burden of funding is now omnipresent, and working conditions and services provided have been continuously deteriorating,” emphasizes Laura Nirello. For the researchers, these new revelations about the Orpea group reveal above all an underlying trend: the gradual introduction of competition between these establishments, combined with cost-cutting imperatives, has put more and more distance between them and their original missions.

From providing care for dependents…

In 1997, to deal with the growth in the number of dependent elderly people, the category of nursing homes known as ‘Ehpad’ was created. “Since the 1960s, there has been debate around funding care for the elderly with decreasing autonomy from the public budget. In 1997, the decision was made to remove loss of autonomy from the social security system; it became the responsibility of the departments,” explains Delouette. From then on, public establishments and those of the social and solidarity-based economy (SSE) entered into competition with private, for-profit establishments. Twenty-five years later, of the 7,400 nursing homes in France, housing a little under 600,000 residents, nearly 50% are public, around 30% are private not-for-profit (SSE) and around 25% are private for-profit.

The (complex) funding of these structures, regulated by regional health agencies (ARS) and departmental councils, is organized into three sections: the ‘care’ section (nursing personnel, medical material, etc.), handled by the French public health insurance body Assurance Maladie; the ‘dependence’ section (daily life assistance, carers, etc.), managed by the departments via the personal autonomy benefit (APA); and the final section, accommodation fees, which covers lodgings, activities and catering and is paid by residents and their families.

“Public funding is identical for all structures, whether public or private, for-profit or not-for-profit. It’s often the cost of accommodation, which is less regulated, that receives media coverage, as it can run to several thousand euros,” emphasizes Nirello. “And it is mainly on this point that we see major disparities, justified by the private for-profit sector by higher real estate costs in urban areas. But it’s mainly because half of these places are owned by companies listed on the stock market, with the profitability demands that this involves,” she continues. And while establishments are facing a rise in their residents’ dependence and care needs, funding is frozen.

…to the financialization of the elderly

A structure’s resources are determined by its residents’ average level of dependency, transposed into working time. This is evaluated according to the AGGIR scale (“Autonomie Gérontologie Groupes Iso-Ressources”, or Autonomy Gerontology Iso-Resource Groups): GIR 1 and 2 correspond to a state of total or severe dependence, GIR 6 to people who are completely independent. Nearly half of nursing home residents belong to GIR 1 and 2, and more than a third to GIR 3 and 4. “While for-profit homes are positioned for very high dependence, public and SSE establishments seek to have a more balanced mix. They are often older and have difficulties investing in new, adapted facilities to handle highly dependent residents,” indicates Nirello. Paradoxically, the ratio of care staff to residents varies considerably according to a nursing home’s status: 67% for public homes, 53% for private not-for-profit and 49% for private for-profit.

In a context of tightening public purse strings, this goes hand in hand with extreme corner-cutting in care, with each procedure individually costed. “Elderly care nurses need time to take care of residents: autonomy is fundamentally connected to social interaction,” insists Delouette. The Hospital, Patients, Health, Territories law strengthened competition between the various structures: from 2009, new authorizations for nursing home creation and extension were granted based on calls for projects issued by the ARSs. For the researcher, “this system once again places groups of establishments in competition for new locations, as funding is awarded to the best option in terms of price and service quality, no matter its status. We know who wins: 20% public, and 40-40 for private for-profit/not-for-profit. What we don’t know is who responds to these calls for projects. With tightened budgets, is the public sector no longer responding, or is this a choice by regulators in the field?”

What is the future for nursing homes?

“Funding, cutting corners, a managerial view of caring for dependents: the entire system needs to be redesigned. We don’t have solutions, we are making observations,” emphasizes Nirello. But despite promises, reform has been delayed for too long. The Elderly and Autonomy law, the most recent effort in this area, was announced by the current government and then buried in late 2019, despite two parliamentary reports highlighting the serious aged care crisis (the mission on nursing homes in March 2018 and the Libault report in March 2019).

INSEE estimates that by 2030 there will be 108,000 more dependent elderly people, and 4 million in total by 2050. How can we prepare for this demographic shift, which is already underway? Just to cover the increased costs of caring for the elderly with loss of autonomy, it would take €9 billion every year until 2030. “We can always increase funding; the question is how we fund establishments. If we continue to try to cut corners on care and tasks, this goes against the social mission of these structures. Should vulnerable people be sources of profit? Is society prepared to invest more in taking care of dependent people?” asks Delouette. “This is society’s choice.” The two researchers are currently working on the management of the pandemic in nursing homes. For them, there is still a positive side to all this: the state of elderly care has never been such a hot topic.

Anne-Sophie Boutaud


AI-4-Child “Chaire” research consortium: innovative tools to fight against childhood cerebral palsy

In conjunction with the GIS BeAChild, the AI-4-Child team is using artificial intelligence to analyze images related to cerebral palsy in children. This could lead to better diagnoses, innovative therapies and progress in patient rehabilitation, as well as a real breakthrough in medical imaging.

The original version of this article was published on the IMT Atlantique website, in the News section.

Cerebral palsy is the leading cause of motor disability in children, affecting nearly two out of every 1,000 newborns. And it is irreversible. The AI-4-Child chaire (French research consortium), managed by IMT Atlantique and the Brest University Hospital, is dedicated to fighting this dreaded disease, using artificial intelligence and deep learning, which could eventually revolutionize the field of medical imaging.

“Cerebral palsy is the result of a brain lesion that occurs around birth,” explains François Rousseau, head of the consortium, professor at IMT Atlantique and a researcher at the Medical Information Processing Laboratory (LaTIM, INSERM unit). “There are many possible causes – prematurity or a stroke in utero, for example. This lesion, of variable importance, is not progressive. The resulting disability can be more or less severe: some children have to use a wheelchair, while others can retain a certain degree of independence.”

Created in 2020, AI-4-Child brings together engineers and physicians. The result of a call for ‘artificial intelligence’ projects from the French National Research Agency (ANR), it operates in partnership with the company Philips and the Ildys Foundation for the Disabled, and benefits from various forms of support (Brittany Region, Brest Metropolis, etc.). In total, the research program has a budget of around €1 million for a period of five years.

François Rousseau, professor at IMT Atlantique and head of the AI-4-Child chaire (research consortium)

Hundreds of children being studied in Brest

AI-4-Child works closely with BeAChild*, the first French Scientific Interest Group (GIS) dedicated to pediatric rehabilitation, headed by Sylvain Brochard, professor of physical medicine and rehabilitation (MPR). Both structures are linked to the LaTIM lab (INSERM UMR 1101), housed within the Brest CHRU teaching hospital. The BeAChild team is also highly interdisciplinary, bringing together engineers, doctors, pediatricians and physiotherapists, as well as psychologists.

Hundreds of children from all over France and even from several European countries are being followed at the CHRU and at Ty Yann (Ildys Foundation). By bringing together all the ‘stakeholders’ – patients and families, health professionals and imaging specialists – on the same site, Brest offers a highly innovative approach, which has made it a reference center for the evaluation and treatment of cerebral palsy. This has enabled the development of new therapies to improve children’s autonomy and made it possible to design specific applications dedicated to their rehabilitation.

“In this context, the mission of the chair consists of analyzing, via artificial intelligence, the imagery and signals obtained by MRI, movement analysis or electroencephalograms,” says Rousseau. These observations can be made from the fetal stage or during the first years of a child’s life. The research team is working on images of the brain (location of the lesion, possible compensation by the other hemisphere, link with the disability observed, etc.), but also on images of the neuro-musculo-skeletal system, obtained using dynamic MRI, which help to understand what is happening inside the joints.

‘Reconstructing’ faulty images with AI

But this imaging work is complex. The main pitfall is the poor quality of the images collected, due to motion blur or artifacts during acquisition. So AI-4-Child is trying to ‘reconstruct’ them, using artificial intelligence and deep learning. “We are relying in particular on good-quality views from other databases to achieve satisfactory resolution,” explains the researcher. Eventually, these methods should be able to be applied to routine images.

Significant progress has already been made. A doctoral student is studying images of the ankle obtained in dynamic MRI and ‘enriched’ by other images using AI: static images, but in very high resolution. “Despite a rather poor initial quality, we can obtain decent pictures,” notes Rousseau. Significant differences between the shapes of the ankle bone structure were observed between patients and are being interpreted with the clinicians. The aim will then be to better understand the origin of these deformations and to propose adjustments to the treatments under consideration (surgery, toxin, etc.).

The second area of work for AI-4-Child is rehabilitation. Here again, imaging plays an important role: during rehabilitation courses, patients’ gait is filmed using infrared cameras and a system of sensors and force plates in the movement laboratory at the Brest University Hospital. The ‘walking signals’ collected in this way are then analyzed using AI. For the moment, the team is in the data acquisition phase.

Several areas of progress

The problem, however, is that a patient often does not walk in the same way during the course and when they leave the hospital. “This creates a very strong bias in the analysis,” notes Rousseau. “We must therefore check the relevance of the data collected in the hospital environment… and focus on improving the quality of life of patients, rather than the shape of their bones.”

Another difficulty is that the data sets available to the researchers are limited to a few dozen images – whereas some AI applications require several million, not to mention the fact that this data is not homogeneous, and that there are also losses. “We have therefore become accustomed to working with little data,” says Rousseau. “We have to make sure that the quality of the data is as good as possible.” Nevertheless, significant progress has already been made in rehabilitation. Some children are able to ride a bike, tie their shoes, or eat independently.

In the future, the AI-4-Child team plans to make progress in three directions: improving images of the brain, observing bones and joints, and analyzing movement itself. The team also hopes to have access to more data, thanks to a European data collection project. Rousseau is optimistic: “Thanks to data processing, we may be able to better characterize the pathology, improve diagnosis and even identify predictive factors for the disease.”

* BeAChild brings together the Brest University Hospital Centre, IMT Atlantique, the Ildys Foundation and the University of Western Brittany (UBO). Created in 2020 and formalized in 2022 (see the French press release), the GIS is the culmination of a collaboration that began some fifteen years ago on the theme of childhood disability.