
Open RAN: opening up mobile networks

With the objective of standardizing base station equipment, EURECOM is working on Open RAN. The project aims to open the equipment manufacturing market to new companies and to encourage the design of innovative hardware for telecommunications networks.

Base stations, often called relay antennas, are the systems that allow telephones and computers to connect to the network. They are owned by telecommunications operators, and their equipment is provided by a small number of specialized companies. The components manufactured by one supplier are incompatible with those designed by another, which prevents operators from building antennas with the elements of their choice. The roll-out of networks such as 5G depends on this proprietary technology.

To allow new companies to bring innovation to networks without being caught up in the games between the various parties, EURECOM is working on the Open RAN (Open Radio Access Network) project. It aims to standardize the operation of base station components so that they are compatible no matter their manufacturer, using a new network architecture that gets around each manufacturer’s proprietary component technology. For this, EURECOM is using the OpenAirInterface platform, which allows industrial and academic actors to develop and test new software solutions and architectures for 4G and 5G networks. This work is performed in an open-source framework, which gives all actors common ground for collaboration on interoperability, regardless of where the components come from.

Read on I’MTech: OpenAirInterface: An open platform for establishing the 5G system of the future

“The Open RAN can be broken down into three key blocks: the radio antenna, the distributed unit and the centralized unit,” describes Florian Kaltenberger, computer science researcher at EURECOM. The role of the antenna is to receive and send signals to and from telephones, while the other two elements give the radio signal access to the network, so that users can watch videos or send messages, for example. Unlike radio units, which require specific equipment, “distributed units and centralized units can function with conventional IT hardware, like servers and PCs,” explains the researcher. There is no longer a need to rely on specially developed, proprietary equipment. Servers and PCs already know how to interact with each other, independently of their components.

RIC: the key to adaptability

This standardization would allow users of one network to use antennas from another, in the event that their operator’s antennas are too far away. To make the Open RAN function, researchers have developed the RAN Intelligent Controller (RIC), software that represents the heart of this architecture, in a way. The RIC functions thanks to artificial intelligence, which provides indications about a network’s status and guides the activity of base stations to adapt to various scenarios.

“For example, if we want to set up a network in a university, we would not approach it in the same way as if we wanted to set one up in a factory, as the issues to be resolved are not the same,” explains Kaltenberger. “In a factory, the interface makes it possible to connect machines to the network in order to make them work together and receive information,” he adds. The RIC is also capable of locating users and adjusting antenna configurations accordingly, which optimizes the network’s operation by allowing more equitable access between users. For industry, the Open RAN represents an interesting alternative to the telecommunications networks of major operators, due to its low cost and its ability to manage energy consumption in a more considered way, by evaluating the transmission power users actually need. The system can therefore provide the power users need, and no more.

Simple tools in service of free software

According to Kaltenberger, “the Open RAN architecture would allow for complete control of networks, which could contribute to greater sovereignty.” For the researcher, the fact that this system is controlled by an open-source program ensures a certain level of transparency. The companies involved in developing the software are not the only ones to have access to it. Users can also improve and check the code. Furthermore, if the companies in charge of the Open RAN were to shut down, the system would remain functional, as it was created to exist independently of industrial actors.

“At present, multiple research projects around the world have shown that the Open RAN functions, but it is not yet ready to be deployed,” explains Kaltenberger. One reason is the reluctance of equipment manufacturers to standardize their hardware, as this would open the market to new competitors and thereby put an end to their commercial dominance. Kaltenberger believes that it will be necessary “to wait for perhaps five more years before standardized systems come on the market”.

Rémy Fauvel


Machines surfing the web

There has been constant development in the area of object interconnection via the internet. And this trend is set to continue in years to come. One of the solutions for machines to communicate with each other is the Semantic Web. Here are some explanations of this concept.

 “The Semantic Web gives machines similar web access to that of humans,” indicates Maxime Lefrançois, Artificial Intelligence researcher at Mines Saint-Etienne. This area of the web is currently being used by companies to gather and share information, in particular for users. It makes it possible to adapt product offers to consumer profiles, for example. At present, the Semantic Web occupies an important position in research undertaken around the Internet of Things, i.e. the interconnection between machines and objects connected via the internet.

By making machines work together, the Internet of Things can be a means of developing new applications. This would serve both individuals and professional sectors, such as intelligent buildings or digital agriculture. The last two examples are also the subject of the CoSWoT[1] project, funded by the French National Research Agency (ANR). This initiative, in which Maxime Lefrançois is participating, aims to provide new knowledge around the use of the Semantic Web by devices.

To do so, the project’s researchers installed sensors and actuators in the INSA Lyon buildings on the LyonTech-la Doua campus, the Espace Fauriel building of Mines Saint-Etienne, and the INRAE experimental farm in Montoldre. These sensors record information, like the opening of a window or the temperature and CO2 levels in a room. Thanks to a digital representation of a building or block, scientists can build applications that use the information provided by the sensors, enrich it and make decisions that are transmitted to actuators.

Such an application can measure the CO2 concentration in a room and, when a pre-set threshold is exceeded, open the windows automatically for fresh air. This could be useful in the current pandemic context, to reduce the viral load in the air and thereby the risk of infection. Beyond the pandemic, the same sensors and actuators can be reused for other purposes, such as preventing the build-up of pollutants in indoor air.
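
The decision logic involved is simple enough to sketch in a few lines of Python. The snippet below is a minimal illustration only: the Window class and the 800 ppm threshold are assumptions made for the example, not details of the CoSWoT deployment.

```python
# Illustrative sketch of the window-opening application described above.
# The Window class and the 800 ppm threshold are assumptions for the
# example, not CoSWoT specifics.

CO2_THRESHOLD_PPM = 800  # illustrative ventilation threshold

class Window:
    """Stand-in for a connected window actuator (hypothetical API)."""
    def __init__(self) -> None:
        self.is_open = False

    def open(self) -> None:
        self.is_open = True

    def close(self) -> None:
        self.is_open = False

def on_co2_reading(ppm: int, window: Window) -> None:
    """React to a sensor reading: ventilate when CO2 is too high."""
    if ppm > CO2_THRESHOLD_PPM and not window.is_open:
        window.open()
    elif ppm <= CO2_THRESHOLD_PPM and window.is_open:
        window.close()

w = Window()
on_co2_reading(950, w)   # above threshold: the window opens
on_co2_reading(600, w)   # back below threshold: it closes again
```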

A dialog with maps

The main characteristic of the Semantic Web is that it registers information in knowledge graphs: kinds of maps made up of nodes representing objects, machines or concepts, and arcs that connect them to one another, representing their relationships. Each node and arc is registered with an Internationalized Resource Identifier (IRI): a code that makes it possible for machines to recognize one another and to identify and control objects such as a window, or concepts such as temperature.
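
To make this concrete, here is a minimal sketch of such a graph using the Python rdflib library. The example.org identifiers are invented for illustration, and the SOSA vocabulary (a real W3C ontology for sensors and observations) stands in for whichever vocabularies the project actually uses.

```python
# Minimal knowledge-graph sketch: nodes and arcs named by IRIs.
# The example.org IRIs are invented; SOSA is a real W3C sensor vocabulary
# used here as a plausible stand-in.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/building/")
SOSA = Namespace("http://www.w3.org/ns/sosa/")

g = Graph()
g.bind("ex", EX)
g.bind("sosa", SOSA)

# Nodes: a CO2 sensor and one of its observations; arcs: their relations
g.add((EX.co2sensor1, RDF.type, SOSA.Sensor))
g.add((EX.obs42, RDF.type, SOSA.Observation))
g.add((EX.obs42, SOSA.madeBySensor, EX.co2sensor1))
g.add((EX.obs42, SOSA.hasSimpleResult, Literal(812, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```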

Depending on the number of knowledge graphs built up and the amount of information they contain, a device will be able to identify objects and items of interest with varying degrees of precision. A graph that includes a temperature identifier will, depending on its level of detail, also indicate the unit used to measure it. “By combining multiple knowledge graphs, you obtain a graph that is more complete, but also more complex,” declares Lefrançois. “The more complex the graph, the longer it will take for the machine to process it,” adds the researcher.

Means to optimize communication

The objective of the CoSWoT project is to simplify dialog between autonomous devices. It is a question of ‘integrating’ the complex processing linked with the Semantic Web into objects with low computing capacity, and of limiting the amount of data exchanged in wireless communication to preserve their batteries. This represents a challenge for Semantic Web research. “It needs to be possible to integrate and send a small knowledge graph in a tiny amount of data,” explains Lefrançois. This optimization makes it possible to improve the speed of data exchanges and related decision-making, as well as to contribute greater energy efficiency.
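
As a rough illustration of the payload sizes at stake, the sketch below serializes a one-triple graph in several standard RDF formats and compares byte counts (assuming a recent rdflib, whose serialize returns a string). The figures vary with the content, and the compact formats studied for constrained devices are not shown here.

```python
# Rough illustration of payload size per serialization format for a
# one-triple graph; formats designed for constrained devices would be smaller.
from rdflib import Graph

data = '<http://example.org/window1> <http://example.org/isOpen> "true" .'
g = Graph()
g.parse(data=data, format="nt")

for fmt in ("xml", "turtle", "nt"):
    size = len(g.serialize(format=fmt).encode("utf-8"))
    print(f"{fmt:>7}: {size} bytes")
```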

With this in mind, the researcher is interested in what he calls ‘semantic interoperability’, with the aim of “ensuring that all kinds of machines understand the content of messages that they exchange,” he states. Typically, a connected window produced by one company must be able to be understood by a CO2 sensor developed by another company, and vice versa. There are two approaches to achieve this objective. “The first is that machines use the same dictionary to understand their messages,” specifies Lefrançois. “The second involves ensuring that machines solve a sort of treasure hunt to find out how to understand the messages that they receive,” he continues. In this way, devices are not limited by language.

IRIs in service of language

Solving these treasure hunts is made possible by IRIs and the use of the web. “When a machine receives an IRI, it does not automatically need to know how to use it,” declares Lefrançois. “If it receives an IRI that it does not know how to use, it can find information on the Semantic Web to learn how,” he adds. This is analogous to how humans search online for expressions that they do not understand, or learn how to say a word in a foreign language that they do not know.
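
This ‘follow your nose’ pattern can be sketched directly: given an unknown IRI, a device dereferences it over HTTP and loads whatever RDF description is published there. The snippet below does so with rdflib against a real W3C vocabulary IRI; it is the pattern, not this particular endpoint, that matters.

```python
# Sketch of learning about an unknown IRI by dereferencing it on the web.
# Requires network access; rdflib handles the HTTP GET and parsing.
from rdflib import Graph

unknown_iri = "http://www.w3.org/ns/sosa/Sensor"
g = Graph()
g.parse(unknown_iri)  # fetch the RDF description published at the IRI
print(f"Learned {len(g)} triples describing {unknown_iri}")
```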

However, for now, there are compatibility problems between various devices, due precisely to the fact that they are designed by different manufacturers. “In the medium term, the CoSWoT project could influence the standardization of device communication protocols, in order to ensure compatibility between machines produced by different manufacturers,” the researcher considers. It will be a necessary stage in the widespread roll-out of connected objects in our everyday lives and in companies.

While research firms are fighting to best estimate the position that the Internet of Things will hold in the future, all agree that the world market for this sector will represent hundreds of billions of dollars in five years’ time. As for the number of objects connected to the internet, there could be as many as 20 to 30 billion by 2030, i.e. far more than the number of humans. And with the objects likely to use the internet more than us, optimizing their traffic is clearly a key challenge.

[1] The CoSWoT project is a collaboration between the LIMOS laboratory (UMR CNRS 6158, which includes Mines Saint-Étienne), LIRIS (UMR CNRS 5205), the Hubert Curien laboratory (UMR CNRS 5516), INRAE, and the company Mondeca.

Rémy Fauvel


France’s elderly care system on the brink of crisis

In his book Les Fossoyeurs (The Gravediggers), independent journalist Victor Castanet challenges the management of the private elderly care facilities of world leader Orpea, reopening the debate around the economic model – whether for-profit or not – of these structures. Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have investigated the consequences of public (de)regulation in the elderly care sector. Here, we decode a system that is currently on the brink of crisis.

The Orpea scandal has been at the center of public debate for several weeks. Rationing medication and food, a system of kickbacks, cutting corners on care tasks and detrimental working conditions are all practices that the Orpea group is accused of by Victor Castanet in his book, Les Fossoyeurs. While this abuse in private facilities is currently casting aspersions on the entire sector, professionals, families, NGOs, journalists and researchers have been denouncing such dysfunction for years.

Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have been studying public regulation in the elderly care sector since 2015. During their inquiries, the two researchers have met policy makers, directors and employees of these structures. They came to the same conclusion for all the various kinds of establishments: “in this sector, the burden of funding is now omnipresent, and working conditions and the services provided have been continuously deteriorating,” emphasizes Laura Nirello. For the researchers, these new revelations about the Orpea group reveal a basic trend more than anything else: growing competition between establishments and cost-cutting imperatives have put more and more distance between them and their original missions.

From providing care for dependents…

In 1997, to deal with the growing number of dependent elderly people, the category of nursing homes known as ‘Ehpad’ was created. “Since the 1960s, there has been debate around funding care for the elderly with decreasing autonomy from the public budget. In 1997, the decision was made to remove loss of autonomy from the social security system; it became the responsibility of the departments,” explains Delouette. From then on, public organizations and those in the social and solidarity-based economy (SSE) entered into competition with private, for-profit establishments. 25 years later, out of the 7,400 nursing homes in France housing a little under 600,000 residents, nearly 50% are public, around 30% are private not-for-profit (SSE) and around 25% are private for-profit.

The (complex) funding of these structures, regulated by regional health agencies (ARS) and departmental councils, is organized into three sections: the ‘care’ section (nursing personnel, medical material, etc.) handled by the French public health insurance body Assurance Maladie; the ‘dependence’ section (daily life assistance, carers, etc.) managed by the departments via the personal autonomy benefit (APA); and the final section, accommodation fees, which covers lodgings, activities and catering, at the charge of residents and their family.

“Public funding is identical for all structures, whether private — for-profit or not-for-profit — or public. It’s often the cost of accommodation, which is less regulated, that receives media coverage, as it can run to several thousand euros,” emphasizes Nirello. “And it is mainly on this point that we see major disparities, justified by the private for-profit sector by higher real estate costs in urban areas. But it’s mainly because half of these places are owned by companies listed on the stock market, with the profitability demands that this involves,” she continues. And while establishments face rising dependence and care needs among their residents, funding is frozen.

…to the financialization of the elderly

A structure’s resources are determined by its residents’ average level of dependency, translated into working time. This is evaluated according to the AGGIR scale (“Autonomie Gérontologie Groupes Iso-Ressources”, or Autonomy Gerontology Iso-Resource Groups): GIR 1 and 2 correspond to total or severe dependence, GIR 6 to people who are completely independent. Nearly half of nursing home residents belong to GIR 1 and 2, and more than a third to GIR 3 and 4. “While for-profit homes position themselves for very high dependence, public and SSE establishments seek a more balanced mix. They are often older and have difficulty investing in new facilities adapted to highly dependent residents,” indicates Nirello. Paradoxically, the ratio of care staff to residents differs greatly according to a nursing home’s status: 67% in public homes, 53% in private not-for-profit and 49% in private for-profit.

In the context of tightening public purse strings, this goes hand in hand with extreme corner-cutting in care, with each procedure charged for individually. “Elderly care nurses need time to take care of residents: autonomy is fundamentally connected to social interaction,” insists Delouette. The Hospital, Patients, Health, Territories law strengthened competition between the various structures: from 2009, new authorizations for nursing home creation and extension were granted based on calls for projects issued by ARSs. For the researcher, “this system once again places groups of establishments in competition for new locations, as funding is awarded to the best option in terms of price and service quality, no matter its status. We know who wins: 20% public, and 40-40 for private for-profit/not-for-profit. What we don’t know is who responds to these calls for projects. With tightened budgets, is the public sector no longer responding, or is this a choice by regulators in the field?”

What is the future for nursing homes?

“Funding, cutting corners, a managerial view of caring for dependents: the entire system needs to be redesigned. We don’t have solutions, we are making observations,” emphasizes Nirello. But despite promises, reform has been delayed too long. The Elderly and Autonomy law, the most recent effort in this area, was announced by the current government and buried in late 2019, despite two parliamentary reports highlighting the serious aged care crisis (the mission for nursing homes in March 2018 and the Libault report in March 2019).

In 2030, Insee estimates that there will be 108,000 more dependent elderly people; 4 million in total in 2050. How can we prepare for this demographic evolution, currently underway? Just to cover the increased costs of caring for the elderly with loss of autonomy, it would take €9 billion every year until 2030. “We can always increase funding; the question is how we fund establishments. If we continue to try to cut corners on care and tasks, this goes against the social mission of these structures. Should vulnerable people be sources of profit? Is society prepared to invest more in taking care of dependent people?” asks Delouette. “This is society’s choice.” The two researchers are currently working on the management of the pandemic in nursing homes. For them, there is still a positive side to all this: the state of elderly care has never been such a hot topic.

Anne-Sophie Boutaud


AI-4-Child “Chaire” research consortium: innovative tools to fight against childhood cerebral palsy

In conjunction with the GIS BeAChild, the AI-4-Child team is using artificial intelligence to analyze images related to cerebral palsy in children. This could lead to better diagnoses, innovative therapies and progress in patient rehabilitation, as well as a real breakthrough in medical imaging.

The original version of this article was published on the IMT Atlantique website, in the News section.

Cerebral palsy is the leading cause of motor disability in children, affecting nearly two out of every 1,000 newborns. And it is irreversible. The AI-4-Child chaire (French research consortium), managed by IMT Atlantique and the Brest University Hospital, is dedicated to fighting this dreaded condition using artificial intelligence and deep learning, approaches that could eventually revolutionize the field of medical imaging.

“Cerebral palsy is the result of a brain lesion that occurs around birth,” explains François Rousseau, head of the consortium, professor at IMT Atlantique and a researcher at the Medical Information Processing Laboratory (LaTIM, INSERM unit). “There are many possible causes – prematurity or a stroke in utero, for example. This lesion, of variable importance, is not progressive. The resulting disability can be more or less severe: some children have to use a wheelchair, while others can retain a certain degree of independence.”

Created in 2020, AI-4-Child brings together engineers and physicians. The result of a call for ‘artificial intelligence’ projects from the French National Research Agency (ANR), it operates in partnership with the company Philips and the Ildys Foundation for the Disabled, and benefits from various forms of support (Brittany Region, Brest Metropolis, etc.). In total, the research program has a budget of around €1 million for a period of five years.

François Rousseau, professor at IMT Atlantique and head of the AI-4-Child chaire (research consortium)

Hundreds of children being studied in Brest

AI-4-Child works closely with BeAChild*, the first French Scientific Interest Group (GIS) dedicated to pediatric rehabilitation, headed by Sylvain Brochard, professor of physical medicine and rehabilitation (MPR). Both structures are linked to the LaTIM lab (INSERM UMR 1101), housed within the Brest CHRU teaching hospital. The BeAChild team is also highly interdisciplinary, bringing together engineers, doctors, pediatricians and physiotherapists, as well as psychologists.

Hundreds of children from all over France and even from several European countries are being followed at the CHRU and at Ty Yann (Ildys Foundation). By bringing together all the ‘stakeholders’ – patients and families, health professionals and imaging specialists – on the same site, Brest offers a highly innovative approach, which has made it a reference center for the evaluation and treatment of cerebral palsy. This has enabled the development of new therapies to improve children’s autonomy and made it possible to design specific applications dedicated to their rehabilitation.

“In this context, the mission of the chair consists of analyzing, via artificial intelligence, the imagery and signals obtained by MRI, movement analysis or electroencephalograms,” says Rousseau. These observations can be made from the fetal stage or during the first years of a child’s life. The research team is working on images of the brain (location of the lesion, possible compensation by the other hemisphere, link with the disability observed, etc.), but also on images of the neuro-musculo-skeletal system, obtained using dynamic MRI, which help to understand what is happening inside the joints.

‘Reconstructing’ faulty images with AI

But this imaging work is complex. The main pitfall is the poor quality of the images collected, due to patient movement or artifacts during acquisition. So AI-4-Child is trying to ‘reconstruct’ them, using artificial intelligence and deep learning. “We are relying in particular on good-quality views from other databases to achieve satisfactory resolution,” explains the researcher. Eventually, these methods should be applicable to routine images.

Significant progress has already been made. A doctoral student is studying images of the ankle obtained in dynamic MRI and ‘enriched’ by AI with other, static but very high-resolution images. “Despite a rather poor initial quality, we can obtain decent pictures,” notes Rousseau. Significant differences in the shape of the ankle bone structure were observed between patients and are being interpreted with the clinicians. The aim will then be to better understand the origin of these deformations and to propose adjustments to the treatments under consideration (surgery, toxin, etc.).

The second area of work for AI-4-Child is rehabilitation. Here again, imaging plays an important role: during rehabilitation courses, patients’ gait is filmed using infrared cameras and a system of sensors and force plates in the movement laboratory at the Brest University Hospital. The ‘walking signals’ collected in this way are then analyzed using AI. For the moment, the team is in the data acquisition phase.

Several areas of progress

The problem, however, is that a patient often does not walk in the same way during the course and when they leave the hospital. “This creates a very strong bias in the analysis,” notes Rousseau. “We must therefore check the relevance of the data collected in the hospital environment… and focus on improving the quality of life of patients, rather than the shape of their bones.”

Another difficulty is that the data sets available to the researchers are limited to a few dozen images – whereas some AI applications require several million, not to mention the fact that this data is not homogeneous, and that there are also losses. “We have therefore become accustomed to working with little data,” says Rousseau. “We have to make sure that the quality of the data is as good as possible.” Nevertheless, significant progress has already been made in rehabilitation. Some children are able to ride a bike, tie their shoes, or eat independently.

In the future, the AI-4-Child team plans to make progress in three directions: improving images of the brain, observing bones and joints, and analyzing movement itself. The team also hopes to have access to more data, thanks to a European data collection project. Rousseau is optimistic: “Thanks to data processing, we may be able to better characterize the pathology, improve diagnosis and even identify predictive factors for the disease.”

* BeAChild brings together the Brest University Hospital Centre, IMT Atlantique, the Ildys Foundation and the University of Western Brittany (UBO). Created in 2020 and formalized in 2022 (see the French press release), the GIS is the culmination of a collaboration that began some fifteen years ago on the theme of childhood disability.


Plastic waste: transforming a problem into energy

The linear life cycle of plastics, too rarely recycled, exerts a heightened pressure on the environment. The process of gasification makes it possible to reduce this impact by transforming more waste – currently incinerated or left in landfill – into useful resources like hydrogen or biomethane. Javier Escudero, process engineering researcher at IMT Mines Albi, aims to perfect this approach to make it easier to implement locally.

Plastic waste is so abundant, it practically falls like rain. While we know well that it is piling up in landfills and oceans, an American study published in the journal Science has shown that plastic particles are even in the air we breathe. Despite increased levels of pollution, international plastic production continues its explosive growth. At the end of the chain, however, the recycling industry has never managed to keep up with consumption. In France, the average recycling rate for all plastics is 28%, a figure mainly achieved through bottle recycling (54.5% of the total). The vast majority of these materials are therefore incinerated or sent to landfill.

In order to respond efficiently to the plastic crisis, other forms of reuse or recycling must be developed. “Gasification means we can transform this waste into useful energy vectors, while losing as little material as possible”, explains Javier Escudero, Process Engineering Researcher at IMT Mines Albi. It is an alternative that contributes to a circular economy approach.

Some plastics not so fantastic

Rigid plastics used for bottles are generally made from a single material, which makes it easier to recycle them. For plastic films, which represent 40% of waste deposits, this is not the case. They are made from a multilayer combination of various plastics, such as polyethylene, polyurethane, and so on, sometimes joined with other materials like cardboard. The complex configuration of chemicals makes recycling such packaging too expensive. This means that in recycling centers, these products are overwhelmingly used for solid recovered fuel (SRF) – non-hazardous waste used for energy production. They are incinerated to feed turbines and generate electricity.

Another kind of waste that is ineligible for recycling is packaging from chemical products (industrial and mass market), considered hazardous. Some of the toxic compounds (chlorine, sulfur, metals, etc.) are removed from the surface by pre-washing. However, certain atoms are absorbed into the material and cannot be removed this way. This is where the advantages of gasification come in. “It makes it possible to process all plastics – SRF and contaminated ones – with less pre-washing beforehand, as well,” emphasizes Escudero.

Moreover, this process has a greater capacity for recycling plastic waste than incineration, as it produces chemical compounds that can be reused by industry. The synthesis gases can be burnt to generate energy (heat, electricity) with a better yield than combustion. They can also be reprocessed and stored in the form of gas to be used as fuel (biomethane, hydrogen). To achieve this, one of the challenges of the research is to observe the influence of pollutants, and therefore of the composition of plastics, on the products obtained from gasification.

Transforming materials down to the last crumb

Ground-up waste is compacted in the form of pellets, all the same size, to facilitate their transformation into gas in the gasifier. But if you want to recycle as much waste as possible, you need to adapt the gasification operating parameters, depending on the types of plastics contained in the pellets. For example, processing at a low temperature will break the long chains of polymers in plastic films. The molecules are then broken up again in the next step, as is done in petrochemistry. This produces a wide variety of products: hydrogen, methane, acetylene, and heavier molecules as well.

Processing at a higher temperature will produce more synthesis gas. However, it also produces more aromatic molecules like benzene and naphthalene. These compounds have a very stable structure and are very difficult to break into useful molecules. They may turn into soot – solids that build up in pipes – representing a significant loss of materials. The objective of Escudero’s research into gasification is therefore to combine the advantages of these two methods of processing, to avoid solid residue forming while producing as much gas as possible.

To do so, the researcher and his team are mainly focusing on gas injection, which breaks the molecular bonds of the materials being processed. Where and at which point in the process should injection take place? In combination with what? How does the material react? These questions, and many others, must be answered to improve the process. The gasifier at the Valthera technological platform, located at IMT Mines Albi and used for the tests, can process around 20 kilograms of material per hour. The process recycles not only the materials but also their energy. “Gasification reactions require energy to occur. This means that we use the energy stored in the materials to power their transformation,” explains the researcher.

Use less, convert more

Hydrogen and biomethane obtained through gasification directly serve the goals of the French energy transition: gasification transforms materials made from fossil fuels into usable energy carriers. However, this process remains restricted to the context of research. “There are still many small aspects to study in designing gasifiers, to make them higher-performing and more mature for a given throughput of material. We are also going to concentrate on purifying the synthesis gases, with the aim of finding even cheaper solutions,” concludes Escudero. Gasification could supplement waste management channels at a local level. However, cost remains the greatest obstacle to its adoption by small industrial actors.

Anaïs Culot

Planning for the worst with artificial intelligence

Given that catastrophic events are by nature rare, it is difficult to prepare for them. However, artificial intelligence offers high-performing modeling and simulation tools, making it an excellent way to design, test and optimize responses to such events. At IMT Mines Alès, Satya Lancel and Cyril Orengo are both undertaking research on emergency evacuations, for events such as a dam failure or a terrorist attack in a supermarket.

“Supermarkets are highly complex environments in which individuals are saturated with information,” explains Satya Lancel, PhD student in Risk Science at Université Montpellier III and IMT Mines Alès. Her thesis, which she started over two years ago, is on the subject of affordance, a psychological concept that states that an object or element in the environment is able to suggest its own use. With this research, she wishes to study the link between the cognitive processes involved in decision-making and the functionality of objects in their environment.

In her thesis, Lancel specifically focuses on affordance in the case of an armed attack within a supermarket. She investigates, for example, how to optimize instructions to encourage customers to head towards emergency evacuation exits. “The results of my research could act as a foundation for future research and be used by supermarket brands to improve signage or staff training, in order to improve evacuation procedures”, she explains.

Lancel and her team obtained funding from the retail brand U to perform their experiments. This agreement allowed them to study the situational and cognitive factors involved in customer decision-making in several U stores. “One thing we did in the first part of my research plan was to observe customer behavior when we added or removed flashing lights at the emergency exits,” she describes. “We observed that with active signage, customers were more likely to decide to head towards the emergency exits than without it,” says the scientist. This result suggests that signage plays a significant role in guiding people’s decision-making, even if they do not know the evacuation procedure in advance.

 “Given that it is forbidden to perform simulations of armed attacks with real people, we opted for a multi-agent digital simulation”, explains Lancel. What is unique about this kind of simulation is that each agent involved is conceptualized as an autonomous entity with its own characteristics and behavioral model. In these simulations, the agents interact and influence each other with their behavior. “These models are now used more and more in risk science, as they are proving to be extremely useful for analyzing group behavior,” she declares.

To develop this simulation, Lancel collaborated with Vincent Chapurlat, digital systems modeling researcher at IMT Mines Alès. “The model we designed is a three-dimensional representation of the supermarket we are working on,” she indicates. In the simulation, aisles are represented by parallelepipeds, while customers and staff are represented by agents defined by points. By observing how agents gather and how the clusters they form move around, interact and organize themselves, it is possible to determine which group behaviors are most common in the event of an armed attack, no matter the characteristics of the individuals.
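
To give a flavor of what such a model involves, here is a toy multi-agent loop in Python: each agent is an autonomous entity with its own position and speed, heading for the nearest exit at every tick. All geometry and parameters are invented; this is an illustrative stand-in, not the model built with Chapurlat.

```python
# Toy multi-agent evacuation: every agent independently moves toward the
# nearest exit each tick. Geometry, speeds and exits are invented.
import random

EXITS = [(0.0, 0.0), (50.0, 30.0)]  # assumed exit coordinates (meters)

class Agent:
    def __init__(self) -> None:
        self.x = random.uniform(0.0, 50.0)
        self.y = random.uniform(0.0, 30.0)
        self.speed = random.uniform(1.0, 1.8)  # meters per tick, per agent
        self.evacuated = False

    def step(self) -> None:
        # Head toward the nearest exit
        ex, ey = min(EXITS, key=lambda e: (e[0] - self.x) ** 2 + (e[1] - self.y) ** 2)
        dx, dy = ex - self.x, ey - self.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= self.speed:
            self.evacuated = True  # agent reached an exit
        else:
            self.x += self.speed * dx / dist
            self.y += self.speed * dy / dist

agents = [Agent() for _ in range(200)]
for tick in range(60):
    for a in agents:
        if not a.evacuated:
            a.step()

print(sum(a.evacuated for a in agents), "of", len(agents), "agents evacuated")
```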

Representing the complexity of reality

Outside of supermarkets, Cyril Orengo, PhD student in Crisis Management at the Risk Science Laboratory at IMT Mines Alès, is studying population evacuation in the event of a dam failure. The case study chosen by Orengo is the Sainte-Cécile-d’Andorge dam, near the city of Alès. Based on digital modeling of the Alès urban area and its individuals, he plans to compare evacuation times for a range of scenarios and map the city roads likely to be blocked. “One of the aims of this work is to build a knowledge base that could be used in the future by researchers working on preventive evacuations,” indicates the doctoral student.

He, too, has chosen to use a multi-agent system to simulate evacuations, as this method makes it possible to combine individual parameters with agents to produce situations that tend to be close to a credible reality. “Among the variables selected in my model are certain socio-economic characteristics of the simulated population,” he specifies. “In a real-life situation, an elderly person may take longer to go somewhere than a young person: the multi-agent system makes it possible to reproduce this,” explains the researcher.

To generate a credible simulation, “you first need to understand the preventive evacuation process,” underlines Orengo, specifying the need “to identify the actors involved, such as citizens and groups, as well as the infrastructure, such as buildings and traffic routes, in order to produce a model to act as a foundation for the digital simulation”. As part of his work, the PhD student analyzed INSEE databases to reproduce the socioeconomic characteristics of the Alès population.

Orengo used a platform specialized in building agent simulations to create his own. “This platform allows researchers without computer programming training to create models, controlling various parameters that they define themselves,” explains the doctoral student. One of the limitations of this kind of simulation is computing power, which means only a certain number of variables can be taken into account. According to Orengo, his model still needs many improvements. These include “integrating individual vehicles, public transport, decision-making processes relating to risk management and a more detailed reproduction of human behaviors”, he specifies.

For Lancel, virtual reality could be an important addition, increasing participants’ immersion in the study. “By placing a participant in a virtual crowd, researchers could observe how they react to certain agents and their environment, which would allow them to refine their research,” she concludes.

Rémy Fauvel


“En route” to more equitable urban mobility, thanks to artificial intelligence

Individual cars represent a major source of pollution. But how can you transition away from using your own car when you live far from the city center, in an area with little access to public transport? Andrea Araldo, researcher at Télécom SudParis, is undertaking a research project that aims to redesign city accessibility, to benefit those excluded from urban mobility.

The transport sector is responsible for 30% of greenhouse gas emissions in France. And when we look more closely, the main culprit appears clearly: individual cars, responsible for over half of the CO2 discharged into the atmosphere by all modes of transport.

To protect the environment, car drivers are therefore thoroughly encouraged to avoid using their car, instead opting for a means of transport that pollutes less. However, this shift is impeded by the uneven distribution of public transport in urban areas. Because while city centers are generally well connected, accessibility proves to be worse on the whole in the suburbs (where walking and waiting times are much longer). This means that personal cars appear to be the only viable option in these areas.

The MuTAS (Multimodal Transit for Accessibility and Sustainability) project, selected by the French National Research Agency (ANR) as part of the 2021 general call for projects, aims to reduce these accessibility inequalities at the scale of large cities. The idea is to provide the keys to offering a comprehensive, equitable and multimodal range of mobility options, combining fixed-route, fixed-schedule public transport with on-demand transport services, such as chauffeured cars or rideshares. These modes of transport could pick up where buses and trains leave off in less-connected zones. “In this way, it is a matter of improving the accessibility of the suburbs, which would allow residents to leave their personal car in the garage and take public transport, thereby contributing to reducing pollution and congestion on the roads,” says Andrea Araldo, researcher at Télécom SudParis and head of the MuTAS project, and formerly a driving school owner and instructor!

Improving accessibility without sending costs sky-high

But how can on-demand mobility be combined with the public transport offering without driving up costs for local authorities? The budget issue remains a central challenge for MuTAS. The idea is not to deploy thousands of on-demand vehicles to improve accessibility, but rather to make public transport more equitable within urban areas, at an equivalent cost (or with a limited increase).

This means that many questions must be answered, while respecting this constraint. In which zones should on-demand mobility services be added? How many vehicles need to be deployed? How can these services be adapted to different times throughout the day? And there are also questions regarding public transport. How can bus and train lines be optimized, to efficiently coordinate with on-demand mobility? Which are the best routes to take? Which stations can be eliminated, definitively or only at certain times?

To resolve this complex optimization issue, Araldo and his teams have put forward a strategy using artificial intelligence, in three phases.

Optimizing a graph…

The first involves modeling the problem in the form of a graph. In this graph, the points correspond to bus stops or train stations, with each line represented by a series of arcs, each with a certain trip time. “What must be noted here is that we are only using real-life, public data,” emphasizes Araldo. “Other research has been undertaken around these issues, but at a more abstract level. As part of MuTAS, we are using openly available, standardized data, provided by several cities around the world, including routes, schedules, trip times etc., but also population density statistics. This means we are modeling real public transport systems.” On-demand mobility is also added to the graph in the form of arcs, connecting less accessible areas to points in the network. This translates the idea of allowing residents far from the city center to get to a bus or train station using chauffeured cars or rideshares.
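
A toy version of such a graph takes only a few lines with the Python networkx library; the stop names and trip times below are invented, not drawn from the project’s datasets.

```python
# Toy transport graph: nodes are stops, arcs carry trip times in minutes,
# and an on-demand arc links a poorly served suburb to the fixed network.
# All names and times are invented for illustration.
import networkx as nx

G = nx.DiGraph()
for a, b, minutes in [("S1", "S2", 4), ("S2", "S3", 6), ("S3", "Center", 5)]:
    G.add_edge(a, b, minutes=minutes, mode="bus")

# On-demand leg (e.g. rideshare) covering the first kilometers
G.add_edge("Suburb", "S1", minutes=12, mode="on_demand")

t = nx.shortest_path_length(G, "Suburb", "Center", weight="minutes")
print(f"Suburb -> Center in {t} min")  # 12 + 4 + 6 + 5 = 27
```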

To optimize travel in a certain area, researchers start by modeling public transport lines with a graph.

…using artificial intelligence

This modeled graph acts as the starting point for the second phase. In this phase, a reinforcement learning algorithm is introduced, a method from the field of machine learning. After several iterations, this is what will determine what improvements need to be made to the network, for example, deactivating stations, eliminating lines, adding on-demand mobility services, etc. “Moreover, the system must be capable of adapting its structure dynamically, according to shifts in demand throughout the day,” adds the researcher. “The traditional transport network needs to be dense and extended during peak hours, but it can contract significantly in off-peak hours, with on-demand mobility taking over for the last kilometers, which is more efficient for lower numbers of passengers.”

And that is not the only complex part. Various decisions influence each other: for example, if a bus line is removed from a certain place, more rideshares or chauffeured car services will be needed to replace it. So, the algorithm applies to both public transport and on-demand mobility. The objective will therefore be to reach an optimal situation in terms of equitable distribution of accessibility.

But how can this accessibility be evaluated? There are multiple ways to do so, but the researchers have chosen two measures well suited to graph optimization. The first is a ‘velocity score’, corresponding to the maximum distance that can be traveled from a departure point in a limited time (30 minutes, for example). The second is a ‘sociality score’, representing the number of people one can meet from a specific area, also within a limited time.
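
Under assumed data, both scores reduce to shortest-path computations on the graph. The sketch below computes a sociality-style score on the toy network from the previous snippet, with an invented population table.

```python
# Sociality-style score on the toy network: population reachable from an
# origin within a 30-minute budget. Populations are invented.
import networkx as nx

G = nx.DiGraph()
for a, b, minutes in [("Suburb", "S1", 12), ("S1", "S2", 4),
                      ("S2", "S3", 6), ("S3", "Center", 5)]:
    G.add_edge(a, b, minutes=minutes)

POPULATION = {"S1": 1200, "S2": 3000, "S3": 8000, "Center": 25000}

def sociality_score(graph: nx.DiGraph, origin: str, budget: int = 30) -> int:
    """Sum the population of every node reachable within `budget` minutes."""
    times = nx.single_source_dijkstra_path_length(graph, origin, weight="minutes")
    return sum(POPULATION.get(n, 0) for n, t in times.items() if 0 < t <= budget)

print(sociality_score(G, "Suburb"))  # all four stops reachable: 37200
```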

In concrete terms, the algorithm takes an indicator as a reference: a measure of the accessibility of the least accessible place in the area. Since the aim is to make transport options as equitable as possible, it will optimize this indicator (‘max-min’ optimization), while respecting certain constraints such as cost. To achieve this, it makes a series of decisions concerning the network, initially at random. Then, at the end of each iteration, by analyzing the flow of passengers, it calculates the associated ‘reward’: the improvement in the reference indicator. The algorithm stops when the optimum is reached, or else after a pre-determined period.
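
Stripped of the learning machinery, the loop described here can be sketched as follows: try a candidate network decision, score the worst-served zone, keep the best configuration. A real reinforcement-learning agent would learn from these rewards rather than search randomly, and everything below, from the zones to the scoring stub, is invented for illustration.

```python
# Max-min optimization loop, heavily simplified: pick a candidate network
# decision, reward = accessibility of the worst-served zone, keep the best.
# A real RL agent would learn a policy from rewards instead of random search.
import random

ZONES = ["north", "south", "east", "west"]

def accessibility(config: frozenset) -> dict:
    """Stub for the simulator: zone scores improve where service is added."""
    rng = random.Random(hash(config))  # deterministic per configuration
    return {z: rng.uniform(0.2, 0.8) + (0.2 if z in config else 0.0)
            for z in ZONES}

best_config, best_reward = None, float("-inf")
for _ in range(500):
    # One decision: which two zones receive an on-demand service (budget cap)
    config = frozenset(random.sample(ZONES, 2))
    reward = min(accessibility(config).values())  # max-min objective
    if reward > best_reward:
        best_config, best_reward = config, reward

print(sorted(best_config), round(best_reward, 3))
```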

This approach allows it to build up knowledge of its environment, associating each network structure (according to the decisions made) with the expected reward. “The advantage of such an approach is that once the algorithm is trained, the knowledge base can be used for another network,” explains Araldo. “For example, I can use the optimization performed for Paris as a starting point for a similar project in Berlin. This saves valuable time compared to traditional methods used to structure transport networks, in which each new project starts from scratch.”

Testing results on (virtual) users in Ile-de-France

Lastly, the final phase aims to validate the results obtained using a detailed model. While the models from the first phase aim to reproduce reality, they only represent a simplified version of it. This is deliberate, given that they are used for many iterations as part of the reinforcement learning process. If they had a very high level of detail, the algorithm would require a huge amount of computing power, or far too much processing time.

The third phase therefore involves first modeling the transport network of an urban area in fine detail (in this case, the Ile-de-France region), still using real-life data, but more detailed this time. To integrate all this information, researchers use a simulator called SimMobility, developed at MIT in a project to which Araldo contributed. The tool makes it possible to simulate the behavior of populations at an individual level, each person represented by an ‘agent’ with their own characteristics and preferences (activities planned during the day, trips to take, desire to reduce walking time or minimize the number of changes, etc.). It builds on the work of Daniel McFadden (Nobel Prize in Economics, 2000) and Moshe Ben-Akiva on ‘discrete choice models’, which make it possible to predict choices between multiple modes of transport.

With the help of this simulator and public databases (socio-demographic studies, road networks, passenger counts, etc.), Araldo and his team, in collaboration with MIT, will generate a synthetic population representing Ile-de-France users, followed by a calibration phase. Once the model faithfully reproduces reality, it will be possible to submit the new optimized transport system to it and simulate user reactions. “It is important to always remember that it’s only a simulation,” reminds the researcher. “While our approach allows us to realistically predict user behavior, it certainly does not correspond 100% to reality. To get closer, more detailed analysis and deeper collaboration with transport management bodies will be needed.”

Nevertheless, results obtained could serve to support more equitable urban mobility and in time, reduce its environmental footprint. Especially since the rise of electric vehicles and automation could increase the environmental benefits. However, according to Araldo, “electric, self-driving cars do not represent a miracle solution to save the planet. They will only prove to be a truly eco-friendly option as part of a multimodal public transport network.”

Bastien Contreras

New technologies to prevent post-operative hernias

Baptiste PILLET, Mines Saint-Etienne – Institut Mines-Télécom

The abdomen experiences intra-abdominal pressure, which varies according to the volume of organs, respiration, muscle activation and any physiological activity. As a consequence, the abdomen must resist forces exerted by this pressure, which can at times be high when coughing or vomiting, or during pregnancy. Certain illnesses such as obesity, paired with high intra-abdominal pressure, can lead to a hernia forming.

A hernia is when an organ, such as the small intestine, pushes through a natural opening, leaving its original cavity. It is a pathological protrusion, most often caused by weakness in the tissue that fails to resist the pressure from the organ. Factors such as obesity or repeatedly carrying heavy loads can increase this internal pressure, thereby making it more likely for the balance between tissue and organs to be disrupted.

It is a common condition, accounting for over 100,000 operations in France in 2020. If a hernia worsens, it can lead to bowel obstruction, which is why surgery is often preferred as a prophylaxis (preventively). Surgery involves reducing the protrusion and returning the intestine to its cavity.

Inguinal hernias are when the hernia is located just above the groin crease, whereas femoral hernias are located below the groin crease. Umbilical hernias occur near the navel, and lastly, epigastric hernias are located between the abdominal muscles, above the navel. In general, femoral hernias are more common in women and more complicated than inguinal hernias, which are more common in men. Umbilical hernias often occur after the umbilical orifice does not close correctly, and are therefore more common in infants.

Reducing hernias after abdominal surgery

After abdominal surgery, the resistance and mechanical behavior of the abdominal wall may be disrupted, which can lead to an incisional hernia (also known as an ‘eventration’). During a laparotomy (vertical incision of the abdomen), the linea alba (the connective tissue between the rectus abdominis muscles) presents areas of weakness after scarring over, which may later reopen. Such incisions of the abdomen may be necessary in around a hundred different operations (organ transplant, cesarean section, etc.), and yet they lead to incisional hernias in up to 11% of cases.

Although at present there is no means to detect and prevent abdominal hernias (natural or incisional), efforts have been made to reduce the rate of complications. Nowadays, in the majority of abdominal reconstructions (during a laparotomy or hernia repair), mesh is inserted between the various layers of muscle to strengthen the abdominal wall and thereby reduce the risk of recurrent hernia or eventration.

When such mesh is not used, the rate of recurrence is around 50%. Approximately 400,000 abdomen repairs using mesh take place each year in Europe, representing a cost of around €3.2 billion. This makes it one of the most common general surgeries, and yet, the rate of recurrence is still far too high (between 14 and 44%). Even 1% fewer recurrences would save €32 million a year.

The reinforcement mesh serves to strengthen areas of weakness during scarring and to fill orifices in order to rebuild the abdominal wall. The surrounding biological tissue then colonizes the implant to return to a state close to the original. At present, mesh is manufactured with resorbable or non-resorbable synthetic fibers, sometimes with derivatives of organic tissue (dermis or submucosa from human, porcine or bovine small intestine). It is characterized by pore size, fiber diameter, thickness, etc., as well as by mechanical characteristics such as its resistance to stretching, bending and rupture.

Better understanding recurrence

Mechanical tests and postoperative monitoring with imaging are being carried out to understand the rate of recurrence, which remains too high. Often the mesh does not match the mechanical behavior of the abdominal wall and therefore fails to reproduce and adapt to it (a mesh that is too rigid, for example). While the mechanical behavior of the mesh and the abdominal wall has been relatively well studied in the literature, there remains a lack of understanding of the mesh’s integration into the abdominal environment. The implanted mesh evolves in its behavior and its effect on the abdominal wall over time, as it integrates into the surrounding tissue. Moreover, it has been observed that the mesh tends to contract, or even deteriorate, over time.

Digital models representing the abdomen and its repair are starting to be developed. Similarly, while more and more innovative research is appearing, there remains a lack of understanding around the high rate of recurrence, due to a shortage of data on these digital models. Specifically, there is no simulation that makes it possible to study and faithfully predict how the abdominal wall reopens, even when the mesh has been implanted.

With the aim of filling this knowledge gap, an animal study is underway to observe the role of mesh in reconstructing the abdominal wall following an incisional hernia.

The mechanical characteristics will be studied at multiple postoperative intervals through mechanical tests, and the integration of the mesh will be closely monitored thanks to medical imaging. At the same time, a digital model will be developed to represent the abdomen and its components (various layers of muscle, connective tissues, etc.) as accurately as possible.

The mechanical data will then be fed into the model to analyze the mesh’s integration into its environment, as well as its effects over time. Depending on the placement of the textile, how it is attached and the physiological activity, the model will be able to predict whether a reopening will occur, where it will arise and whether it will spread. This digital model could allow for a better understanding of the abdominal wall mesh repair process and thereby improve implants, surgical techniques and, consequently, treatment outcomes.

Baptiste Pillet, Lecturer-Researcher and Biomechanics PhD student, Mines Saint-Etienne – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).