France’s elderly care system on the brink of crisis

In his book Les Fossoyeurs (The Gravediggers), independent journalist Victor Castanet challenges the management of the private elderly care facilities of world leader Orpea, reopening the debate around the economic model – whether for-profit or not – of these structures. Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have investigated the consequences of public (de)regulation in the elderly care sector. Here, we decode a system that is currently on the brink of crisis.

The Orpea scandal has been at the center of public debate for several weeks. Rationing of medication and food, a system of kickbacks, corner-cutting on care tasks and detrimental working conditions are all practices that the Orpea group is accused of by Victor Castanet in his book Les Fossoyeurs. And while this abuse in private facilities is currently casting aspersions on the entire sector, professionals, families, NGOs, journalists and researchers have been denouncing such dysfunction for several years.

Ilona Delouette and Laura Nirello, Health Economics researchers at IMT Nord Europe, have been studying public regulation in the elderly care sector since 2015. During their inquiries, the two researchers have met policy makers, directors and employees of these structures. They reached the same conclusion for every kind of establishment: “in this sector, the challenging burden of funding is now omnipresent, and working conditions and services provided have been continuously deteriorating,” emphasizes Laura Nirello. For the researchers, more than anything else, these new revelations about the Orpea group point to an underlying trend: growing competition between these establishments, combined with cost-cutting imperatives, has put more and more distance between them and their original missions.

From providing care for dependents…

In 1997, to deal with the growth in the number of dependent elderly people, the category of nursing homes known as ‘Ehpad’ was created. “Since the 1960s, there has been debate around providing care for the elderly with decreasing autonomy from the public budget. In 1997, the decision was made to remove loss of autonomy from the social security system; it became the responsibility of the departments,” explains Delouette. From then on, public organizations, along with those in the social and solidarity-based economy (SSE), entered into competition with private, for-profit establishments. 25 years later, of the 7,400 nursing homes in France, housing a little under 600,000 residents, nearly 50% are public, around 30% are private not-for-profit (SSE) and around 25% are private for-profit.

The (complex) funding of these structures, regulated by the regional health agencies (ARS) and departmental councils, is organized into three sections: the ‘care’ section (nursing personnel, medical material, etc.), handled by the French public health insurance body Assurance Maladie; the ‘dependence’ section (daily life assistance, carers, etc.), managed by the departments via the personal autonomy benefit (APA); and the final section, accommodation fees, which covers lodgings, activities and catering, and is paid by residents and their families.

“Public funding is identical for all structures, whether private — for-profit or not-for-profit — or public. It’s often the cost of accommodation, which is less regulated, that receives media coverage, as it can run to several thousand euros,” emphasizes Nirello. “And it is mainly on this point that we see major disparities, which the private for-profit sector justifies by higher real estate costs in urban areas. But it’s mainly because half of these places are owned by companies listed on the stock market, with the profitability demands that this involves,” she continues. And while establishments face rising dependence and growing care needs among their residents, funding is frozen.

…to the financialization of the elderly

A structure’s resources are determined by its residents’ average level of dependency, transposed into working time. This is evaluated according to the AGGIR scale (“Autonomie Gérontologie Groupes Iso-Ressources” or Autonomy Gerontology Iso-Resource Groups): GIR 1 and 2 correspond to total or severe dependence, GIR 6 to people who are completely independent. Nearly half of nursing home residents belong to GIR 1 and 2, and more than a third to GIR 3 and 4. “While for-profit homes position themselves for very high dependence, public and SSE establishments seek a more balanced mix. They are often older and have difficulty investing in new facilities adapted to highly dependent residents,” indicates Nirello. Paradoxically, the staff-to-resident ratio differs markedly according to a nursing home’s status: 67% in public homes, 53% in private not-for-profit homes and 49% in private for-profit homes.
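
To make this mechanism concrete, here is a minimal sketch of how a home’s average dependency level can be computed from its residents’ GIR classification, in the spirit of the weighted GIR average used by French regulators to size funding. The weights are the standard published coefficients, but treat them, and the example censuses, as indicative only.

```python
# Minimal sketch: average dependency of a nursing home's residents, in the
# spirit of the weighted GIR average ("GIR Moyen Pondere") used to size
# funding. Weights are the standard coefficients; the censuses are invented.
GIR_WEIGHTS = {1: 1000, 2: 840, 3: 660, 4: 420, 5: 250, 6: 70}

def average_dependency(census):
    """census maps a GIR level (1 = total dependence, 6 = independent)
    to the number of residents classified at that level."""
    residents = sum(census.values())
    points = sum(GIR_WEIGHTS[gir] * n for gir, n in census.items())
    return points / residents

# A home positioned on very high dependence scores higher, and so attracts
# more 'care' and 'dependence' funding than one with a balanced mix.
high_dependence = {1: 30, 2: 25, 3: 10, 4: 10, 5: 3, 6: 2}
balanced_mix = {1: 10, 2: 15, 3: 20, 4: 20, 5: 10, 6: 5}
print(round(average_dependency(high_dependence)))  # ~784
print(round(average_dependency(balanced_mix)))     # ~588
```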

In a context of tightening public purse strings, this goes hand in hand with an extreme rationalization of care, in which each task is timed and accounted for. “Elderly care nurses need time to take care of residents: autonomy is fundamentally connected to social interaction,” insists Delouette. The Hospital, Patients, Health, Territories law strengthened competition between the various structures: from 2009, new authorizations for creating and extending nursing homes were granted through calls for projects issued by the ARSs. For the researcher, “this system once again places groups of establishments in competition for new locations, as funding is awarded to the best option in terms of price and service quality, no matter its status. We know who wins: 20% public, and 40-40 for private for-profit/not-for-profit. What we don’t know is who responds to these calls for projects. With tightened budgets, is the public sector no longer responding, or is this a choice by regulators in the field?”

What is the future for nursing homes?

“Funding, cutting corners, a managerial view of caring for dependents: the entire system needs to be redesigned. We don’t have solutions, we are making observations,” emphasizes Nirello. But despite promises, reform has been delayed for too long. The Elderly and Autonomy law, the most recent effort in this area, was announced by the current government and then buried in late 2019, despite two parliamentary reports highlighting the serious crisis in aged care (the nursing homes mission in March 2018 and the Libault report in March 2019).

Insee estimates that by 2030 there will be 108,000 more dependent elderly people, and 4 million in total by 2050. How can we prepare for this demographic shift, which is already underway? Covering the increased costs of caring for the elderly with loss of autonomy alone would take an extra €9 billion every year until 2030. “We can always increase funding; the question is how we fund establishments. If we continue to try to cut corners on care and tasks, this goes against the social mission of these structures. Should vulnerable people be sources of profit? Is society prepared to invest more in taking care of dependent people?” asks Delouette. “This is society’s choice.” The two researchers are currently working on the management of the pandemic in nursing homes. For them, there is still a positive side to all this: the state of elderly care has never been such a hot topic.

Anne-Sophie Boutaud


AI-4-Child chaire (research consortium): innovative tools to fight childhood cerebral palsy

In conjunction with the GIS BeAChild, the AI-4-Child team is using artificial intelligence to analyze images related to cerebral palsy in children. This could lead to better diagnoses, innovative therapies and progress in patient rehabilitation, as well as a real breakthrough in medical imaging.

The original version of this article was published on the IMT Atlantique website, in the News section.

Cerebral palsy is the leading cause of motor disability in children, affecting nearly two out of every 1,000 newborns. And it is irreversible. The AI-4-Child chaire (French research consortium), managed by IMT Atlantique and the Brest University Hospital, is dedicated to fighting this dreaded disease, using artificial intelligence and deep learning, which could eventually revolutionize the field of medical imaging.

“Cerebral palsy is the result of a brain lesion that occurs around birth,” explains François Rousseau, head of the consortium, professor at IMT Atlantique and a researcher at the Medical Information Processing Laboratory (LaTIM, INSERM unit). “There are many possible causes – prematurity or a stroke in utero, for example. This lesion, of variable importance, is not progressive. The resulting disability can be more or less severe: some children have to use a wheelchair, while others can retain a certain degree of independence.”

Created in 2020, AI-4-Child brings together engineers and physicians. The result of a call for ‘artificial intelligence’ projects from the French National Research Agency (ANR), it operates in partnership with the company Philips and the Ildys Foundation for the Disabled, and benefits from various forms of support (Brittany Region, Brest Metropolis, etc.). In total, the research program has a budget of around €1 million for a period of five years.

François Rousseau, professor at IMT Atlantique and head of the AI-4-Child chaire (research consortium)

Hundreds of children being studied in Brest

AI-4-Child works closely with BeAChild*, the first French Scientific Interest Group (GIS) dedicated to pediatric rehabilitation, headed by Sylvain Brochard, professor of physical medicine and rehabilitation (MPR). Both structures are linked to the LaTIM lab (INSERM UMR 1101), housed within the Brest CHRU teaching hospital. The BeAChild team is also highly interdisciplinary, bringing together engineers, doctors, pediatricians and physiotherapists, as well as psychologists.

Hundreds of children from all over France and even from several European countries are being followed at the CHRU and at Ty Yann (Ildys Foundation). By bringing together all the ‘stakeholders’ – patients and families, health professionals and imaging specialists – on the same site, Brest offers a highly innovative approach, which has made it a reference center for the evaluation and treatment of cerebral palsy. This has enabled the development of new therapies to improve children’s autonomy and made it possible to design specific applications dedicated to their rehabilitation.

“In this context, the mission of the chair consists of analyzing, via artificial intelligence, the imagery and signals obtained by MRI, movement analysis or electroencephalograms,” says Rousseau. These observations can be made from the fetal stage or during the first years of a child’s life. The research team is working on images of the brain (location of the lesion, possible compensation by the other hemisphere, link with the disability observed, etc.), but also on images of the neuro-musculo-skeletal system, obtained using dynamic MRI, which help to understand what is happening inside the joints.

‘Reconstructing’ faulty images with AI

But this imaging work is complex. The main pitfall is the poor quality of the images collected, due to patient movement or artifacts during acquisition. So AI-4-Child is trying to ‘reconstruct’ them, using artificial intelligence and deep learning. “We are relying in particular on good quality views from other databases to achieve satisfactory resolution,” explains the researcher. Eventually, these methods should become applicable to routine images.
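
To illustrate the kind of learning-based reconstruction described here, the sketch below trains a tiny convolutional network to map degraded images back to clean ones, with artifacts crudely simulated as additive noise. It is a generic example under invented data, not the AI-4-Child team’s actual model.

```python
# Illustrative sketch only (not the AI-4-Child model): learn to restore
# degraded images using pairs built from a database of good-quality images.
import torch
import torch.nn as nn

class TinyRestorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a residual correction to the input

model = TinyRestorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    clean = torch.rand(8, 1, 64, 64)                  # stand-in for high-quality slices
    degraded = clean + 0.1 * torch.randn_like(clean)  # simulated motion/noise artifacts
    loss = nn.functional.l1_loss(model(degraded), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```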

Significant progress has already been made. A doctoral student is studying images of the ankle obtained with dynamic MRI and ‘enriched’ using AI with other images that are static but in very high resolution. “Despite a rather poor initial quality, we can obtain decent pictures,” notes Rousseau. Significant differences in the shape of the ankle bone structure have been observed between patients and are being interpreted with the clinicians. The aim will then be to better understand the origin of these deformations and to propose adjustments to the treatments under consideration (surgery, toxin injections, etc.).

The second area of work for AI-4-Child is rehabilitation. Here again, imaging plays an important role: during rehabilitation courses, patients’ gait is filmed using infrared cameras and a system of sensors and force plates in the movement laboratory at the Brest University Hospital. The ‘walking signals’ collected in this way are then analyzed using AI. For the moment, the team is in the data acquisition phase.

Several areas of progress

The problem, however, is that a patient often does not walk in the same way during the course and when they leave the hospital. “This creates a very strong bias in the analysis,” notes Rousseau. “We must therefore check the relevance of the data collected in the hospital environment… and focus on improving the quality of life of patients, rather than the shape of their bones.”

Another difficulty is that the data sets available to the researchers are limited to a few dozen images, whereas some AI applications require several million; not to mention that this data is not homogeneous and that there are also losses. “We have therefore become accustomed to working with little data,” says Rousseau. “We have to make sure that the quality of the data is as good as possible.” Nevertheless, significant progress has already been made in rehabilitation. Some children are able to ride a bike, tie their shoes, or eat independently.

In the future, the AI-4-Child team plans to make progress in three directions: improving images of the brain, observing bones and joints, and analyzing movement itself. The team also hopes to have access to more data, thanks to a European data collection project. Rousseau is optimistic: “Thanks to data processing, we may be able to better characterize the pathology, improve diagnosis and even identify predictive factors for the disease.”

* BeAChild brings together the Brest University Hospital Centre, IMT Atlantique, the Ildys Foundation and the University of Western Brittany (UBO). Created in 2020 and formalized in 2022 (see the French press release), the GIS is the culmination of a collaboration that began some fifteen years ago on the theme of childhood disability.


Plastic waste: transforming a problem into energy

The linear life cycle of plastics, which are too rarely recycled, puts growing pressure on the environment. Gasification makes it possible to reduce this impact by transforming more of this waste, currently incinerated or left in landfill, into useful resources like hydrogen or biomethane. Javier Escudero, process engineering researcher at IMT Mines Albi, aims to perfect this approach to make it easier to implement locally.

Plastic waste is so abundant that it is practically like rain. While we know well that it is piling up in landfills and oceans, an American study published in the journal Science has shown that plastic particles are even in the air we breathe. Despite these rising levels of pollution, international plastic production continues its explosive growth, and at the end of the chain the recycling industry has never managed to keep up with consumption. In France, the average recycling rate for all plastics is 28%, a figure mainly driven by bottle recycling (54.5% of the total). The vast majority of these materials are therefore incinerated or sent to landfill.

In order to respond efficiently to the plastic crisis, other forms of reuse or recycling must be developed. “Gasification means we can transform this waste into useful energy vectors, while losing as little material as possible”, explains Javier Escudero, Process Engineering Researcher at IMT Mines Albi. It is an alternative that contributes to a circular economy approach.

Some plastics not so fantastic

Rigid plastics used for bottles are generally made from a single material, which makes them easier to recycle. This is not the case for plastic films, which represent 40% of the waste stream. They are made from a multilayer combination of various plastics, such as polyethylene and polyurethane, sometimes joined with other materials like cardboard. This complex chemical make-up makes recycling such packaging too expensive. In recycling centers, these products are therefore overwhelmingly turned into solid recovered fuel (SRF), non-hazardous waste used for energy production, and incinerated to feed turbines and generate electricity.

Another kind of waste that is ineligible for recycling is packaging from chemical products (industrial and mass market), which is considered hazardous. Some of the toxic compounds (chlorine, sulfur, metals, etc.) are removed from the surface by prewashing. However, certain atoms are absorbed into the material and cannot be removed this way. This is where the advantages of gasification come in. “It makes it possible to process all plastics, both SRF and contaminated ones, with less prewashing beforehand as well,” emphasizes Escudero.

Moreover, this process has a greater capacity for recycling plastic waste than incineration, as it produces chemical compounds that can be reused by industry. The synthesis gases can be burnt to generate energy (heat, electricity) with a better yield than direct combustion. They can also be reprocessed and stored in the form of gas to be used as fuel (biomethane, hydrogen). To achieve this, one of the challenges of research is to observe the influence of pollutants, and therefore of the composition of plastics, on the products obtained from gasification.

Transforming materials down to the last crumb

Ground-up waste is compacted into pellets, all the same size, to facilitate its transformation into gas in the gasifier. But to recycle as much waste as possible, the gasification operating parameters must be adapted to the types of plastics contained in the pellets. For example, processing at a low temperature will break the long polymer chains in plastic films. The molecules are then broken up again in the next step, as is done in petrochemistry. This produces a wide variety of products: hydrogen, methane, acetylene, and heavier molecules as well.

Processing at a higher temperature will produce more synthesis gas. However, it also produces more aromatic molecules like benzene and naphthalene. These compounds have a very stable structure and are very difficult to break down into useful molecules. They may turn into soot, solids that build up in pipes, representing a significant loss of material. The objective of Escudero’s research into gasification is therefore to combine the advantages of these two processing temperatures, avoiding the formation of solid residue while producing as much gas as possible.

To do so, the researcher and his team are mainly focusing on gas injection, which breaks the molecular bonds of the materials being processed. Where and at what point in the process should injection take place? With which agent? How does the material react? These questions, and many others, must be answered to improve the process. The gasifier at the Valthera technological platform, located at IMT Mines Albi and used for the tests, can process around 20 kilograms of material per hour. The process recycles not only the materials but also their energy. “Gasification reactions require energy to occur. This means that we use the energy stored in the materials to power their transformation,” explains the researcher.

Use less, convert more

Hydrogen and biomethane obtained through gasification directly serve the goals of the French energy transition: gasification transforms materials made from fossil fuels into renewable energy. However, the process remains confined to the research context for now. “There are still many small aspects of gasifier design to study, to make gasifiers higher-performing and more mature for a given throughput of material. We are also going to concentrate on purifying synthesis gases, with the aim of finding even cheaper solutions,” concludes Escudero. Gasification could supplement waste management channels at a local level, but cost remains the greatest obstacle to its adoption by small industrial actors.

Anaïs Culot

Planning for the worst with artificial intelligence

Given that catastrophic events are by nature rare, it is difficult to prepare for them. However, artificial intelligence offers powerful modeling and simulation capabilities, making it an excellent tool for designing, testing and optimizing the response to such events. At IMT Mines Alès, Satya Lancel and Cyril Orengo are both undertaking research on emergency evacuations, for scenarios such as a dam failure or a terrorist attack in a supermarket.

“Supermarkets are highly complex environments in which individuals are saturated with information,” explains Satya Lancel, PhD student in Risk Science at Université Montpellier III and IMT Mines Alès. Her thesis, which she started over two years ago, is on the subject of affordance, a psychological concept that states that an object or element in the environment is able to suggest its own use. With this research, she wishes to study the link between the cognitive processes involved in decision-making and the functionality of objects in their environment.

In her thesis, Lancel specifically focuses on affordance in the case of an armed attack within a supermarket. She investigates, for example, how to optimize instructions to encourage customers to head towards emergency evacuation exits. “The results of my research could act as a foundation for future research and be used by supermarket brands to improve signage or staff training, in order to improve evacuation procedures”, she explains.

Lancel and her team obtained funding from the retail brand U to perform their experiments. This agreement allowed them to study the situational and cognitive factors involved in customer decision-making in several U stores. “One thing we did in the first part of my research plan was to observe customer behavior when we added or removed flashing lights at the emergency exits,” she describes. “We observed that when there was active signage, customers were more likely to decide to head towards the emergency exits than when there was not,” says the scientist. This result suggests that signage plays a significant role in guiding people’s decision-making, even if they do not know the evacuation procedure in advance.

“Given that it is forbidden to perform simulations of armed attacks with real people, we opted for a multi-agent digital simulation,” explains Lancel. What is unique about this kind of simulation is that each agent involved is conceptualized as an autonomous entity with its own characteristics and behavioral model. In these simulations, the agents interact and influence each other with their behavior. “These models are now used more and more in risk science, as they are proving to be extremely useful for analyzing group behavior,” she declares.

To develop this simulation, Lancel collaborated with Vincent Chapurlat, digital systems modeling researcher at IMT Mines Alès. “The model we designed is a three-dimensional representation of the supermarket we are working on,” she indicates. In the simulation, aisles are represented by parallelepipeds, while customers and staff are represented by agents defined by points. By observing how agents gather and how the clusters they form move around, interact and organize themselves, it is possible to determine which group behaviors are most common in the event of an armed attack, no matter the characteristics of the individuals.
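
As a flavor of the multi-agent principle, here is a deliberately crude sketch: agents are points with their own speed, each picks an exit with a probability that depends on whether active signage is present, then walks toward it. The geometry, probabilities and speeds are all hypothetical; the real model is far richer.

```python
# Toy multi-agent evacuation sketch (illustrative only). Active signage
# raises the hypothetical probability of choosing the emergency exit.
import math
import random

ACTIVE_SIGNAGE = True                      # toggle to compare evacuations
EXITS = {"main": (0.0, 0.0), "emergency": (50.0, 30.0)}

class Agent:
    def __init__(self):
        self.pos = [random.uniform(0, 50), random.uniform(0, 30)]
        self.speed = random.uniform(0.8, 1.6)          # meters per time step
        # Hypothetical effect of flashing lights on exit choice:
        p_emergency = 0.7 if ACTIVE_SIGNAGE else 0.3
        name = "emergency" if random.random() < p_emergency else "main"
        self.target = EXITS[name]
        self.evacuated = False

    def step(self):
        dx = self.target[0] - self.pos[0]
        dy = self.target[1] - self.pos[1]
        dist = math.hypot(dx, dy)
        if dist <= self.speed:
            self.evacuated = True                      # reached the exit
        else:
            self.pos[0] += self.speed * dx / dist
            self.pos[1] += self.speed * dy / dist

agents = [Agent() for _ in range(200)]
t = 0
while not all(a.evacuated for a in agents):
    for agent in agents:
        if not agent.evacuated:
            agent.step()
    t += 1
print(f"store evacuated in {t} time steps")
```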

Representing the complexity of reality

Outside of supermarkets, Cyril Orengo, PhD student in Crisis Management at the Risk Science Laboratory at IMT Mines Alès, is studying population evacuation in the event of a dam failure. The case study chosen by Orengo is the Sainte-Cécile-d’Andorge dam, near the city of Alès. Based on digital modeling of the Alès urban area and its individuals, he plans to compare evacuation times for a range of scenarios and to map the city roads that are likely to be blocked. “One of the aims of this work is to build a knowledge base that could be used in the future by researchers working on preventive evacuations,” indicates the doctoral student.

He, too, has chosen a multi-agent system to simulate evacuations, as this method makes it possible to assign individual parameters to agents and thus produce situations close to a credible reality. “Among the variables selected in my model are certain socio-economic characteristics of the simulated population,” he specifies. “In a real-life situation, an elderly person may take longer to go somewhere than a young person: the multi-agent system makes it possible to reproduce this,” explains the researcher.
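
A hedged sketch of that idea: draw a synthetic population from INSEE-style age brackets and give each agent an age-dependent walking speed, so that an elderly agent indeed takes longer to reach safety. All shares, speeds and distances below are invented placeholders.

```python
# Hedged sketch: synthetic population with age-dependent walking speeds.
# Brackets, shares, speeds and the distance are invented placeholders.
import random

AGE_BRACKETS = [            # (bracket, population share, mean speed in m/s)
    ("0-17", 0.20, 1.2),
    ("18-64", 0.58, 1.4),
    ("65+", 0.22, 0.9),
]

def sample_agent():
    r, acc = random.random(), 0.0
    for bracket, share, mean_speed in AGE_BRACKETS:
        acc += share
        if r <= acc:
            speed = max(0.3, random.gauss(mean_speed, 0.15))
            return {"age": bracket, "speed": speed}
    return {"age": "65+", "speed": 0.9}   # guard against rounding

population = [sample_agent() for _ in range(10_000)]
distance = 800.0   # meters to the assembly point
slowest = max(distance / agent["speed"] for agent in population)
print(f"slowest evacuee needs about {slowest / 60:.0f} minutes")
```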

To generate a credible simulation, “you first need to understand the preventive evacuation process,” underlines Orengo, specifying the need “to identify the actors involved, such as citizens and groups, as well as the infrastructure, such as buildings and traffic routes, in order to produce a model to act as a foundation to develop the digital simulation”. As part of his work, the PhD student analyzed INSEE databases to reproduce the socioeconomic characteristics of the Alès population, and used a specialized platform for building agent simulations to create his own. “This platform allows researchers without computer programming training to create models, controlling various parameters that they define themselves,” explains the doctoral student. One limitation of this kind of simulation is computing power, which caps the number of variables that can be taken into account. According to Orengo, his model still needs many improvements, including “integrating individual vehicles, public transport, decision-making processes relating to risk management and more detailed reproduction of human behaviors”.

For Lancel, virtual reality could be an important addition, increasing participants’ immersion in the study. “By placing a participant in a virtual crowd, researchers could observe how they react to certain agents and their environment, which would allow them to refine their research,” she concludes.

Rémy Fauvel


“En route” to more equitable urban mobility, thanks to artificial intelligence

Individual cars represent a major source of pollution. But how can you move away from using your own car when you live far from the city center, in an area with little access to public transport? Andrea Araldo, a researcher at Télécom SudParis, is leading a research project that aims to redesign city accessibility to benefit those excluded from urban mobility.

The transport sector is responsible for 30% of greenhouse gas emissions in France. And when we look more closely, the main culprit appears clearly: individual cars, responsible for over half of the CO2 discharged into the atmosphere by all modes of transport.

To protect the environment, car drivers are therefore thoroughly encouraged to avoid using their car, instead opting for a means of transport that pollutes less. However, this shift is impeded by the uneven distribution of public transport in urban areas. Because while city centers are generally well connected, accessibility proves to be worse on the whole in the suburbs (where walking and waiting times are much longer). This means that personal cars appear to be the only viable option in these areas.

The MuTAS (Multimodal Transit for Accessibility and Sustainability) project, selected by the French National Research Agency (ANR) as part of the 2021 general call for projects, aims to reduce these accessibility inequalities at the scale of large cities. The idea is to provide the keys to offering a comprehensive, equitable and multimodal range of mobility options, combining public transport with fixed routes and schedules with on-demand transport services, such as chauffeured cars or rideshares. These modes of transport could pick up where buses and trains leave off in less-connected zones. “In this way, it is a matter of improving accessibility of the suburbs, which would allow residents to leave their personal car in the garage and take public transport, thereby contributing to reducing pollution and congestion on the roads,” says Andrea Araldo, researcher at Télécom SudParis and head of the MuTAS project (and, incidentally, a former driving school owner and instructor).

Improving accessibility without sending costs sky-high

But how can on-demand mobility be combined with the range of public transport, without leading to overblown costs for local authorities? The budget issue remains a central challenge for MuTAS. The idea is not to deploy thousands of vehicles on-demand to improve accessibility, but rather to make public transport more equitable within urban areas, for an equivalent cost (or with a limited increase).

This means that many questions must be answered, while respecting this constraint. In which zones should on-demand mobility services be added? How many vehicles need to be deployed? How can these services be adapted to different times throughout the day? And there are also questions regarding public transport. How can bus and train lines be optimized, to efficiently coordinate with on-demand mobility? Which are the best routes to take? Which stations can be eliminated, definitively or only at certain times?

To resolve this complex optimization issue, Araldo and his teams have put forward a strategy using artificial intelligence, in three phases.

Optimizing a graph…

The first involves modeling the problem in the form of a graph. In this graph, the points correspond to bus stops or train stations, with each line represented by a series of arcs, each with a certain trip time. “What must be noted here is that we are only using real-life, public data,” emphasizes Araldo. “Other research has been undertaken around these issues, but at a more abstract level. As part of MuTAS, we are using openly available, standardized data, provided by several cities around the world, including routes, schedules, trip times etc., but also population density statistics. This means we are modeling real public transport systems.” On-demand mobility is also added to the graph in the form of arcs, connecting less accessible areas to points in the network. This translates the idea of allowing residents far from the city center to get to a bus or train station using chauffeured cars or rideshares.
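
A minimal sketch of such a graph, using the networkx library with invented stop names and trip times: a fixed bus line contributes timed arcs between consecutive stops, and an on-demand service adds arcs linking a poorly served zone Z to the network.

```python
# Sketch of the phase-1 graph (stop names and trip times invented): stops are
# nodes, scheduled lines contribute timed arcs, and on-demand services add
# arcs that connect a poorly served zone to the fixed network.
import networkx as nx

G = nx.DiGraph()

# A fixed bus line: consecutive stops linked in both directions, in minutes.
bus_line = ["A", "B", "C", "D"]
for u, v in zip(bus_line, bus_line[1:]):
    G.add_edge(u, v, minutes=4, mode="bus")
    G.add_edge(v, u, minutes=4, mode="bus")

# On-demand arcs: suburban zone Z reaches station B by rideshare.
G.add_edge("Z", "B", minutes=9, mode="on_demand")
G.add_edge("B", "Z", minutes=9, mode="on_demand")

# Shortest trip from the suburb to the end of the line, and its duration.
print(nx.shortest_path(G, "Z", "D", weight="minutes"))         # ['Z', 'B', 'C', 'D']
print(nx.shortest_path_length(G, "Z", "D", weight="minutes"))  # 17
```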

To optimize travel in a certain area, researchers start by modeling public transport lines with a graph.

…using artificial intelligence

This modeled graph acts as the starting point for the second phase. In this phase, a reinforcement learning algorithm is introduced, a method from the field of machine learning. After several iterations, this is what will determine what improvements need to be made to the network, for example, deactivating stations, eliminating lines, adding on-demand mobility services, etc. “Moreover, the system must be capable of adapting its structure dynamically, according to shifts in demand throughout the day,” adds the researcher. “The traditional transport network needs to be dense and extended during peak hours, but it can contract significantly in off-peak hours, with on-demand mobility taking over for the last kilometers, which is more efficient for lower numbers of passengers.”

And that is not the only complex part. Various decisions influence each other: for example, if a bus line is removed from a certain place, more rideshares or chauffeured car services will be needed to replace it. So, the algorithm applies to both public transport and on-demand mobility. The objective will therefore be to reach an optimal situation in terms of equitable distribution of accessibility.

But how can this accessibility be evaluated? There are multiple ways to do so, but the researchers have chosen two measures suited to graph optimization. The first is a ‘velocity score’, corresponding to the maximum distance that can be traveled from a departure point in a limited time (30 minutes, for example). The second is a ‘sociality score’, representing the number of people one can meet from a specific area, also within a limited time.
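
Here is a hedged sketch of how the two scores might be computed on such a graph, with trip times on the arcs and population counts on the nodes (all invented). The ‘velocity score’ is approximated by the number of stops reachable within the time budget, a simple stand-in for the true maximum distance.

```python
# Toy accessibility scores on a transit graph (all numbers invented).
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from(
    [("A", "B", 10), ("B", "C", 12), ("C", "D", 15), ("A", "D", 40)],
    weight="minutes",
)
population = {"A": 5000, "B": 20000, "C": 12000, "D": 8000}

def reachable(graph, origin, budget=30):
    # Nodes whose shortest trip time from the origin fits in the budget.
    times = nx.single_source_dijkstra_path_length(
        graph, origin, cutoff=budget, weight="minutes")
    return set(times)

def velocity_score(graph, origin, budget=30):
    # Proxy for "how far can I get in 30 minutes": stops reached.
    return len(reachable(graph, origin, budget)) - 1  # exclude the origin

def sociality_score(graph, origin, budget=30):
    # "How many people can I meet" within the same time budget.
    return sum(population[n] for n in reachable(graph, origin, budget))

print(velocity_score(G, "A"), sociality_score(G, "A"))  # 2 37000
```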

In concrete terms, the algorithm takes an indicator as its reference: a measure of accessibility for the least accessible place in the area. Since the goal is to make transport options as equitable as possible, it will seek to optimize this indicator (‘max-min’ optimization), while respecting certain restrictions such as cost. To achieve this, it makes a series of decisions concerning the network, initially in a random way. Then, at the end of each iteration, by analyzing the flow of passengers, it calculates the associated ‘reward’, i.e. the improvement in the reference indicator. The algorithm stops when the optimum is reached, or else after a pre-determined period.
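
Schematically, the loop might look like the sketch below, with simple random search standing in for the reinforcement learning agent and a placeholder evaluate() function standing in for the passenger-flow simulation; both are assumptions for illustration only.

```python
# Schematic max-min optimization loop (random search standing in for the
# actual RL agent; evaluate() is a placeholder for the flow simulation).
import random

ACTIONS = ["close_station", "reopen_station", "add_on_demand", "remove_line"]
BUDGET = 100.0   # cost cap, arbitrary units

def evaluate(network):
    """Placeholder: simulate passenger flows and return
    (accessibility of the least accessible zone, operating cost)."""
    return random.random(), random.uniform(50.0, 120.0)

network = []                 # sequence of decisions made so far
best_min_access = float("-inf")
for episode in range(500):
    action = random.choice(ACTIONS)        # a trained policy would pick this
    candidate = network + [action]
    min_access, cost = evaluate(candidate)
    reward = min_access - best_min_access  # improvement of the worst-off zone
    if cost <= BUDGET and reward > 0:
        network, best_min_access = candidate, min_access
print(best_min_access, network[-3:])
```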

This approach will allow it to establish knowledge of its environment, associating each network structure (according to the decisions made) with the expected reward. “The advantage of such an approach is that once the algorithm is trained, the knowledge base can be used for another network,” explains Araldo. “For example, I can use the optimization performed for Paris as a starting point for a similar project in Berlin. This represents a precious time-saver compared to traditional methods used to structure transport networks, in which you have to start each new project from zero.”

Testing results on (virtual) users in Ile-de-France

Lastly, the final phase aims to validate the results obtained, using a detailed model. While the models from the first phase aim to reproduce reality, they remain a simplified version of it. This is deliberate, given that they are used across the many iterations of the reinforcement learning process: with a very high level of detail, the algorithm would require a huge amount of computing power, or too much processing time.

The third phase therefore involves first modeling, in fine detail, the transport network of an urban area (in this case, the Ile-de-France region), still using real-life data, but more detailed this time. To integrate all this information, the researchers use a simulator called SimMobility, developed at MIT in a project to which Araldo contributed. The tool makes it possible to simulate the behavior of populations at an individual level, each person represented by an ‘agent’ with their own characteristics and preferences (activities planned during the day, trips to take, desire to reduce walking time or minimize the number of changes, etc.). It builds on the work of Daniel McFadden (Nobel Prize in Economics in 2000) and Moshe Ben-Akiva on ‘discrete choice models’, which make it possible to predict choices among multiple modes of transport.
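
To give a flavor of these discrete choice models, here is a toy multinomial logit in McFadden’s spirit: each mode’s utility decreases with travel time, walking time and number of transfers, and choice probabilities follow from a softmax over the utilities. All coefficients and attributes below are invented.

```python
# Toy multinomial logit mode choice (coefficients and attributes invented).
import math

MODES = {
    #            (travel min, walk min, transfers, mode constant)
    "car":       (25, 2, 0, 0.0),
    "bus+train": (35, 8, 1, -0.2),
    "on_demand": (30, 1, 0, -0.4),
}

def utility(travel, walk, transfers, constant):
    # Negative coefficients: longer, more effortful trips are less attractive.
    return -0.1 * travel - 0.3 * walk - 0.5 * transfers + constant

def choice_probabilities(modes):
    exp_u = {m: math.exp(utility(*attrs)) for m, attrs in modes.items()}
    total = sum(exp_u.values())
    return {m: e / total for m, e in exp_u.items()}

for mode, p in choice_probabilities(MODES).items():
    print(f"{mode}: {p:.0%}")   # car and on-demand dominate this toy example
```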

With the help of this simulator and public databases (socio-demographic studies, road networks, passenger numbers, etc.), Araldo and his team, in collaboration with MIT, will generate a synthetic population representing Ile-de-France users, with a calibration phase. Once the model faithfully reproduces reality, it will be possible to submit the new optimized transport system to it and simulate user reactions. “It is important to always remember that it’s only a simulation,” the researcher points out. “While our approach allows us to realistically predict user behavior, it certainly does not correspond 100% to reality. To get closer, more detailed analysis and deeper collaborations with transport management bodies will be needed.”

Nevertheless, results obtained could serve to support more equitable urban mobility and in time, reduce its environmental footprint. Especially since the rise of electric vehicles and automation could increase the environmental benefits. However, according to Araldo, “electric, self-driving cars do not represent a miracle solution to save the planet. They will only prove to be a truly eco-friendly option as part of a multimodal public transport network.”

Bastien Contreras