
Mathematical tools to meet the challenges of 5G

The arrival of 5G marks a turning point in the evolution of mobile telecommunications standards. In order to cope with the constant increase in data traffic and the requirements and constraints of future uses, teams at Télécom SudParis and Davidson Consulting have joined forces in the AIDY-F2N joint laboratory. Their objective is to provide mathematical and algorithmic solutions to optimize the 5G network architecture.

 

Ahead of the arrival of 5G, expected to be rolled out in Europe in 2020, many scientific barriers remain to be overcome. “5G will concern business networks and certain industrial sectors that have specific needs and constraints in terms of real time, security and mobility. In order for these extremely diverse uses to coexist, 5G must be capable of adapting,” says Badii Jouaber, a telecommunications researcher at Télécom SudParis. To meet this challenge, he is heading a new joint laboratory between Télécom SudParis and Davidson Consulting, launched in early 2020. The main objective of this collaboration is to use artificial intelligence and mathematical modeling to meet the requirements of new 5G applications.

Read on I’MTech: What is 5G?

Configuring custom networks

In order to support levels of service adapted to both business and consumer uses, 5G uses the concept of network slicing. The network is thus split into several virtual “slices” operated from a common shared infrastructure. Each of these slices can be configured to deliver an appropriate level of performance in terms of reliability, latency, bandwidth capacity or coverage. 5G networks will thus have to be adaptable, dynamic and programmable from end to end by means of virtual structures.

“Using slicing for 5G means we can meet these needs simultaneously and in parallel. Each slice of the network will thus correspond to a use, without encroaching on the others. However, this coexistence is very difficult to manage. We are therefore seeking to improve the dynamic configuration of these new networks in order to manage resources optimally. To do so, we are developing mathematical and algorithmic analysis tools. Our models, based on machine learning techniques, among other things, will help us to manage and reconfigure these networks on a permanent basis,” says Badii Jouaber. Slices can therefore be set up, removed, expanded or reduced according to demand.
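To make the idea concrete, here is a minimal sketch of what such dynamic reconfiguration might look like: a toy controller that rescales slice allocations against measured demand on a shared carrier. The slice names, capacity figure and scaling rule are illustrative assumptions, not the AIDY-F2N laboratory’s actual models.

```python
# Illustrative sketch only: a toy controller that rescales 5G network
# slices against measured demand. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str            # e.g. "industrial-iot", "embb-video"
    allocated_prbs: int  # physical resource blocks currently reserved
    demand_prbs: int     # demand estimated from recent traffic

TOTAL_PRBS = 273         # capacity of one 100 MHz 5G NR carrier

def rescale(slices: list[Slice]) -> None:
    """Shrink over-provisioned slices and grow under-provisioned ones,
    keeping the sum of allocations within the shared carrier."""
    budget = TOTAL_PRBS
    # Serve slices with the largest relative shortfall first.
    for s in sorted(slices,
                    key=lambda s: s.demand_prbs / max(s.allocated_prbs, 1),
                    reverse=True):
        s.allocated_prbs = min(s.demand_prbs, budget)
        budget -= s.allocated_prbs

slices = [Slice("industrial-iot", 60, 90), Slice("embb-video", 180, 120)]
rescale(slices)
for s in slices:
    print(s.name, s.allocated_prbs)
```

In a real network, the demand estimates would themselves come from machine learning models, and any reallocation would have to respect each slice’s contracted service level.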

A priority for Davidson Consulting

Anticipating issues with 5G is one of the priorities of Davidson Consulting. The company, which has 3,000 employees, is present in major cities in France and abroad. It was co-founded in 2005 by Bertrand Bailly, a former Télécom SudParis student, and is a major player in telecoms and information systems. “For 15 years we have been providing expertise to operators and manufacturers. The arrival of 5G brings up new issues. For us, it is essential to contribute to addressing these issues by putting our expertise to good use. It’s also an opportunity to support our clients and help them overcome these challenges,” says David Olivier, Director of Research and Development at Davidson. For him, it is thus necessary to take certain industrial constraints into account from the very first stages of research, so that the laboratory’s work can become operational quickly.

“Another one of our goals is to achieve energy efficiency. With the increase in the number of connected objects, we believe it is essential to develop these new models of flexible, ultra-dynamic and configurable mobile networks, to minimize their impact by optimizing energy consumption,” David Olivier continues.

Bringing technology out of the labs for the networks of the future

The creation of the AIDY-F2N joint laboratory is the culmination of several years of collaboration between Télécom SudParis and Davidson Consulting, beginning in 2016 with the support of a thesis supervised by Badii Jouaber. “By initiating a new joint research activity, we aim to strengthen our common research interests around the networks of the future, and the synergies between academic research and industry. Our two worlds have much in common!” says David Olivier enthusiastically.

Under this partnership, the teams at Davidson Consulting and Télécom SudParis will coordinate and pool their skills and research efforts. The company has also provided experts in AI and Telecommunications modeling to co-supervise, with Badii Jouaber, the scientific team of the joint laboratory that will be set up in the coming months. This work will contribute to enhancing the functionality of 5G within a few years.


Do flame-retardant pillows pollute our homes?

Chemical additives called flame retardants prevent our furniture from burning too quickly in the event of a fire. But do these molecules pollute the air inside our homes and offices? To answer this question, an ANSES-ADEME research project was launched in 2019 and IMT Mines Alès is taking part in it. In a testing laboratory that reproduces the conditions of a room at full scale, the researchers are establishing a new methodology for studying pollutants found in furniture.

 

With all of us at home under lockdown orders, the issue of indoor air pollution has taken on new importance. We know that, in general, cleaning products and burning candles pollute our homes, and that it is important to ventilate our living spaces in order to renew the air. But our furniture and other decorative items also contain substances that can affect indoor air quality and health.

Flame-retardant additives are added to upholstered furniture – such as the foams used in seats and bedding – in order to limit the spread of flames in the event of a fire. Until the 2000s, brominated compounds known as PBDEs (polybrominated diphenyl ethers) were used for this purpose. Considered too toxic, PBDEs were prohibited in Europe and in many other countries in 2005 and were replaced by new substances such as organophosphorus compounds.

In order to assess the risks of exposure to these new substances, the French Agency for Food, Environmental and Occupational Health and Safety (ANSES) and the French Environment and Energy Management Agency (ADEME) have teamed up to fund a research project in which Valérie Desauziers and Hervé Plaisance, researchers who study indoor air quality at IMT Mines Alès, are taking part. Their aim is to evaluate the capacity of the organophosphorus compounds used in upholstered furniture to transfer into the air.

Living room or laboratory?

The researchers are working in a real environment in order to study the different ways the molecules can travel. Seats containing flame retardants, made especially for the project, have been installed in two identical unoccupied offices to reproduce an indoor environment. In one of the offices, the seats were first subjected to accelerated aging in order to evaluate the long-term emissions of these materials. Apart from that, everything is identical: “The temperature, humidity level and air change rate are measured throughout the test,” says Valérie Desauziers. “We’re never in an entirely enclosed, airtight environment, and we have to take that into consideration so that our study is as similar as possible to real exposure conditions.”

The organophosphorus compounds studied are distinctive in that they are semi-volatile substances. “The lower the volatility of the compounds, the more analytical problems we encounter, especially during sampling,” explains Hervé Plaisance. These semi-volatile compounds are distributed between the air and interior surfaces: “it’s a rather complex distribution in the indoor environment between gaseous and particulate fractions,” he adds. To assess the behavior of these pollutants inside a room, as well as the risk of exposure, it is not enough to study only their concentration in the air.

To take these properties into account, the researchers have developed an original sampling methodology. They use a small cylindrical glass cell that is simply placed on the material to be characterized. A fiber made from an adsorbent material is then inserted into this cell in order to trap the molecules emitted by the material, and the fiber is then analyzed in the laboratory. “This technique allows us to determine the concentration of pollutants at the interface between the material and the air, and therefore to characterize the materials that are sources of pollutants as well as the deposition of pollutants on the other surfaces inside a room,” says Valérie Desauziers.

Photograph of the cell used to carry out measurements of the transfer of molecules to the air.

Over the course of the nine-month on-site study, measurements of the concentrations of flame retardants are carried out periodically in the air, on the surface of the emitting material and on the floor, walls, ceiling and bay window in an effort to understand the behavior of these semi-volatile molecules.

Choosing the least polluting materials

The sampling and analysis methodology developed is useful for identifying sources of pollutants in an indoor environment. “It can also help us choose and develop materials that emit fewer pollutants,” adds the researcher. “By measuring the concentrations in the air and on the surfaces, we can carry out a mass balance, which will allow us to better understand the transfer dynamics of these molecules and their distribution in indoor environments,” adds Hervé Plaisance. Ultimately, the goal is to be able to model these phenomena, incorporating parameters such as air change, the deposition of molecules on surfaces, and source emission.
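As a rough illustration of the kind of model the researchers are aiming for, the sketch below integrates a single-zone mass balance for a semi-volatile compound, combining a source emission term with losses through air change and surface deposition. All parameter values are hypothetical placeholders, not measurements from the study.

```python
# Illustrative single-zone mass balance for a semi-volatile pollutant:
# dC/dt = E/V - (lambda + kd * A/V) * C
# Every parameter value below is a hypothetical placeholder.
import numpy as np

E = 50.0     # source emission rate from the furniture (ug/h), assumed
V = 30.0     # room volume (m3)
lam = 0.5    # air change rate (1/h)
kd = 0.1     # deposition velocity onto walls, floor, ceiling (m/h), assumed
A = 62.0     # total deposition surface area (m2)

loss = lam + kd * A / V          # total first-order loss rate (1/h)
dt = 0.01                        # time step (h)
t = np.arange(0.0, 72.0, dt)     # simulate three days
C = np.zeros_like(t)             # airborne concentration (ug/m3)
for i in range(1, t.size):
    C[i] = C[i - 1] + dt * (E / V - loss * C[i - 1])

print(f"steady state: {E / (V * loss):.2f} ug/m3, at 72 h: {C[-1]:.2f} ug/m3")
```

A mass balance of this kind is what allows the measured air and surface concentrations to be tied together into transfer dynamics.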

“As far as the current project is concerned, it’s too early to give precise results about the quantities of flame retardants emitted by the upholstered furniture studied, but we have already shown that transfer to the air occurs,” say the researchers. This means that there is a risk of exposure through inhalation, although that risk has not yet been assessed. For this type of molecule, this field study in real conditions is the first of its kind. The research will now have to be continued, drawing on expertise in the field of health hazards in order to assess the health impact of these emerging pollutants.

 

Tiphaine Claveau for I’MTech

 

The IoT needs dedicated security – now

The world is more and more driven by networked computer systems. They dominate almost all aspects of our lives. These systems are connected to the Internet, resulting in a high threat potential. Marc-Oliver Pahl, who holds the Cyber CNI cybersecurity chair at IMT Atlantique, talks about what is at stake when it comes to IoT security.

 

What is the importance of securing the Internet of things (IoT)?

Marc-Oliver Pahl: Securing the IoT is one of the most important challenges I see for computer systems at the moment – perhaps the most important. The IoT is ubiquitous. Most of us interact with it many times every day – only we are not aware of it, as it surrounds us in the background. An example is the water supply system that brings drinking water to our houses. Other examples are the electricity grid, transportation, finance, or health care. The list is long. My examples are critical to our society. They are so-called “critical infrastructures.” If the IoT is not sufficiently protected, critical things can happen, such as water or power outages, or even worse, manipulated processes leading to bacteria in the water, faulty products – cars, for instance – that pose safety risks, and many more.

This strong need for security, combined with the fact that IoT devices are often not sufficiently secured, and at the same time connected to the Internet with all its threat potential, illustrates the importance of the subject. The sheer number of devices, with 41.6 billion connected IoT devices expected by 2025, shows the urgent need for action: the IoT needs the highest security standards possible to protect our society.

Why are IoT networks so vulnerable?

MOP: I want to focus on two aspects here, the “Internet” and the “Things”. As the name Internet of Things says, IoT devices are often connected to the Internet. This connects them to every single user of the Internet, including the bad guys. Through the Internet, the bad guys can comfortably attack an IoT system on the other side of the planet without leaving their sofa. If an attacked IoT system is not sufficiently secured, attackers can succeed and compromise the system, with potentially severe consequences for security, safety, and privacy.

The term “Thing” implies a broad range of entities and applications. Consequently, IoT systems are heterogeneous. This heterogeneity includes vendors, communication technology, hardware, or software. The IoT is a mash-up of such Things, making the resulting systems complex. Securing the IoT is a big challenge. Together with our partners at the chaire Cyber CNI, in our research we contribute every day to making the IoT more secure. Our upcoming digital PhD school from October 5-9, 2020 is a wonderful opportunity to get more insights.

What would be an example challenge that IoT security needs to address and how could it be addressed?

MOP: Taking the two areas from before, one thing we work on is ensuring that access to IoT devices over the Internet is strictly limited. This can be done via diverse mechanisms, including firewalls for defining and enforcing access policies, and Software-Defined Networking for rerouting attackers away from their targets.
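As a minimal sketch of the first mechanism, the toy policy check below admits only flows covered by an explicit allowlist and denies everything else by default. The rules, addresses and ports are invented for the example; a real deployment would enforce such policies in a firewall or SDN controller rather than in application code.

```python
# Toy default-deny access policy for IoT devices, in the spirit of the
# firewall mechanism described above. Rules and addresses are invented.
from ipaddress import ip_address, ip_network

# (source network, destination device, destination port)
ALLOW_RULES = [
    (ip_network("10.0.1.0/24"), ip_address("10.0.2.15"), 8883),  # MQTT over TLS
    (ip_network("10.0.1.0/24"), ip_address("10.0.2.16"), 443),   # HTTPS management
]

def is_allowed(src: str, dst: str, dport: int) -> bool:
    """Default deny: a flow passes only if an explicit rule covers it."""
    return any(
        ip_address(src) in net and ip_address(dst) == device and dport == port
        for net, device, port in ALLOW_RULES
    )

print(is_allowed("10.0.1.7", "10.0.2.15", 8883))     # True: local operator
print(is_allowed("203.0.113.9", "10.0.2.15", 8883))  # False: Internet host denied
```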

Regarding the heterogeneity, we look at how we can enable human operators to see what happens in the ambient IoT systems, how we can support them to express what security properties they want, and how we can build systems “secure-by-design”, so that they enforce the security policies. This is especially challenging as IoT systems are not static.

What makes securing IoT systems so difficult?

MOP: Besides the previously mentioned aspects, connectivity to the Internet and heterogeneity, a third major challenge of the IoT is its dynamicity: IoT systems continuously adapt to their environments. This is part of their job and a reason for their success. From a security-perspective, this dynamicity is a highly demanding challenge. On the one hand, we want to make the systems as restrictive as possible, to protect them as much as possible. On the other hand, we have to give the IoT systems enough room to breathe to fulfill their purpose.

Then, how can you provide security for such continuously changing systems?

MOP: First of all, security-by-design has to be applied properly, resulting in a system that applies all security mechanisms appropriately, in a non-circumventable way. But this is not enough, as we have seen before. The dynamic changes of a system cannot be fully anticipated with security-by-design mechanisms. They require the same dynamics on the defender side.

Therefore, we work on continuous monitoring of IoT systems, automated analysis of the monitoring data, and automated or adaptive defense mechanisms. Artificial Intelligence, or more precisely Machine Learning, can be of great help in this process, as it allows the meaningful processing of possibly unexpected data.
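One common way to sketch this monitor-analyze-react loop is unsupervised anomaly detection: train a detector on normal telemetry, then flag deviating observations for an automated response. The example below uses scikit-learn’s IsolationForest on synthetic stand-in data; the features and thresholds are assumptions, not the chair’s actual pipeline.

```python
# Sketch of automated analysis of IoT monitoring data with unsupervised
# anomaly detection. Features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal telemetry: (packets/s, mean packet size) around a stable regime.
normal = rng.normal(loc=[100.0, 512.0], scale=[10.0, 30.0], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one normal, one resembling an exfiltration burst.
samples = np.array([[103.0, 505.0], [900.0, 1400.0]])
for sample, label in zip(samples, detector.predict(samples)):
    status = "OK" if label == 1 else "ALERT -> trigger adaptive defense"
    print(sample, status)
```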

More on this topic: What is the industrial internet of things?

If we are talking about AI, does this mean future security systems will be fully autonomous?

MOP: Though algorithms can do much, humans have to be in the loop at some point. There are multiple reasons for this, including our ability to analyze certain complex situations even better than machines. With the right data and expertise, humans outperform machines. This includes the important aspect of ethics, which is another story, but central when building algorithms for autonomous IoT systems.

Another reason for the need for humans in the loop is that there is no objective measure of security. By that I mean that the desired security level for a concrete system has to be defined by humans. Only they know what they want. Afterwards, computer systems can take over what they are best at – the extremely fast execution of complex tasks: enforcing that the high-level goals given by human operators are implemented in the corresponding IoT systems.

 


From October 5 to 7: the “IoT meets Security” Summer School

“IoT meets Security” is the 3rd edition of the Future IoT PhD school series. It will be 100% digital to comply with current guidelines regarding the Covid-19 pandemic. It offers insider perspectives from industry and academia on this hot topic. It will cover a broad range of settings, use cases, applications, techniques and security philosophies, from humans to Information Technology (IT) and Operational Technology (OT), and from research to industry.

The organizers come from the top research and education institutions in the heart of Europe, IMT Atlantique and Technische Universität München:

  • Marc-Oliver Pahl (IMT/TUM) is heading the industrial Chaire for Cybersecurity in Critical Networked Infrastructures (cyber-cni.fr) and the CNRS UMR LAB-STICC/IRIS. Together with his team, he is working on the challenges sketched above.
  • Nicolas Montavont (IMT) is heading the IoT research UMR IRISA/OCIF. With his team, he is constantly working on making the IoT more reliable and efficient.

Learn more and register here



A European project to assess the performance of robotic functions

Healthcare, maintenance, the agri-food industry, agile manufacturing. Metrics, a three-year H2020 project launched in January, is organizing robot competitions geared towards these four industries and is developing metrological methods to assess the data collected. TeraLab, IMT’s big data and AI platform, is a partner in this project. Interview with Anne-Sophie Taillandier, Director of TeraLab.

 

What is the aim of the European project Metrics?

Anne-Sophie Taillandier: The aim of Metrics (Metrological Evaluation and Testing of Robots in International Competitions) is threefold: First, it will organize robot competitions geared towards the industries in the four priority fields: healthcare, inspection and maintenance, agri-food, and agile manufacturing. The second goal is to develop metrological methods to assess the data provided by the robots. And lastly, the project will help structure the European robotics community around the competitions in the four priority sectors mentioned before.

Other European projects are organizing robot competitions to encourage scientific progress and foster innovation. How does Metrics stand out from these projects?

AST: One of the main things that makes the Metrics project different is that it aims to directly address the reliability and validity of AI algorithms during the competitions. To do so, the competitions must at once focus on the robot’s behavior in a physical environment and on the behavior of its AI algorithms when they are confronted with correctly qualified and controlled data sets. To the best of our knowledge, this question has not been addressed in previous European robotics competitions.

What are the challenges ahead?

AST: Ultimately, we hope to make the use of assessment tools and benchmarking widespread and to ensure the industrial relevance of challenge competitions. We will also have to attract the attention of industrial players, universities and the general public to the competitions, and ensure that the robots comply with ethical, legal, social and economic requirements.

How will you go about this?

AST: Metrics is developing an evaluation framework based on metrological principles in order to assess the reliability of the different competing robots in a thorough and impartial manner. For each competition, Metrics will organize three field evaluation campaigns (in physical environments) and three cascade evaluation campaigns (on data sets) in order to engage with the AI community. Specific benchmarks for functions and tasks are defined in advance to assess the performance of robotic functions and the execution of specific tasks.

The Metrics partners have called upon corporate sponsors to support the competitions, verify their industrial relevance, contribute to an awareness program and provide effective communication.

How is TeraLab – IMT’s big data and AI platform – contributing to the project?

AST: First of all, TeraLab will provide sovereign, neutral spaces, enabling the Metrics partners and competitors to access data and software components in dedicated work spaces. TeraLab will provide the required level of security to protect intellectual property, assets and data confidentiality.

TeraLab and IMT are also in charge of the Data Management Plan setting out the rules for data management in Metrics, based on best practices for secure data sharing, with contributions from IMT experts in the fields of cybersecurity, privacy, ethics and compliance with GDPR (General Data Protection Regulation).

The consortium brings together 17 partners. Who are they?

AST: Coordinated by the French National Laboratory for Metrology and Testing (LNE), Metrics brings together 17 European partners: higher education and research institutions and organizations with expertise in the field of testing and technology transfer. They contribute expertise in robotic competitions and metrology. The partners provide test facilities and complementary networks throughout Europe in the four priority industrial areas.

What are the expected results?

AST: On a technological level, Metrics should encourage innovation in the field of robotic systems. It will also have a political impact with information for policymakers and support for robotic systems certification. And it will have a tangible socio-economic impact as well, since it will raise public awareness of robotic capacity and lead to greater engagement of commercial organizations in the four priority industries. All of this will help ensure the sustainability, at the European level, of the competition model for robotics systems that address socio-economic challenges.

Learn more about the Metrics project

Interview by Véronique Charlet for I’MTech

 


Innovating to improve radioactive waste management

The PREDIS European project aims to develop innovative activities for the management of radioactive waste, for which there is currently no solution. IMT Atlantique is one of the project’s seven work package leaders and will contribute to research on innovative approaches for the treatment and conditioning of metallic waste. Abdesselam Abdelouas, a researcher working on the project at IMT Atlantique, gives us an overview.

 

Can you describe the broader context for the PREDIS European project?

AA: The management of radioactive waste from the nuclear power cycle, as well as from other industries such as healthcare, radiopharmaceutical production, farming and mining operations, remains a challenge and requires the development of new methods, processes and technologies.

What is the project’s goal?

AA: The aim of PREDIS is to reduce the overall volume of waste destined for disposal and to recycle radioactively contaminated metallic waste. Reducing the volume of waste will make it possible to avoid building costly new disposal sites. The consortium will strive to test and assess innovative approaches (methods, processes, technologies and demonstrators) for the treatment and conditioning of radioactive waste.

How do you plan to achieve this goal and what are the scientific hurdles to overcome?

AA: As part of this project, we’ll be selecting a well-known or new chemical process, improving it and adapting it for greater applicability. This process will also have to meet environmental requirements, in particular in regard to the toxicity of the materials used and the volume of effluents produced by the treatment.

How are IMT Atlantique researchers contributing to this project?

AA: Bernd Grambow and I are radiochemistry professors at IMT Atlantique’s Subatech laboratory, and we are coordinating Work Package 4 on metallic waste treatment. Beyond this coordination mission, we will be conducting research into decontamination and management of treatment effluents.

The PREDIS consortium brings together 48 partners. Which ones are you working with the most?

AA: In Work Package 4, we interact with some twenty mainly European partners, but we work more closely with the CEA (Marcoule), the University of Pannonia (Hungary) and the Czech Technical University (CTU).

What are the next big steps for the project?

AA: The PREDIS management team met on 16 June 2020 to prepare for the kick-off meeting scheduled for September 2020.

Interview by Véronique Charlet for I’MTech

 


The end or beginning of third places?

After our homes and workplaces, the social environments in which we spend time are referred to as third places. These are places for gathering, adventures – but at the same time, of safety, security and control. In the following article Müge Özman[1], Mélissa Boudes[2], Cynthia Srnec (FESP-MGEN)[3], Nicolas Jullien[4] and Cédric Gossart[5], members of IMT’s INESS idea lab, explore our relationship with third places and the challenges and opportunities of digital technology.

 

“Space is a common symbol of freedom in the Western world. Space lies open; it suggests the future and invites action[…]. To be open and free is to be exposed and vulnerable […] Compared to space, place is a calm center of established values. Human beings require both space and place. Human lives are a dialectical movement between shelter and venture, attachment and freedom.”

Yi-Fu Tuan, Space and Place, 1980

Observing that humans clearly spent a lot of time in coffee shops, restaurants, libraries, bars and hair salons, Ray Oldenburg coined the expression “third places” to describe these places other than home or work. Although they are known by different names around the world, they always serve the same essential purpose of socialization, giving people a chance to take a break and to expose themselves in order to open up to others, but in relative security. Who would have thought that a virus would suddenly deprive billions of humans of these local gathering places by turning them into areas of vulnerability?

For many humans, the Covid-19 pandemic has led to an extension of their virtual spaces by contracting physical space to the ultimate shelter: home. From streams of video conferences to “Zoom cocktail parties,” digital technology has helped maintain a sense of continuity in social interactions. At times, it has even strengthened relationships within an apartment building or neighborhood and opened the way to new forms of work. It has also shown that for many meetings, virtuality is enough, saving thousands of tons of CO2 in the process. The world of tomorrow is first and foremost one of sustainable human activity, and for such uses at least, digital technology has proven to be effective.

But at the same time, digital technology has also raised concerns about accessibility and exposing oneself to risks. Exploring these new spaces with peace of mind requires opportunities for refuge and control safeguards. Driven by the algorithms of globalized platforms, digital technology shapes our way of life in terms of communication, information and consumption, without necessarily providing an opportunity to express our attachment to a “place” and our need to shape it.

Between third place and virtuality

Between infinitely large digital spaces and the intimacy of home, will there be nothing else in this “world of tomorrow”? How can systems of refuge spaces such as third places flourish once again and plant the seeds of greater resilience to external shocks? Must we give up digital technologies to save our cherished third places? The success with which the reopening of bars and restaurants has been met shows that this need for in-person socialization has not gone away. But when it comes to communicating face-to-face with individuals outside of our local area, must we choose between physical travel, which is harmful to the planet, and digital interactions, which, controlled by global companies, are outside of individuals’ control and therefore lead to a heightened sense of vulnerability?

Luckily, our choices are not limited to this binary alternative. Many solutions seek to combine the effectiveness of digital technology with the dynamics of local communities. Take FabLabs, for example. With nearly 400 active FabLabs in France, these “fabrication laboratories” make digital technology available in a collaborative way to solve concrete problems. Most of them are small-scale production units open to the general public, with a high degree of flexibility in order to adapt their production processes to the needs of their local communities. From the beginning of the lockdown, the French FabLab Network reorganized its members’ local activities in order to produce and distribute masks and face shields. In particular, they made plans and manufacturing guidelines available through open access and provided technical and logistical advice.

Other local initiatives based on digital platforms controlled by their users have increased their activity to provide services, share resources and help increase resilience in local communities. One such example is Pwiic, a mutual aid platform for neighbors to help one another procure food and medication during the lockdown period. Others include the Open Food Network, which supports the organization of local food systems, and Coopcycle for the delivery of online purchases. The latter helps bicycle delivery workers organize through associations or cooperatives to obtain more dignified working conditions than those of other, better-known platforms. Restrictions on travel and gatherings imposed by lockdown orders may also benefit fair travel platforms such as Les Oiseaux de Passage, which combine human connections and tourism.

The digital tools that create such platforms can also be developed locally and/or by their users. The free software movement has led the way, but there are also initiatives to produce and host free tools locally. This is the aim of the Collective of Alternative, Transparent, Open, Neutral and Solidary Web Hosts (CHATONS), another example of an organization that has been very active both before and during the crisis we are experiencing.

Commons

Even though they are digitally-based, platforms can have significant tangible effects on people and places, for example, the rise in housing prices driven by tourist rentals. How, then, can we preserve the vitality of our local places without giving up the benefits of digital technology?

The first French Forum on Cooperative Platforms highlighted the importance of partnerships between local authorities and digital platforms in order to develop new business models that are sustainable from a social and ecological point of view. As illustrated in the April 2020 presentation to the European Commission by Plateformes en Communs (Commons Platforms), the French network of cooperative platforms, such partnerships can help make local territories more resilient and autonomous by sharing resources through inclusive governance.

These solutions proposed by citizens, social economy organizations and public players represent alternatives to technologies that tend to be monopolistic. Because they offer governance that is closer to local needs – and more importantly, shared – they invent new virtual third places. By pooling time, digital technologies, knowledge and a variety of other resources for the benefit of other citizens, and organizing the collective management of these resources, those behind such initiatives have opened the door to new digital “commons.”

Like physical commons, digital commons are based on co-management, by a portion of the users, of the (digital) resources, so that as many people as possible may benefit. These commons, whether physical or digital, are always threatened by the breakdown of the collective and by competition from private solutions that are more appealing, at least in the short term. But the success of the initiatives we have cited – and many others – and their flexibility and resilience in this time of crisis have proven their effectiveness. They are viable, resilient solutions, and are probably more sustainable in terms of the diffusion, appropriation and control of technologies and the digital space.

What digital technology and these initiatives have shown us is that place is not necessarily physical. It is that which is close and familiar, which we can influence and shape. Even under lockdown, places were not erased from the horizon of human activity and continued to organize and host collective action, in particular in virtual third places. The lockdown was primarily a period of exclusion from third places for leisure, and the backdrop for locally rooted social and digital innovation. The creation and renovation of commons – tangible, intangible and hybrid – initiated by cooperative platforms leads us to rethink the dimensions and potential of third places in the 21st century. But not what makes them so necessary: these are places for exploration, of course, but ones participants can control, places they can help shape and organize, in a word: govern.

[1] Müge Özman is a researcher at the Institut Mines-Télécom Business School.
[2] Mélissa Boudes is a researcher at the Institut Mines-Télécom Business School.
[3] Cynthia Srnec is a researcher at the MGEN Foundation for Public Health and an associate researcher at LITEM.
[4] Nicolas Jullien is a researcher at IMT Atlantique and a member of the GIS Marsouin Scientific Interest Group.
[5] Cédric Gossart is a researcher at the Institut Mines-Télécom Business School.

 


Testing the efficiency of protective masks

A Mines Saint-Étienne and Jean-Monnet University laboratory has been accredited to certify the bacterial filtration efficiency of surgical masks. Jérémie Pourchez, a researcher in healthcare engineering at Mines Saint-Étienne, describes the specific aspects of this expertise. He also explains why it is worth considering opening these tests up to the fabric masks worn by the general public.

 

The Covid-19 pandemic has led to growing demand for surgical masks, and therefore a greater need for tests to assess this type of protective equipment. Since May 2020, a Mines Saint-Étienne and Jean-Monnet University laboratory¹ has been accredited by the French National Agency for Medicines and Health Products (ANSM) to certify the bacterial filtration efficiency of surgical masks.

The agency has specified “that no such facility is available in the country” and that it is therefore highly valuable in the context of the Covid-19 epidemic. Jérémie Pourchez, a researcher at Mines Saint-Étienne, adds that this expertise is also rare at the international level and that the accreditation is temporary. “We’re operational at the scientific level, but under normal circumstances this accreditation requires several months of additional inspection to ensure the quality approach meets COFRAC standards.” In other words: the laboratory environment.

Pathogen aerosols

Surgical masks are medical devices which must meet strict specifications regulated by a European standard (EN 14683). Three parameters must be verified to validate compliance with this standard: microbial cleanliness, relating to a mask’s packaging and storage conditions; breathability; and bacterial filtration. The test bench developed by the laboratory is used to verify the latter parameter. “A surgical mask protects the environment from the wearer. It is usually used to protect the patient when the masked surgeon operates. So we try to measure the efficiency of the mask being worn toward the environment,” says Jérémie Pourchez.

“We place the surgical mask between a bioaerosol generator (which produces water microdroplets measuring 3 micrometers and containing a pathogenic bacterium, Staphylococcus aureus) and a cascade impactor (which makes it possible to collect aerosols in petri dishes depending on their size),” explains the researcher. This allows the scientists to analyze which sizes of particles are not filtered by the mask. The dishes are then incubated at 37°C for at least 24 hours to determine whether the collected bacteria grow in culture. “It isn’t enough to simply show that the pathogen passes through the mask, we have to demonstrate that it is viable and cultivable to determine whether the pathogen that has passed through the mask could infect a host,” says the researcher.
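For illustration, bacterial filtration efficiency (BFE) compares the colony counts collected with and without the mask in place. The sketch below applies that ratio to invented counts; the numbers are examples, not results from the Saint-Étienne test bench.

```python
# Illustrative computation of bacterial filtration efficiency (BFE),
# in the spirit of the EN 14683 ratio. All counts are invented examples.
def bfe(control_cfu: float, test_cfu: float) -> float:
    """BFE (%) = (B - T) / B * 100, where B is the mean colony count
    without the mask and T the count downstream of the tested mask."""
    return (control_cfu - test_cfu) / control_cfu * 100.0

mean_control = 2.2e3   # CFU collected without a mask (positive control)
downstream = 40.0      # CFU collected downstream of the tested mask

print(f"BFE = {bfe(mean_control, downstream):.1f}%")  # 98.2%
```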

Surprising findings for fabric masks

For the researchers, it is also important to perform these efficiency tests on fabric masks (masks for non-medical purposes) now intended for the general public. In the Loire department, many textile industries have started making fabric masks to help combat the pandemic, but until now these masks for the general public have not undergone bacterial filtration tests with a pathogen. “They don’t have to meet the same standards, but they must meet specification SPEC76 defined by AFNOR, and masks for the general public are divided into two major filtration categories, higher than 70% or 90%, whereas surgical masks are higher than 95% or 98%,” adds the Saint-Etienne researcher. Still, some manufacturers are interested in determining the efficiency of their fabric masks by having access to a test with pathogen aerosols.

“Out of the masks that we test here at the laboratory, 15 to 20 percent are fabric masks,” says Jérémie Pourchez, “and certain manufacturers make masks of excellent quality which, in terms of bacterial filtration, are almost equivalent to the least efficient surgical masks.”  The researcher stresses the potential benefits of these fabric masks if they are shown to have good bacterial filtration efficiency. As it stands today, surgical masks, which are made of plastic materials, are much more widely-used. Unfortunately, masks are often disposed of in nature, and this has significant environmental implications.

Reusable, washable masks with bacterial filtration efficiency almost equivalent to that of surgical masks would be beneficial in terms of sustainable development. “And as far as the washable, reusable aspect is concerned, it would be useful to determine methods for washing these masks in a more environmentally-friendly, convenient way than a long cycle at 60°C,” adds Jérémie Pourchez. “We’re working with colleagues from Jean-Monnet University to look for other solutions, and one of the solutions we are considering, for example, is using microwaves to decontaminate masks”. This approach could complement that of the international ReUse consortium, of which the Mines Saint-Étienne team is a member, along with a team from IMT Atlantique. The consortium is working on finding methods for decontaminating and reusing surgical masks.

¹ The laboratory corresponds to two joint research units (UMR), UMR INSERM U1059 Sainbiose and UMR EA 3064 GIMAP.

Tiphaine Claveau for I’MTech

Reducing the duration of mechanical ventilation with a statistical theory

A team of researchers from IMT Atlantique has developed an algorithm that can automatically detect anomalies in mechanical ventilation by using a new statistical theory. The goal is to improve synchronization between the patient and ventilator, thus reducing the duration of mechanical ventilation and consequently shortening hospital stays. This issue is especially crucial for hospitals under pressure due to numerous patients on respirators as a result of the Covid-19 pandemic.

 

Dominique Pastor never imagined that the new theoretical approach in statistics he was working on would be used to help doctors provide better care for patients on mechanical ventilation (MV). The researcher in statistics specializes in signal processing, specifically anomaly detection. His work usually focuses on processing radar signals or speech signals. It wasn’t until he met Erwan L’Her, head of emergencies at La Cavale Blanche Hospital in Brest, that he began focusing the application of his theory, called Random Distortion Testing, on mechanical ventilation. The doctor shared a little known problem with the researcher, which would become a source of inspiration: a mismatch that often exists between patients’ efforts while undergoing MV and the respirator’s output.

Signal anomalies with serious consequences

Respirators – or ventilators – feature a device enabling them to supply pressurized air when they recognize demand from the patient. In other words, the patient is the one to initiate a cycle. Many adjustable parameters are used to best respond to an individual’s specific needs, which change as the illness progresses. These include the inspiratory flow rate and the number of cycles per minute. Standard settings are used at the start of MV and then modified based on flow rate/pressure curves – the famous signal processed by the Curvex algorithm, which resulted from collaboration between Dominique Pastor and Erwan L’Her.

Patient-ventilator asynchronies are defined as time lags between the patient’s inspiration and the ventilator’s flow rate. For example, the device may not detect a patient’s demand for air because the trigger threshold is set too high, which leads to ineffective inspiratory effort. Asynchrony can also take the form of double triggering, when the ventilator generates two cycles for a single patient inspiratory effort. The patient may also not have time to completely empty their lungs before the respirator begins a new cycle, leading to dynamic hyperinflation of the lungs, also known as intrinsic PEEP (positive end-expiratory pressure).


Example of ineffective inspiratory effort: patient demand does not result in insufflation.

 


Example of double triggering: a single inspiratory effort results in two ventilator insufflations within a short time span.

 


Example of positive end expiratory pressure: the next ventilator insufflation occurs before the flow has returned to zero at the end of expiration.

 

These patient-ventilator anomalies are believed to be very common in clinical practice. They have serious consequences, ranging from patient discomfort to increased respiratory efforts that can lead to invasive ventilation, i.e. intubation. They entail an increase in the duration of mechanical ventilation, with more frequent weaning failure (weaning being the end of MV) and therefore longer hospital stays.

However, the number of patients in need of mechanical ventilation has skyrocketed with the Covid-19 pandemic, while the number of health care workers, respirators and beds has only moderately increased, which at times gives rise to difficult ethical choices. A reduction in the duration of ventilation would therefore be a significant advantage, both for the current situation and in general, since respiratory diseases are becoming increasingly common, especially with the aging of the population.

A statistical model that adapts to various signals

Patient-ventilator asynchronies result in visible anomalies in air flow rate and pressure curves. These curves model the series of inspiratory phases, when pressure increases and expiratory phases, when it decreases, with inversion of the air flow. Control monitors for most next-generation devices display these flow rate and pressure curves. The anomalies are visible to the naked eye, but this requires regular monitoring of the curves, and a doctor to be present who can adjust the ventilator settings. Dominique Pastor and Erwan L’Her had a common objective: develop an algorithm that would detect certain anomalies automatically. Their work was patented under the name Curvex in 2013.

“The detection of an anomaly represents a major deviation from the usual form of a signal. We chose an approach called supervised learning by mathematical modeling,” Dominique Pastor explains. One characteristic of his Random Distortion Testing theory is that it makes it possible to detect signal anomalies with very little prior knowledge. “Often, the signal to be processed is not well known, as in the case of MV, since each patient has unique characteristics, and it is difficult to obtain a large quantity of medical data. The usual statistical theories have difficulty taking into account a high degree of uncertainty in the signal. Our model, on the other hand, is generic and flexible enough to handle a wide range of situations.”

Dominique Pastor first worked on intrinsic PEEP detection algorithms with PhD student Quang-Thang Nguyen, who helped to find solutions. “The algorithm is a flow rate signal segmentation method used to identify the various breathing phases and calculate models for detecting anomalies. We introduced an adjustable setting (tolerance) to define the deviation from the model beyond which an event is flagged as an anomaly,” Dominique Pastor explains. According to the researcher from IMT Atlantique, this tolerance is a valuable asset: it can be adjusted by the user, based on their needs, to alter the sensitivity and specificity.
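As a toy illustration of this tolerance idea – a simplified stand-in, not the patented Curvex algorithm or Random Distortion Testing itself – the sketch below flags a breathing cycle as anomalous when its deviation from a nominal flow model exceeds a user-set tolerance, despite measurement noise.

```python
# Toy tolerance-based anomaly detection on a simulated breathing cycle.
# Simplified stand-in for illustration; waveform and tau are invented.
import numpy as np

t = np.linspace(0.0, 3.0, 300)        # one 3-second cycle
nominal = np.sin(np.pi * t / 3.0)     # idealized nominal flow waveform

def is_anomalous(observed: np.ndarray, tau: float) -> bool:
    """Anomaly if the distortion (RMS deviation from the nominal model)
    exceeds the tolerance tau; raising tau lowers the sensitivity."""
    distortion = np.sqrt(np.mean((observed - nominal) ** 2))
    return distortion > tau

rng = np.random.default_rng(1)
healthy = nominal + rng.normal(0.0, 0.02, t.size)  # measurement noise only
asynchronous = nominal.copy()
asynchronous[150:220] -= 0.6                       # notch resembling an ineffective effort

print(is_anomalous(healthy, tau=0.1))        # False
print(is_anomalous(asynchronous, tau=0.1))   # True
```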

The Curvex platform not only processes flow data from ventilators, but also a wide range of physiological signals (electrocardiogram, electroencephalogram). A ventilation simulator was included, with settings that can be adjusted in real-time, in order to test the algorithms and perform demonstrations. By modifying certain pulmonary parameters (compliance, airway resistance, etc.) and background noise levels, different signal anomalies (intrinsic PEEP, ineffective inspiratory effort, etc.) appear randomly. The algorithm detects and characterizes them. “In terms of methodology, it is important to have statistical signals that we can control in order to make sure it is working and then move on to real signals,” Dominique Pastor explains.

The next step is to create a proof of concept (POC) by developing electronics to detect anomalies in ventilatory signals, to be installed in emergency and intensive care units and used by health care providers. The goal is to provide versatile equipment that could adapt to any ventilator. “The theory has been expanding since 2013, but unfortunately the project has made little progress from a technical perspective due to lack of funding. We now hope that it will finally materialize, in partnership with a laboratory or designers of ventilators, for example. I think this is a valuable use of our algorithms, both from a scientific and a medical perspective,” says Dominique Pastor.

By Sarah Balfagon for I’MTech.

Learn more:

– Mechanical ventilation system monitoring: automatic detection of dynamic hyperinflation and asynchrony. Quang-Thang Nguyen, Dominique Pastor, François Lellouche and Erwan L’Her


 


Covid-19 crisis management maps

The prefecture of the Tarn department worked with a research team from IMT Mines Albi to meet their needs in managing the Covid-19 crisis. Frédérick Benaben, an industrial engineering researcher, explains the tool they developed to help local stakeholders visualize the necessary information and facilitate their decision-making.

 

“The Covid-19 crisis is original and new, because it is above all an information crisis,” says Frédérick Benaben, a researcher in information system interoperability at IMT Mines Albi. Usually, crisis management involves complex organization to get different stakeholders to work together. This has not been the case in the current health crisis. The difficulty here lies in obtaining information: it is important to know who is sick, where the sick people are and where the resources are. The algorithmic crisis management tools that Frédérick Benaben’s team has been working on are thus ill-suited to current needs.

“When we were contacted by the Tarn prefecture to provide them with a crisis management tool, we had to start almost from scratch,” says the researcher. This crisis is not so complex in its management that it requires the help of artificial intelligence, but it is so widespread that it is difficult to display all the information at once. The researchers therefore worked on a tool that provides both a demographic visualization of the territory and the optimization of volunteer workers’ routes.

The Tarn team was able to make this tool available quickly and thus save a considerable amount of time for stakeholders in the territory. The success of this project also lies in the cohesion at the territorial level between a research establishment and local stakeholders, reacting quickly and effectively to an unprecedented crisis. The prefecture wanted to work on maps to visualize the needs and resources of the department, and that is what Frédérick Benaben and his colleagues, Aurélie Montarnal, Julien Lesbegueries and Guillaume Martin provided them with.

Visualizing the department

The first requirement was to be able to visualize the needs of the municipalities in the department. It was then necessary to identify the people most at risk of being affected by the disease. Researchers drew on INSEE’s public data to pool information such as age or population density. “The aim was to divide the territory into municipalities and cantons in order to diagnose fragility on a local scale,” explains Frédérick Benaben. For example, there are greater risks for municipalities whose residents are mostly over 65 years of age.

The researchers therefore created a map of the department with several layers that can be activated to visualize the different information. One showing the fragility of the municipalities, another indicating the resilience of the territory – based, for example, on the number of volunteers. By identifying themselves on the prefecture’s website, these people volunteer to go shopping for others, or simply to keep in touch or check on residents. “We can then see the relationship between the number of people at risk and the number of volunteers in a town, to see if the town has sufficient resources to respond,” says the researcher.

Some towns with a lot of volunteers appear mostly in green; those with a lack of volunteers are very red. “This gives us a representation of the Tarn as a sort of paving with red and green tiles, the aim being to create a uniform color by associating the surplus volunteers with those municipalities which need them,” says Frédérick Benaben.
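As an illustration of the principle, the sketch below colors municipalities by comparing at-risk residents with registered volunteers. The figures and the coverage target are invented, not the prefecture’s data.

```python
# Toy version of the red/green paving: compare at-risk residents with
# available volunteers per municipality. All figures are invented.
municipalities = {
    # name: (residents over 65, registered volunteers)
    "Albi":    (12000, 310),
    "Castres": (9800, 95),
    "Gaillac": (3100, 24),
}

VOLUNTEERS_NEEDED_PER_100 = 2.0  # hypothetical coverage target

for name, (at_risk, volunteers) in municipalities.items():
    needed = at_risk * VOLUNTEERS_NEEDED_PER_100 / 100.0
    color = "green" if volunteers >= needed else "red"
    print(f"{name}: {volunteers}/{needed:.0f} volunteers -> {color}")
```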

This territorial visualization tool offers a simple and clear view to local stakeholders to diagnose the needs of their towns. With this information in hand it is easier for them to make decisions to prepare or react. “If a territory is red, we know that the situation will be difficult when the virus hits,” says the researcher. The prefecture can then allocate resources for one of these territories, for example by requisitioning premises if there is no emergency center in the vicinity. It may also include decisions on communication, such as a call for volunteers.

Optimizing routes

This dynamic map is continuously updated with new data, such as the registration of new volunteers. “There is a very contemplative aspect and a more dynamic aspect that optimizes the routes of volunteers,” says Frédérick Benaben. There are many parameters to be taken into account when deciding on routes and this can be a real headache for the employees of the prefecture. Moreover, these volunteer routes must also be designed to limit the spread of the epidemic.

The needs of people who are ill or at risk must be matched with the skills of the volunteers. Some residents ask for help with errands or gardening, but others also need medical care or help with personal hygiene that requires special skills. It is also necessary to take into account the ability of volunteers to travel, whether by vehicle, bicycle or on foot. With regard to Covid-19, it is also essential to limit contact and reduce the perimeter of the routes as much as possible.

“With this information, we can develop an algorithm to optimize each volunteer’s routes,” says the researcher. This is of course personal data, to which the researchers do not have access. They have tested the algorithm with fictitious values to ensure it works when the prefecture enters the real data.
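One standard way to sketch such matching is as an assignment problem: pair each request with the nearest skill-compatible volunteer while minimizing total travel. The example below uses SciPy’s linear_sum_assignment on fictitious data; the article does not specify which algorithm the team actually uses.

```python
# Sketch of volunteer-request matching as an assignment problem,
# on fictitious data mirroring how real entries might look.
import numpy as np
from scipy.optimize import linear_sum_assignment

distances_km = np.array([   # rows: volunteers, columns: requests
    [1.2, 7.5, 3.1],
    [4.0, 0.8, 6.2],
    [2.9, 5.4, 0.6],
])
# Skill compatibility: a volunteer lacking the required skill (e.g.
# medical care) gets an effectively infinite cost for that request.
compatible = np.array([
    [True, True, False],
    [True, True, True],
    [False, True, True],
])
cost = np.where(compatible, distances_km, 1e9)

volunteers, requests = linear_sum_assignment(cost)  # minimize total distance
for v, r in zip(volunteers, requests):
    print(f"volunteer {v} -> request {r} ({distances_km[v, r]} km)")
```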

“The interest of this mapping solution lies in the possibilities for development,” says Frédérick Benaben. Depending on the available data, new visualization layers can be added. “Currently we have little or no data on those who are contaminated, or at risk of dangerous contamination, and remain at home. If we had this data we could add a new layer of visualization and provide additional support for decision making. We can configure as many layers of visualization as we want.”

 Tiphaine Claveau for I’MTech


Gaia-X: a sovereign, interoperable European cloud network

France and Germany have unveiled the Gaia-X project, which aims to harmonize cloud services in Europe to facilitate data sharing between different parties. It also seeks to reduce companies’ dependence on cloud service providers, which are largely American. For Europe, this project is therefore an opportunity to regain sovereignty over its data.

 

“When a company chooses a cloud service provider, it’s a little bit like when you accept terms of service or sale: you never really know how you’ll be able to change your service or how much it will cost.” Anne-Sophie Taillandier uses this analogy to illustrate the challenges companies currently face in relying on cloud service providers. As director of TeraLab, IMT’s platform specializing in data analysis and AI, she is contributing to the European Gaia-X project, which aims to introduce transparency and interoperability into cloud services in Europe.

Initiated by German Economy Minister Peter Altmaier, Gaia-X currently brings together ten German founding members and ten French founding members, including cloud service providers and major users of these services, of all sizes and from all industries. Along with these companies, a handful of academic players specialized in research in digital science and technology – including IMT – are also taking part in the project. This public-private consortium is seeking to develop two types of standards to harmonize European cloud services.

First of all, it aims to introduce technical standards to harmonize practices among various players. This is an important condition to facilitate data and software portability. Each company must be able to decide to switch service providers if it so wishes, without having to modify its databases to make them compatible with a new service. The standardization of the technical framework for every cloud service is a key driver to facilitate the movement of data between European parties.

Environmental issues illustrate the significance of this technical problem. “In order to measure the environmental impact of a company’s operations, its data must be combined with that of its providers and, possibly, its customers,” explains Anne-Sophie Taillandier, who, for a number of years, has been leading research at TeraLab into the issues of data transparency and portability. “If each party’s data is hosted on a different service, with its own storage and processing architecture, they will first have to go through a lengthy process in order to harmonize the data spaces.” This step is currently a barrier for organizations that lack either financial resources or skills, such as small companies and public organizations.

Also read on I’MTech: Data sharing: an important issue for the agricultural sector

In addition to technical standards, the members of the Gaia-X partnership are also seeking to develop a regulatory and ethical framework for cloud service stakeholders in Europe. The goal is to bring clarity to contractual relationships between service providers and customers. “SMEs don’t have the same legal and technical teams as large companies,” says Anne-Sophie Taillandier. “When they enter into an agreement with a cloud service provider, they don’t have the resources to evaluate all the subtleties of the contract.”

The consortium has already begun to work on these ethical rules. For example, there must not be any hidden costs when a company wishes to remove its data from a service provider and switch to another provider. Ultimately, this part of the project should give companies the power to choose their cloud service providers in a transparent way. The approach recalls the GDPR, which gives citizens the ability to choose their digital services with greater transparency and to ensure the portability of their personal data when necessary.

Restoring European digital sovereignty

It is no coincidence that the concepts guiding the Gaia-X project evoke those of the GDPR. Gaia-X is rooted in a general European Union trend toward data sovereignty. The initiative is also an integral part of the long-term EU strategy to create a sovereign space for industrial and personal data, protected by technical and legal mechanisms that are also sovereign.

The Cloud Act adopted by the United States in 2018 gave rise to concerns among European stakeholders. This federal law gives local and national law enforcement authorities the power to request access to data stored by American companies, should this data be necessary to a criminal investigation, including when these companies’ servers are located outside the United States. Yet the cloud services market is dominated by American players: together, Amazon, Microsoft and Google hold over half the market share for this industry. For European companies, the Cloud Act poses a risk to the sovereignty of their data.

Even so, the project does not aim to create a new European cloud services leader, but rather to encourage the development of existing players, through its regulations and standards, while harmonizing the practices already in place among the various players. The goal is not to prevent American players, or those in other countries – Chinese giant Alibaba’s cloud service is increasingly gaining ground – from tapping into the European market. “Our goal is to issue standards that respect European values, and then tell anyone who wishes to enter the European market that they may, as long as they play by the rules.”

For now, Gaia-X has adopted an associative structure. In the months ahead, the consortium should be opening up to incorporate other European companies who want to take part. “The project was originally a Franco-German initiative,” says Anne-Sophie Taillandier, “but it is meant to open up to include other European players who wish to contribute.” In line with European efforts over recent years to develop digital technology with a focus on cybersecurity and artificial intelligence, Gaia-X and its vision for a European cloud will rely on joint creation.

 

Benjamin Vignard for I’MTech