Audio and machine learning: Gaël Richard’s award-winning project

Gaël Richard, a researcher in Information Processing at Télécom Paris, has been awarded an Advanced Grant from the European Research Council (ERC) for his project entitled HI-Audio. This initiative aims to develop hybrid approaches that combine signal processing with deep machine learning for the purpose of understanding and analyzing sound.

“Artificial intelligence now relies heavily on deep neural networks, which have a major shortcoming: they require very large databases for learning,” says Gaël Richard, a researcher in Information Processing at Télécom Paris. He believes that “using signal models, or physical sound propagation models, in a deep learning algorithm would reduce the amount of data needed for learning while still allowing for the high controllability of the algorithm.” Gaël Richard plans to pursue this breakthrough via his HI-Audio* project, which won an ERC Advanced Grant on April 26, 2022.

For example, the integration of physical sound propagation models can improve the characterization and configuration of the types of sound analyzed and help to develop an automatic sound recognition system. “The applications for the methods developed in this project focus on the analysis of music signals and the recognition of sound scenes, which is the identification of the recording’s sound environment (outside, inside, airport) and all the sound sources present,” Gaël Richard explains.

Industrial applications

Learning sound scenes could help autonomous cars identify their surroundings. The algorithm would be able to identify the surrounding sounds using microphones. The vehicle would be able to recognize the sound of a siren and its variations in sound intensity. Autonomous cars would then be able to change lanes to let an ambulance or fire engine pass, without having to “see” it in the detection cameras. The processes developed in the HI-Audio project could be applied to many other areas. The algorithms could be used in predictive maintenance to check the quality of parts in a production line. A car part, such as a bumper, is typically checked based on the sound resonance generated when a non-destructive impact is applied.

The other key applications for the HI-Audio project are in the field of AI for music, particularly to assist musical creation by developing new interpretable methods for sound synthesis and transformation.

Machine learning and music

“One of the goals of this project is to build a database of music recordings from a wide variety of styles and different cultures,” Gaël Richard explains. “This database, which will be automatically annotated (with precise semantic information), will expand the research to include less studied or less distributed music, especially from audio streaming platforms,” he says. One of the challenges of this project is that of developing algorithms capable of recognizing the words and phrases spoken by the performers, retranscribing the music regardless of its recording location, and contributing new musical transformation capabilities (style transfer, rhythmic transformation, word changes).

“One important aspect of the project will also be the separation of sound sources,” Gaël Richard says. In an audio file, the separation of sources, which in the case of music are each linked to a different instrument, is generally achieved via filtering or “masking”. The idea is to hide all other sources until only the target source remains. One less common approach is to isolate the instrument via sound synthesis. This involves analyzing the music to characterize the sound source to be extracted in order to reproduce it. For Gaël Richard, “the advantage is that, in principle, artifacts from other sources are entirely absent. In addition, the synthesized source can be controlled by a few interpretable parameters, such as the fundamental frequency, which is directly related to the sound’s perceived pitch,” he says. “This type of approach opens up tremendous opportunities for sound manipulation and transformation, with real potential for developing new tools to assist music creation,” says Gaël Richard.
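To make the “masking” idea more concrete, here is a minimal Python sketch (a toy illustration, not HI-Audio’s actual method): a synthetic mixture is separated in the time-frequency domain with an oracle soft mask, which a real system would have to estimate with a signal model or a neural network.

```python
# Toy time-frequency masking: separate a 440 Hz "instrument" from noise.
# Illustration only; the mask here is computed from the known target,
# whereas a real separation system must estimate it.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
time = np.arange(0, 2.0, 1 / fs)
target = 0.8 * np.sin(2 * np.pi * 440 * time)      # the source we want to isolate
rest = 0.3 * np.random.randn(time.size)            # everything else in the mix
mix = target + rest

f, frames, Z = stft(mix, fs=fs, nperseg=1024)
_, _, Zt = stft(target, fs=fs, nperseg=1024)

# Soft mask: close to 1 where the target dominates, close to 0 elsewhere.
mask = np.abs(Zt) ** 2 / (np.abs(Zt) ** 2 + np.abs(Z - Zt) ** 2 + 1e-12)

_, estimate = istft(mask * Z, fs=fs, nperseg=1024)
print("energy kept after masking:", np.sum(estimate ** 2) / np.sum(mix ** 2))
```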

*HI-Audio will start on October 1st, 2022 and will be funded by the ERC Advanced Grant for five years for a total amount of €2.48 million.

Rémy Fauvel


Encrypting and watermarking health data to protect it

As medicine and genetics make increasing use of data science and AI, the question of how to protect this sensitive information is becoming increasingly important to all those involved in health. A team from the LaTIM laboratory is working on these issues, with solutions such as encryption and watermarking. It has just been accredited by Inserm.

The original version of this article was published on the IMT Atlantique website.

Securing medical data

Securing medical data, and preventing it from being misused for commercial or malicious purposes, distorted or even destroyed, has become a major challenge for both health players and public authorities. This is particularly relevant at a time when progress in medicine (and genetics) is increasingly based on the use of huge quantities of data, particularly with the rise of artificial intelligence. Several recent incidents (cyber-attacks, data leaks, etc.) have highlighted the urgent need to act against this type of risk. The issue also concerns each and every one of us: no one wants their medical information to be accessible to everyone.

“Health data, which is particularly sensitive, can be sold at a higher price than bank data,” points out Gouenou Coatrieux, a teacher-researcher at LaTIM (the Medical Information Processing Laboratory, shared by IMT Atlantique, the University of Western Brittany (UBO) and Inserm), who is working on this subject in conjunction with Brest University Hospital. To enable this data to be shared while also limiting the risks, LaTIM is using two techniques: secure computing and watermarking.

Secure computing, which combines a set of cryptographic techniques for distributed computing along with other approaches, ensures confidentiality: the externalized data is coded in such a way that it is still possible to perform calculations on it. The research organisation that receives the data – be it a public laboratory or private company – can study it, but doesn’t have access to its initial version, which it cannot reconstruct. The data therefore remains protected.
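Secure computing covers several techniques (secure multi-party computation, homomorphic encryption, and others). As a rough illustration of the principle, the toy Python sketch below uses a miniature Paillier cryptosystem, whose ciphertexts can be added together without ever being decrypted; the parameters are deliberately tiny, and this is not necessarily the scheme used by LaTIM.

```python
# Toy additively homomorphic encryption (a miniature Paillier scheme).
# Parameters are far too small for real use; illustration of the principle only.
import math
import secrets

p, q = 10007, 10009                   # toy primes; real keys use primes of ~1024 bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                  # works because the generator g is chosen as n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# The data holder encrypts two values; the computing centre multiplies the
# ciphertexts, which amounts to adding the plaintexts it never sees.
c_sum = (encrypt(120) * encrypt(80)) % n2
print(decrypt(c_sum))                 # -> 200, computed without access to 120 or 80
```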


Gouenou Coatrieux, teacher-researcher at LaTIM (the Medical Information Processing Laboratory, shared by IMT Atlantique, the University of Western Brittany (UBO) and Inserm)

Discreet but effective watermarking

Watermarking (or “tattooing”, from the French tatouage) involves introducing a minor and imperceptible modification into medical images or data entrusted to a third party. “We simply modify a few pixels on an image, for example to change the colour a little, a subtle change that makes it possible to code a message,” explains Gouenou Coatrieux. We can thus embed the identifier of the last person to access the data. This method does not prevent the file from being used, but if a problem occurs, it makes it very easy to identify the person who leaked it. The watermark thus guarantees traceability. It also creates a form of deterrence, because users are informed of this mechanism. This technique has long been used to combat digital video piracy. Encryption and watermarking can also be combined: this is called crypto-watermarking.
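As a rough illustration of the principle described above (and not the LaTIM algorithm, which is designed to be robust and reversible on medical images), the Python sketch below hides a short identifier in the least-significant bits of an image’s pixels.

```python
# Minimal least-significant-bit watermark: hide a short identifier in an image.
# Illustration only; real medical-image watermarking is far more robust than this.
import numpy as np

def embed(image: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()
    assert bits.size <= flat.size, "image too small for this message"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite only the LSB
    return flat.reshape(image.shape)

def extract(image: np.ndarray, length: int) -> bytes:
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Watermark a fake 8-bit grayscale image with the identifier of the last accessor.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed(img, b"user:42")
print(extract(marked, len(b"user:42")))   # -> b'user:42'
```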

Initially, the LaTIM team focused on the protection of medical images. A joint laboratory was thus created with Medecom, a Breton company specialising in this field, which produces software dedicated to radiology.

Multiple fields of application

Subsequently, LaTIM extended its research to the entire field of cyber-health. This work has led to the filing of several patents. A former doctoral student and engineer from the school has also founded a company, WaToo, specialising in data watermarking. A Cyber Health team at LaTIM, the first in this field, has just been accredited by Inserm. This multidisciplinary team includes researchers, research engineers, doctoral students and post-docs, and covers several fields of application: protection of medical images and genetic data, and ‘big data’ in health. In particular, it works on the databases used for AI and deep learning, and on the security of data processing that uses AI. “For all these subjects, we need to be in constant contact with health and genetics specialists,” stresses Gouenou Coatrieux, head of the new entity. “We also take into account standards in the field, such as DICOM, the international standard for medical imaging, and legal issues such as those relating to privacy rights under the European GDPR regulation.”

The Cyber Health team recently contributed to a project called PrivGen, selected by the Labex (laboratory of excellence) CominLabs. The ongoing work, which started with PrivGen, aims to identify markers of certain diseases in a secure manner, by comparing the genomes of patients with those of healthy people, and to analyse some of the patients’ genomes. But the volumes of data and the computing power required to analyse them are so large that the data has to be shared, taken out of its original information systems and sent to supercomputers. “This data sharing creates an additional risk of leakage or disclosure,” warns the researcher. “PrivGen’s partners are currently working to find a technical solution to secure this processing, in particular to prevent patient identification.”

Towards the launch of a chaire (French research consortium)

An industrial chaire called Cybaile, dedicated to cybersecurity for trusted artificial intelligence in health, will also be launched next fall. LaTIM will partner with three other organizations: the Thales group, Sophia Genetics and the start-up Aiintense, a specialist in neuroscience data. With the support of Inserm and the backing of the Regional Council of Brittany, it will focus in particular on securing the training of AI models in health, in order to help with decision-making – screening, diagnoses, and treatment advice. “If we have a large amount of data, and therefore representations of the disease, we can use AI to detect signs of anomalies and set up decision support systems,” says Gouenou Coatrieux. “In ophthalmology, for example, we rely on a large quantity of images of the back of the eye to identify or detect pathologies and treat them better.”

Nuclear energy: outsourcing issues

Since the end of the 20th century, the practice of outsourcing has increased in France. This phenomenon has included strategic sectors, such as the nuclear power industry. Stéphanie Tillement, a researcher in Sociology at IMT Atlantique, has worked on the relationship between safety and subcontracting in the nuclear industry.

“Since the 1970s, we have witnessed an increase in outsourcing in many industrial sectors, particularly for maintenance activities,” says Stéphanie Tillement, a researcher in Sociology at IMT Atlantique. In the book Contracting and Safety, she and other French and international researchers offer a balanced, open-minded analysis of the practice of subcontracting. The book first addresses the nuclear power context. “We wanted to show the diversity of the relationships that exist between nuclear power operators and subcontractors, and in the links between subcontracting and safety,” says Stéphanie Tillement.

“Contrary to popular belief, the term ‘subcontracting’ does not refer to a uniform reality: subcontracting situations vary in terms of the size of the provider company and the duration of the service provider’s presence on-site, for example,” she says. In addition to cases of “nuclear nomads,” who are often associated with subcontracting in the nuclear industry, some subcontracted staff have been working for years, even decades, at the same site, for the same contracting party. While the so-called nuclear nomads perform ad hoc interventions, which cause some to denounce forms of job insecurity, this is not the case for all external providers. The working conditions and social interactions between the contracting authority and service provider therefore vary significantly depending on the type of subcontracting.

High-risk occupations

Outsourcing in the nuclear industry and its effects on the safety and security of facilities and workers has received increased attention in both the political sphere (with the “Pompili” parliamentary committee in 2018) and academia. Annie Thébaud-Mony, the honorary research director of the Inserm Scientific Interest Group on Occupational Cancers, demonstrated that “at French nuclear sites, employees of subcontracting companies were exposed to 80% of the collective ionizing radiation dose during maintenance activities,” Stéphanie Tillement says. In other words, subcontracted employees are more exposed to ionizing radiation than others.

This is linked more to the nature of the outsourced tasks, which are often dangerous because they require intervention in high-risk areas, than it is to the type of protection used or follow-up with subcontracted employees. In addition, the operators of nuclear facilities – such as nuclear reactors or radioactive waste treatment plants – are legally responsible for the safety of their facilities under the terms of the law of June 13, 2006 on transparency and safety in the nuclear sector. This also applies to outsourced activities. In the event of an incident or accident, the operator remains responsible.

One of the major questions posed by the use of subcontracting is that of the monitoring of activities performed by external service providers. In order to ensure that the tasks are carried out in accordance with safety requirements, the operator is required to monitor subcontracted staff. True supervision by contracting authorities implies that they have maintained their industrial technical mastery of the outsourced activities and have allocated the necessary resources (time, human resources) to this supervision. “A major issue for monitoring pertains to the skills of the person performing the monitoring: if they do not master the technique, there is a risk that the monitoring will be reduced to formal checks without taking into account the reality of the activity,” the sociologist explains. In the case of specialty subcontracting, this issue is all the more important since operators hire subcontracted staff who have specific skills which they do not have in-house.

Complex relationships

In the nuclear sector, one example of specific skills that are both scarce and highly sought after is that of welders, whose role is fundamental in maintaining the safety of the equipment. Their work requires a high level of expertise. In the case of specialty subcontracting, the balance of power can therefore be in favor of the service providers, since the operator is dependent on them. They can therefore negotiate more favorable contracts (with less pressure on costs and deadlines, for example).

“Outsourcing poses a more general problem related to the fragmentation of work and organizations, which is more complex due to the multiple interfaces and interdependencies to be managed,” says Stéphanie Tillement. “We often see that companies that choose to outsource part of their activities are primarily concerned with short-term gain,” she explains. “In doing so, they omit an entire series of hidden long-term costs, including the need for the contracting authority to restructure the internal organization in order to ensure the long-term coordination and monitoring of the activities,” the scientist explains. This restructuring can be costly and require significant training in order to ensure the safety and security of workers in the long term.

Rémy Fauvel

Stronger 3D prints

3D printing is a manufacturing process used for both consumer and industrial applications in the aeronautics, automotive, rail and medical industries. The Shoryuken project being developed at IMT Nord Europe aims to improve the mechanical performance of the objects printed using plastic and composite materials. To accomplish this, it combines 3D printing with laser welding technology.

In the industrial world, certain parts of cars, trains, airplanes, prostheses and orthoses are now manufactured using 3D printing. This manufacturing method enables the small-scale production of customized, geometrically complex parts using 3D digital models, without requiring expensive, specifically designed molds. This procedure saves time and materials when producing prototypes and products to be marketed. However, 3D printing has its limits, especially in relation to structural composite materials, which are plastic materials reinforced with fibers with a high level of resistance and rigidity.

3D printing processes that use composites with yarn containing cut reinforcing fibers generally produce materials with relatively weak mechanical properties. In order to improve the mechanical performance of the printed parts, manufacturing processes using yarns reinforced with continuous fibers are now in high demand in the industry. These yarns are made of thermoplastics (heat-sensitive polymers) combined with continuous carbon or glass fibers. During the printing process, the yarns are melted in order to deposit the materials they contain layer by layer. The carbon fibers contained in the thermoplastic yarns do not melt and provide the object with solidity and resistance.

However, the required level of resistance and rigidity can only be obtained in the direction of the fibers, since they are all positioned on a single printing plane. “The current composite 3D printing technology does not allow for the production of parts containing continuous fibrous reinforcements oriented in all the directions desired in space. This is a disadvantage when there are mechanical constraints in three dimensions,” says André Chateau Akue Asseko, researcher in Materials Science at IMT Nord Europe and winner of the Young Researchers call for projects by the French National Research Agency (ANR).

Hybridization of innovative technologies

This is precisely the technological barrier that the new Shoryuken* project seeks to overcome. To accomplish this, the initiative is studying the pairing of 3D printing with laser welding. This combination makes it possible to print two or more components for the same composite part in different printing directions and then use laser welding to assemble them.

“The difficulty stems from the presence of fibers or porosity, which disrupt the laser beam’s path because the material’s heterogeneity introduces thermal and optical diffusion phenomena,” the scientist explains. This assembly process therefore requires that the small areas filled with thermoplastics be treated differently during 3D printing. The laser radiation melts the thermoplastic polymer in a targeted manner, together with the composite material surrounding it. Once they are welded together, the two components become inseparable. This makes it possible to produce objects containing reinforcing fibers positioned in ways that allow them to resist mechanical loads in different directions.

Virtual engineering to optimize production

Modeling and simulation tools integrating multiphysics coupling are being developed to optimize these innovative design and production processes. These tools therefore contain information on the interaction between the laser and materials and their thermal and mechanical behavior. “Virtual engineering makes it possible to define the optimal assembly conditions that will ensure the quality of the welding interface,” says André Chateau Akue Asseko. The software, populated with information on the materials of interest, such as melting points, is used to simulate the behavior of two materials that are welded together, in order to avoid spending too much time and material on 3D printing tests.

The user can therefore adjust the laser parameters in order to achieve an optimal weld right away. “These simulations allow us to identify the optimal temperature and speed ranges for welding,” the researcher explains. The development of this type of tool would allow companies to reduce their development and industrialization costs before production by avoiding potential assembly problems. This would ensure the mechanical performance of the manufactured goods.
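The project’s actual multiphysics models are far more elaborate, but the idea of exploring welding parameters numerically before any physical test can be illustrated with a deliberately crude sketch: a one-dimensional heat-diffusion model that checks whether an assumed laser power brings the weld interface above an assumed melting point (all values below are invented for illustration).

```python
# Deliberately crude 1D heat-diffusion sketch (explicit finite differences).
# Every value (diffusivity, exposure time, heating term, melting point) is an
# assumption made up for illustration, not the project's multiphysics model.
import numpy as np

def interface_peak_temperature(laser_power_w: float) -> float:
    alpha = 1e-7                       # assumed thermal diffusivity (m^2/s)
    length, nx = 2e-3, 101             # 2 mm thick stack discretized on 101 points
    dx = length / (nx - 1)
    dt = 0.4 * dx ** 2 / alpha         # stability condition of the explicit scheme
    T = np.full(nx, 20.0)              # ambient temperature (degrees C)
    mid = nx // 2                      # welding interface in the middle of the stack
    for _ in range(int(0.5 / dt)):     # 0.5 s of laser exposure
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[mid] += laser_power_w * dt * 50.0   # crude local heat deposition term
    return T[mid]

melting_point = 220.0                  # assumed melting temperature of the polymer
for power_w in (5, 10, 20):
    peak = interface_peak_temperature(power_w)
    status = "melts the interface" if peak > melting_point else "insufficient"
    print(f"{power_w} W -> {peak:.0f} degC, {status}")
```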

Read more on I’MTech: 3D printing, a revolution for the construction industry?

Multisectoral applications

 “For this project, we chose to focus on the health sector by producing a prosthetic arm as a demonstrator,” says the scientist, who is currently in contact with companies specialized in prosthesis design. André Chateau Akue Asseko explains that he initially chose to prioritize this sector for pragmatic reasons. “There is strong demand in this field for customized items, adapted to the users’ morphology. The parts are reasonably sized and compatible with the capabilities of our experimental equipment,” the researcher says.

The Shoryuken project will end in 2026. By that time, the future process and digital tool could convince other industries, such as the rail and automotive sectors, of the benefits of customizing parts and tailoring their functionalization for small and medium-scale production runs. For transportation companies, the significantly lighter weights of the parts designed and produced help to cut down on fuel consumption and thereby reduce carbon emissions, which are a key concern in the current global environmental context.

Rémy Fauvel

The ANR JCJC SHORYUKEN project on the “Assembly of Hybrid Thermoplastic and Thermosetting Carbon Composite: Customization of Complex Structures” is funded by the French National Research Agency (ANR) as part of the 2021 Generic Call for Proposals (AAPG 2021 – CE10) on “Industry and Factories of the Future: People, Organization, Technology.”


New Heroism: a paradoxical model of success

Today, the ideals of success cover multiple areas of society, such as work, cinema, and personal life. In his book Le Nouvel Héroïsme, Olivier Fournout, a sociologist and semiologist at Télécom Paris, analyzes the conditions that have allowed the emergence of a model of success that is riddled with paradoxes.

A hero is someone capable of bravery, of feats that reveal extraordinary courage. That is the dictionary definition, in any case. Another dictionary entry defines a hero as an individual worthy of public esteem for their strength of character, their genius, and total dedication to their work or cause. In fiction, the term relates to mythology, to legendary characters who accomplish great feats, and can also designate the central character of literary, dramatic and cinematographic works.

According to Olivier Fournout, a researcher in Sociology at Télécom Paris, the modern approach to the hero intersects all these definitions. In our society, a hero can be Arnaud Beltrame, the policeman who saved a hostage and defended republican values. Emmanuel Macron proclaimed at the singer’s funeral that Johnny Hallyday was also a hero – a star who conveyed an imaginary of rebellion and freedom. It was Emmanuel Macron, too, who declared: “We must return to political heroism” in an interview in August 2017. “Right now, on the roads of France,” reports Olivier Fournout, “there are Carrefour delivery trucks with the slogan, ‘Thank you, heroes’ written on the side, and a photo of the supermarket’s employees”. For the sociologist, “the common use of the word hero to refer to such different people calls our most contemporary modernity into question”.

The matrix of heroism

The heroic model can be found in a myriad of paradoxical orders which seem characteristic of the present time, and which appear in extremely diverse and heterogeneous fields. The whole imaginary of the tragic hero is found in paradoxes that abound today on a multitude of levels. According to the discourses and images in broad circulation, in order to be a hero today, one has to be both “with others and against others, respect the frameworks and shatter them, and to be good, both on the outside and in one’s inner dimension,” argues Olivier Fournout, based on numerous pieces of evidence. Individuals are pushed to strive for this ideal either by myths, or by examples of real people such as bosses and artists.

The difficulty lies in having to be empathetic while also being in competition. The researcher illustrates this in his book Le Nouvel Héroïsme with a Nike advertisement that portrays a young hockey player who knocks over garbage cans in the street, slams doors in people’s faces, and destroys walls by hitting pucks at them. Yet he also carries a disabled person up the stairs. Here we see both violence and a concern for others in everyday life. “This must be seen both as a form of empowerment, which can be positive for individuals, and as a form of endangerment. This duality, which characterizes the complexity of the matrix of heroism, is what I analyze in my book,” explains Olivier Fournout.

“The pressure on individuals to succeed and to constantly surpass themselves can give rise to psychosocial risks such as burnout or depression,” says the sociologist. To strive for this heroic model that is presented as an ideal, a person can overwork themselves. The difficulty in managing paradoxes such as cooperation and competition with those in one’s milieu can lead an individual to endure psychological or cognitive stress. The discourse of surpassing oneself creates difficulties for people. Furthermore, the pressure weighing on each person is accompanied by a call for training or self-training, with the promise of an “increase in skills of self-expression, of creativity, and of management of social relations,” Olivier Fournout believes.

To describe the matrix of heroism, which he also calls the “matrix of paradoxical injunctions”, the sociologist drew on more than 200 treatises on management and personal growth, advertisements, news articles portraying bosses, and a corpus of 500 Hollywood-style movies. The goal was to show the common structure of these extremely diverse fields. “Even though the word hero comes from cinema, I have seen it used by professors and consultants in the United States to illustrate management theories,” says the researcher.

Establishing an imaginary

In his book, Olivier Fournout indicates that for a dominant imaginary to become established in our media spaces, it must first be incarnated in as wide a range of characters as possible. In the case of new heroism, this could be Arnaud Beltrame or Johnny Hallyday, but also representatives of Generation Z or the Start-up Nation, activists, or even a Sunday mountain biker. This imaginary must then be placed in a game of distorting mirrors in very heterogeneous universes, such as the world of work, individuals’ privacy, and great Hollywood myths. Thirdly, the matrix must be stabilized in the dominant editorial forms. Finally, the imaginary must pass through ‘portrait galleries’, i.e. role models conveyed in the press or in the world of management. These could be soccer players, artists, big bosses, or everyday heroes.

Olivier Fournout uses a theatrical metaphor to articulate this. He speaks of scenes and counter-scenes to illustrate the succession of public and private moments, of great, exceptional feats, and heroism for everyone in everyday life. He thus highlights its heterogeneity, which forms part of the foundation of the heroic model. The sociologist uses the example of Shakespeare’s theater, which, in its historical plays, invites the spectator to observe the great official parades of power and to take a look behind the scenes. Some scenes portray the grand speeches of the future King Henry V, while others draw the spectator into the life of this Prince who, before becoming King, lived in taverns with thieves. “What I call counter-scenes are the gray areas, the sequences that are less official than those that take place in the spotlight,” says the researcher.

Applied to the professional world, counter-scenes refer to the personal investment in one’s work, everything related to, for example, passion, sweat, or emotions such as fear in the face of risks or changes. The scenes, on the other hand, portray the performance in social roles with a control over the outward signals that one conveys. “Another metaphor that can illustrate this heterogeneity of the scenes and counter-scenes is that of forging and counter-forging. When blacksmiths forge, they strike the metals to shape them, but they also hold back their blows at times to regain momentum, which they call counter-forging,” says Olivier Fournout.

A model that covers different spheres

 “In my corpus, there are management books written by professors from Harvard and MIT (Massachusetts Institute of Technology). These universities have a great power of dissemination that facilitates the propagation of an imaginary such as that of new heroism,” says the researcher. These universities also have a porosity with the world of consultants who participate in the writing of bestsellers in this field.

But universities and businesses are not the only environments covered by the heroic model. During the Covid-19 pandemic, Camille Étienne, an environmental activist, made a video clip in which she referred to citizens as ‘heroes in pyjamas’, in reference to the reduction in pollution. The matrix of success is highly malleable and is able to adapt to the world of tomorrow. This power of metamorphosis has been theorized by sociologists Ève Chiapello and Luc Boltanski in their work Le Nouvel Esprit du Capitalisme (The New Spirit of Capitalism). The strength of capitalism is to incorporate criticism in order to remain in a state of constant innovation. This could also apply to the model of new heroism. “Among the paradoxical orders of the modern hero is the injunction to follow rules and to break them. A bestselling management book advises: ‘First, break all the rules’ – but of course, when you look closely, it is not all the rules. The art of the hero is there, hanging in a precarious balance, which can border on the tragic in times of crisis,” concludes Olivier Fournout.

Rémy Fauvel


BeePMN: Monitoring bees to take better care of them

At the crossroads between Industry 4.0 and the Internet of Things, the BeePMN research project aims to help amateur and professional beekeepers. It will feature an intuitive smartphone app that combines business processes with real-time measurements of apiaries. 

When a bee colony becomes too crowded for its hive, the queen stops laying eggs and the worker bees leave in search of a place to start a new colony. The hive splits into two groups: those that follow the queen to explore new horizons, and those that stay and choose a new queen to take over the leadership of the colony. As exciting as this new adventure is for the bees, for the beekeeper who maintains the hive this new beginning brings complications. In particular, the loss of part of the colony also leads to a decrease in honey production. The loss of bees can also be caused by something much worse, such as the emergence of a virus or an infestation that threatens the health of the colony.

Beekeepers therefore monitor these events in the life of the bees very closely, but keeping track of the hives on a daily basis is a major problem, and a question of time. The BeePMN project, at the crossroads between the processes of Industry 4.0 and the Internet of Things, wants to give the beekeepers eyes in the back of their heads to be able to monitor the health of their hives in real time. BeePMN combines a non-invasive sensor system, to provide real-time data, with an intuitive and easy-to-use application, to provide decision-making support. 

This project was launched as part of the Hubert Curien Partnerships, which support scientific and technological exchanges between countries, with sites installed both in France, near Alès, and in Lebanon, with the beekeeping cooperative Atelier du Miel. It is supported by a collaboration between a team led by Gregory Zacharewicz, Nicolas Daclin and François Trousset at IMT Mines Alès, a team led by Charles Yaacoub and Adib Akl at the Holy Spirit University of Kaslik in Lebanon, and the company ConnectHive. This company, which specializes in engineering applied to the beekeeping industry, was founded by François Pfister, a retired IMT Mines Alès researcher and beekeeping enthusiast.

BeePMN has several goals: to monitor the health of the hives, to increase honey production, and to facilitate the sharing of knowledge between amateurs and professionals. 

“I actually work on business process problems in industry,” says Grégory Zacharewicz, a researcher at IMT Mines Alès on the project. “But the synergy with these different partners has directed us more towards the craft sector, and specifically beekeeping,” with the aim of providing tools to accelerate their tasks or reminders about certain activities. “I often compare BeePMN to a GPS: it is of course possible to drive without it, but it’s a tool that guides the driver to optimize his choices,” he explains. 

Making better decisions 

The different sites, both in France and Lebanon, are equipped with connected sensors, non-invasive for the bee colonies, which gather real-time data on their health, as well as on humidity, temperature, and weight. For the latter, they have developed ‘nomad’ scales, which are less expensive than the usual fixed equivalent. This data is then recorded in an application to help guide the beekeepers in their daily choices. Though professionals are used to making these kinds of decisions, they may not necessarily have all the information at hand, nor the time to monitor all their apiaries. 

The data observed by the sensors is paired with other environmental information such as the current season, weather conditions, and the flowering period. This allows for precise information on each hive and its environment, and improves the relevance of possible actions and choices. 

“If, for example, we observe a sudden 60% weight loss in a hive, there is no other option than to harvest it,” says Charbel Kady, a PhD student at IMT Mines Alès who is also working on the BeePMN project. On the other hand, if the weight loss happens gradually over the course of the week, that might be the result of lots of other factors, like a virus attacking the colony, or part of the colony moving elsewhere. That is the whole point of combining this essential data, like weight, with environmental variables, to provide more certainty on the cause of an event. “It’s about making sense of the information to identify the cause,” notes Charbel Kady. 

The researchers would also like to add vegetation maps to the environmental information. This is an important aspect, especially with regard to honey plants, but this information is difficult to find for certain regions, and complex to install in an application. The project also aims to progress towards prevention aspects: a PhD student, Marianne El Kassis, joined the BeePMN team to work on simulations and to integrate them into the application, to be able to prevent potential risks. 

Learn through play 

The two researchers stressed that one of the points of the application is for beekeepers to help each other. “Beekeepers can share information with each other, and the interesting model of one colleague can be copied and integrated into the everyday life of another,” says Charbel Kady. The application centralizes the data for a set of apiaries and the beekeepers can share their results with each other, or make them available to beginners. That’s the core of the second part of the project, a ‘serious’ game to offer a simplified and fun version to amateur beekeepers who are less independent. 

Professionals are accustomed to repeating a certain set of actions, so it is possible to formalize them with digital tools in the form of business processes to guide amateurs in their activities. “We organized several meetings with beekeepers to define these business rules and to integrate them into the application, and when the sensors receive the information, it triggers certain actions or alerts, for example taking care of the honey harvest, or needing to add wax to the hive,” explains Grégory Zacharewicz. 
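As a rough illustration of such a business rule (the thresholds and recommended actions below are invented for the example, not those encoded in the actual BeePMN application), a simple rule engine can compare successive sensor readings and raise alerts:

```python
# Toy rule engine for hive monitoring; thresholds and actions are illustrative
# assumptions, not the rules actually encoded in the BeePMN application.
from dataclasses import dataclass

@dataclass
class Reading:
    hive_id: str
    weight_kg: float
    temperature_c: float

def evaluate(previous: Reading, current: Reading) -> list[str]:
    alerts = []
    drop = (previous.weight_kg - current.weight_kg) / previous.weight_kg
    if drop >= 0.60:
        alerts.append(f"{current.hive_id}: sudden weight loss, harvest or inspect now")
    elif drop >= 0.10:
        alerts.append(f"{current.hive_id}: gradual weight loss, check for swarming or disease")
    if current.temperature_c > 38:
        alerts.append(f"{current.hive_id}: overheating, consider ventilation")
    return alerts

# Two successive daily readings from the same hive.
print(evaluate(Reading("hive-01", 40.0, 34.0), Reading("hive-01", 15.0, 34.0)))
```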

“There is a strong aspect of knowledge and skill transfer. We can imagine it like a sort of companionship to pass on the experience acquired,” says the researcher. The GPS analogy is applicable here too: “It makes available a whole range of past choices from professionals and other users, so that when you encounter a particular situation, it suggests the best response based on what has been decided by other users in the past,” the researcher adds. The concept of the app is very similar, in offering the possibility to capitalize on professionals’ knowledge of business processes to educate yourself and learn, while being guided at the same time. 

The BeePMN project is based on beekeeping activities, but as the researchers point out, the concept itself can be applied to various fields. “We can think of a lot of human and industrial activities where this project could be replicated to support decision-making processes and make them stronger,” explains Grégory Zacharewicz.

Tiphaine Claveau


What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or for small groups only into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics, and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that should, as is often the case, be enforced by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable material advances in graphics cards that offer significant display capabilities, and Virtual Reality headsets have reached a resolution equivalent to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets work on a compromise; they must offer the largest possible field of view whilst still remaining light, small and energy self-sufficient. On the other hand, graphics cards consume a great deal of power and generate a lot of heat. Therefore, in order to preserve the battery life of the headsets, the calculations behind the metaverse display have to be done on remote server farms before the images can be transferred. That’s where the 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to increase the user’s scope for action is to set them in motion. There is an interesting research topic on the development of multidirectional treadmills. This is a much more complicated problem than it seems, and it only takes the horizontal plane into account – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our ability to feel all our senses at the same time and to detect inconsistencies. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that they can be felt without the feedback seeming strange or delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly, and safely in a virtual world will never work. Beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to do force feedback on one point of the body – not yet on the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We know already that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial part of the metaverse. The prevalence of visual disorders and the risk of Virtual Reality Sickness – a simulation sickness that affects many people – are therefore certain consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be utopian to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make its use more widespread. Currently, each website has data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin, i.e. the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment that the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


How can we assess the health risks associated with exposure to electromagnetic fields?

As partners of the European SEAWave project, Télécom Paris and the C2M Chair are developing innovative measurement techniques to respond to public concern about the possible effects of cell phone usage. Funded by the EU to the tune of €8 million, the project will be launched in June 2022 for a period of 3 years. Interview with Joe Wiart, holder of the C2M Chair (Modeling, Characterization and Control of Electromagnetic Wave Exposure).

Could you remind us of the context in which the call for projects ‘Health and Exposure to Electromagnetic Fields (EMF)’ of the Horizon Europe program was launched?

Joe Wiart – The exponential growth in the use of wireless communication devices throughout Europe comes with a perceived risk associated with electromagnetic radiation, despite the existing protection thresholds (Recommendation 1999/519/CE and Directive 2013/35/UE). With the rollout of 5G, these concerns have multiplied. The Horizon Europe program will help to address these questions and concerns, and will study the possible impacts on specific populations, such as children and workers. It will intensify studies on millimeter-wave frequencies and investigate compliance analysis methods in these frequency ranges. The program will look at the evolution of electromagnetic exposure, as well as the contribution of exposure levels induced by 5G and new variable-beam antennas. It will also investigate tools to better assess risks, communicate, and respond to concerns.

What is the challenge of SEAWave, one of the four selected projects, of which Télécom Paris is a partner?

JW – Currently, there is a lot of work, such as that of the ICNIRP (International Commission on Non-Ionizing Radiation Protection), that has been done to assess the compliance of radio-frequency equipment with protection thresholds. This work is largely based on conservative methods or models. SEAWave will contribute to these approaches in exposure to millimeter waves (with in vivo and in vitro studies). These approaches, by design, take the worst-case scenarios and overestimate the exposure. Yet, for a better control of possible impacts, as in epidemiological studies, and without underestimating conservative approaches, it is necessary to assess actual exposure. The work carried out by SEAWave will focus on establishing potentially new patterns of use, estimating associated exposure levels, and comparing them to existing patterns. Using innovative technology, the activities will focus on monitoring not only the general population, but also specific risk groups, such as children and workers.

What scientific contribution have Télécom Paris researchers made to this project that includes eleven Work Packages (WP)?

JW – The C2M Chair at Télécom Paris is involved in the work of four interdependent WPs, and is responsible for WP1 on EMF exposure in the context of the rollout of 5G. Among the eleven WPs, four are dedicated to millimeter waves and biomedical studies, and four others are dedicated to monitoring the exposure levels induced by 5G. The last three are dedicated to project management, but also to tools for risk assessment and communication. The researchers at Télécom Paris will mainly be taking part in the four WPs dedicated to monitoring the exposure levels induced by 5G. They will draw on measurement campaigns in Europe, networks of connected sensors, tools from artificial neural networks and, more generally, methods from Artificial Intelligence.

What are the scientific obstacles that need to be overcome?

JW – For a long time, assessing and monitoring exposure levels has been based on deterministic methods. With the increasing complexity of networks, like 5G, but also with the versatility of uses, these methods have reached their limits. It is necessary to develop new approaches based on the study of time series, statistical methods, and Artificial Intelligence tools applied to the dosimetry of radio frequency fields. Télécom Paris has been working in this field for many years; this expertise will be essential in overcoming the scientific obstacles that SEAWave will face.

The SEAWave consortium has around 15 partners. Who are they and what are your collaborations?

JW – These partners fall into three broad categories. The first is related to engineering: in addition to Télécom Paris, there is, for example, the Aristotle University of Thessaloniki (Greece), the Agenzia Nazionale per le Nuove Tecnologie, l’Energia e lo Sviluppo Economico Sostenibile (Italy), Schmid & Partner Engineering AG (Switzerland), the Foundation for Research on Information Technologies in Society (IT’IS, Switzerland), the Interuniversity Microelectronics Centre (IMEC, Belgium), and the CEA (France). The second category concerns biomedical aspects, with partners such as the IU Internationale Hochschule (Germany), Lausanne University Hospital (Switzerland), and the Fraunhofer-Institut für Toxikologie und Experimentelle Medizin (Germany). The last category is dedicated to risk management. It includes the International Agency for Research on Cancer (IARC, France), the Bundesamt für Strahlenschutz (Germany) and the French National Frequency Agency (ANFR, France).

We will mainly collaborate with partners such as the Aristotle University of Thessaloniki, the CEA, the IT’IS Foundation and the IMEC, but also with the IARC and the ANFR.

The project will end in 2025. In the long run, what are the expected results?

JW – First of all, tools to better control the risk and better assess the exposure levels induced by current and future wireless communication networks. All the measurements that will have been carried out will provide a good characterization of the exposure for specific populations (e.g. children, workers) and will lay the foundations for a European map of radio frequency exposure.

Interview by Véronique Charlet


Cryptography: what are the random numbers for?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Télécom SudParis – Institut Mines-Télécom

The original purpose of cryptography is to allow two parties (traditionally referred to as Alice and Bob) to exchange messages without another party (traditionally known as Eve) being able to read them. Alice and Bob will therefore agree on a method to exchange each message, M, in an encrypted form, C. Eve can observe the medium through which the encrypted message (or ciphertext) C is sent, but she cannot retrieve the information exchanged without knowing the necessary secret information, called the key.

This is a very old exercise, since we speak, for example, of the ‘Julius Caesar Cipher’. However, it has become very important in recent years, due to the increasing need to exchange information. Cryptography has therefore become an essential part of our everyday lives. Besides the exchange of messages, cryptographic mechanisms are used in many everyday objects to identify and authenticate users and their transactions. We find these mechanisms in phones, for example, to encrypt and authenticate communication between the telephone and radio antennas, or in car keys, and bank cards.

The internet has also popularized the ‘padlock’ in browsers, which indicates that the communication between the browser and the server is protected by cryptographic mechanisms. To function correctly, these mechanisms require the use of random numbers, whose quality (or, more precisely, unpredictability) contributes to the security of the protocols.

Cryptographic algorithms

To transform a message M into an encrypted message C, by means of an algorithm A, keys are used. In so-called symmetric algorithms, we speak of secret keys (Ks), which are shared and kept secret by Alice and Bob. In asymmetric algorithms, there are public (KPu) and private (KPr) key pairs. For each user, KPu is known to all, whereas KPr must be kept safe by its owner. Algorithm A is also public, which means that the secrecy of communication relies solely on the secrecy of the keys (secret or private).
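As an illustration of the symmetric case (the algorithm and library here are just one possible choice, using the widely available Python cryptography package), Alice and Bob share a single secret key Ks, and anyone observing C without Ks learns nothing about M:

```python
# Symmetric encryption: Alice and Bob share the secret key Ks.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

Ks = Fernet.generate_key()                # shared secret key, exchanged over a safe channel
C = Fernet(Ks).encrypt(b"Meet at noon")   # the ciphertext that Eve can observe

print(Fernet(Ks).decrypt(C))              # -> b'Meet at noon', only possible with Ks
```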

Sometimes, the message M being transmitted is not important in itself, and the purpose of encrypting said message M is only to verify that the correspondent can decrypt it. This proof of possession of Ks or KPr can be used in some authentication schemes. In this case, it is important never to use the same message M more than once, since this would allow Eve to find out information pertaining to the keys. Therefore, it is necessary to generate a random message NA, which will change each time that Alice and Bob want to communicate.

The best known and probably most widely used example of this mechanism is the Diffie-Hellman algorithm. This algorithm allows a browser (Alice) and a website (Bob) to obtain an identical secret key K, different for each connection, by having exchanged their respective KPu beforehand. This process is performed, for example, when connecting to a retail website. It allows the browser and the website to exchange encrypted messages with a key that is destroyed at the end of each session. This means that there is no need to keep it (allowing for ease of use and security, since there is less chance of losing the key). It also means that not much traffic will be encrypted with the same key, which makes cryptanalysis attacks more difficult than if the same key were always used.
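A toy numerical run of the Diffie-Hellman exchange, with textbook-sized and therefore completely insecure parameters, shows the mechanics: the two private values never travel over the network, yet both sides end up with the same key.

```python
# Toy Diffie-Hellman key exchange (insecure textbook parameters, for illustration).
import secrets

p, g = 23, 5                        # public parameters agreed upon in advance

a = secrets.randbelow(p - 2) + 1    # Alice's private value, never transmitted
b = secrets.randbelow(p - 2) + 1    # Bob's private value, never transmitted

A = pow(g, a, p)                    # sent from Alice to Bob
B = pow(g, b, p)                    # sent from Bob to Alice

# Each side combines the value it received with its own private value.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob         # identical session key on both sides
print("shared key:", key_alice)
```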

Generating random numbers

To ensure Eve is unable to obtain the secret key, it is very important that she cannot guess the message NA. In practice, this message is often a large random number used in the calculations required by the chosen algorithm.

Initially, random number generation was mainly used for simulation work. To obtain relevant results, it is important not to repeat the simulation with the same parameters, but to repeat it with different parameters hundreds or even thousands of times. The aim is to generate numbers that respect certain statistical properties, and that do not allow the sequence of numbers to be distinguished from a sequence that would be obtained by rolling dice, for example.

To generate a random number NA that can be used in these simulations, so-called pseudo-random generators are normally used, which apply a reprocessing algorithm to an initial value, known as the ‘seed’.  These pseudo-random generators aim to produce a sequence of numbers that resembles a random sequence, according to these statistical criteria. However, using the same seed twice will result in obtaining the same sequence twice.
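The determinism of pseudo-random generators is easy to observe with Python’s standard (non-cryptographic) generator: seeding it twice with the same value reproduces exactly the same sequence.

```python
# A pseudo-random generator reseeded with the same value replays the same sequence.
import random

random.seed(1234)
first = [random.randint(0, 99) for _ in range(5)]

random.seed(1234)
second = [random.randint(0, 99) for _ in range(5)]

print(first)              # five numbers that look random and pass statistical tests
print(first == second)    # True: guessing the seed means guessing every "random" number
```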

The pseudo-random generator algorithm is usually public. If attackers are able to guess the seed, they will be able to generate the random sequence and thus obtain the random numbers used by the cryptographic algorithms. In the specific case of cryptography, the attacker does not necessarily even need to know the exact value of the seed: being able to guess a small set of candidate values is enough to quickly compute all the possible keys and crack the encryption.

In the 2000s, programmers used seeds that could be easily guessed, based on the time, for example, which made systems vulnerable. Since then, to prevent the seed (or a set of candidate values for the seed) from being guessed, operating systems rely on a mixture of physical elements of the system (e.g. processor temperature, bus connections, etc.). These physical elements are impossible for an attacker to observe and vary frequently; they therefore provide a good seed source for pseudo-random generators.
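This is why, for cryptographic use, applications are expected to draw randomness from the operating system rather than seed a generator themselves; in Python, for example, the secrets module does exactly that.

```python
# Cryptographic randomness drawn from the operating system's entropy pool.
import secrets

nonce = secrets.token_bytes(16)       # 128 random bits, suitable for a nonce NA
key_material = secrets.token_hex(32)  # 256 random bits, hex-encoded
print(nonce.hex(), key_material)
```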

What about vulnerabilities?

Although the field is now well understood, random number generators are still sometimes subject to vulnerabilities. Between 2017 and 2021, for example, cybersecurity researchers found 53 such vulnerabilities (CWE-338). This represents only a small share of reported software flaws (fewer than 1 in 1,000). Several of them, however, are rated high or critical, meaning they are widespread and can be exploited fairly easily by attackers.

A prime example is Sony's 2010 error in the PS3 software signature system. There, reusing the same random value for two different signatures allowed attackers to recover the manufacturer's private key; it then became possible to install any software on the console, including pirated software and malware.

Between 2017 and 2021, flaws also affected physical components, such as Intel Xeon processors, Broadcom communication chips and Qualcomm Snapdragon processors embedded in mobile phones. These flaws degrade the quality of random number generation. For example, CVE-2018-5871 and CVE-2018-11290 relate to a seed generator whose period is too short, i.e. one that quickly repeats the same sequence of seeds. These flaws have been fixed and only affect certain functions of the hardware, which limits the risk.

The quality of random number generation is therefore a security issue. Operating systems running on recent processors (less than 10 years old) have hardware-based random number generation mechanisms. This generally ensures good-quality randomness and thus the proper functioning of cryptographic algorithms, even if occasional vulnerabilities may still arise. The difficulty is most acute for connected objects, whose hardware capacities do not allow random generators as powerful as those available on computers and smartphones, and which often prove to be more vulnerable.

Hervé Debar, Director of Research and Doctoral Training, Deputy Director, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Assistant Professor, Télécom SudParis – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article.

MP4 for Streaming

Streaming services are now part of our everyday lives, largely thanks to MP4. This computer standard allows videos to be played online and on a wide range of devices. Jean-Claude Dufourd and Jean Le Feuvre, researchers in Computer Science at Télécom Paris, have been recognized by the Emmy Awards Academy for, among other things, their work on this format.

In 2021 the File Format IT working group of the MPEG Committee received an Emmy Award for its work in developing ISOBMFF. Behind this term lies a computer format that was used as the basis for the development of MP4, the famous video standard we have all encountered when saving a file in the ‘.mp4’ format. “The Emmy’s decision to give an award to the File Format group is justified; this file format has had a great impact on the world of video by creating a whole ecosystem that brings together very different types of research,” explains Jean-Claude Dufourd, a computer scientist at Télécom Paris and a member of the File Format group.

MP4, which can store both sound and video, "is used for live or on-demand media broadcasting, but not for the real-time broadcasting needed to stream games or video conferences," explains Jean Le Feuvre, also a computer scientist at Télécom Paris and a member of the File Format group. Several features of this format have contributed to its success, including the ability to store long videos such as movies while remaining very compact.

The smaller the files, the easier they are to circulate on networks, so the compactness of MP4 is an advantage for streaming movies and series. Another explanation for its success is its adaptability to different types of devices. "This technology can be used on a wide variety of everyday devices such as telephones, computers, and televisions," explains Jean-Claude Dufourd. MP4 is playable on such different devices because "the HTTP file distribution protocol has been reused to distribute video," says the researcher.

Improving streaming quality

HTTP (Hypertext Transfer Protocol), which has been prevalent since the 1990s, is typically used to serve websites. Researchers modified this protocol so that it could be used to broadcast video files online. Their studies led to the development of HTTP streaming, and then to an improved version called DASH (Dynamic Adaptive Streaming over HTTP), a protocol that "cuts up the information in the MP4 file into chunks of a few seconds each," says Jean-Claude Dufourd. The segments obtained at the end of this process are retrieved one after the other by the player to reconstruct the movie or the episode of the series being watched.

This cutting process allows the playback of the video file to be adjusted according to the connection speed. "For each time range, different quality encoding is provided, and the media player is responsible for deciding which quality is best for its conditions of use," explains Jean Le Feuvre. Typically, if a viewer's connection speed is low, the player will select the version of each segment encoded with the least data, i.e. the lowest streaming quality, in order to keep the traffic flowing. This feature allows content to continue playing on the platform with minimal risk of interruption.
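The player's decision can be sketched in a few lines of Python (the bitrate ladder below is a hypothetical example, not one defined by the DASH standard):

    def pick_bitrate(measured_kbps, available_kbps):
        """Choose the highest quality that the measured bandwidth can sustain."""
        affordable = [b for b in sorted(available_kbps) if b <= 0.8 * measured_kbps]  # keep a safety margin
        return affordable[-1] if affordable else min(available_kbps)   # fall back to the lowest quality

    # A hypothetical ladder of encodings (kbit/s) offered for each few-second segment
    ladder = [500, 1500, 3000, 6000]
    print(pick_bitrate(3200, ladder))   # -> 1500: the connection cannot sustain the 3000 kbit/s version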

In order to achieve this ability to adapt to different usage scenarios, tests have been carried out by scientists and manufacturers. “Tests were conducted to determine the network profile of a phone and a computer,” explains Jean-Claude Dufourd. “The results showed that the profiles were very different depending on the device and the situation, so the content is not delivered with the same fluidity,” he adds.

Economic interests

“Today, we are benefiting from 15 years of technological refinement that have allowed us to make the algorithms efficient enough to stream videos,” says Jean-Claude Dufourd. Since the beginning of streaming, one of the goals has been to broadcast videos with the best possible quality, while also reducing loading lag and putting as little strain on the network capacity as possible.

The challenge is primarily economic: the more strain streaming platforms put on network capacity to deliver their content, the more they have to pay. Work is therefore under way on reducing broadcasters' internet bills. One solution would be to circulate video files mainly among users, creating a less centralized streaming system, as peer-to-peer (P2P) file-sharing networks already do. This alternative is currently being considered by streaming companies, as it would reduce the cost of broadcasting content.

Rémy Fauvel