
New Heroism: a paradoxical model of success

Today, the ideals of success cover multiple areas of society, such as work, cinema, and personal life. In his book Le Nouvel Héroïsme, Olivier Fournout, a sociologist and semiologist at Télécom Paris, analyzes the conditions that have allowed the emergence of a model of success that is riddled with paradoxes.

A hero is someone capable of bravery, of feats that reveal extraordinary courage. That is the dictionary definition, in any case. Another dictionary entry defines a hero as an individual worthy of public esteem for their strength of character, their genius, and their total dedication to their work or cause. In fiction, the term relates to mythology, to legendary characters who accomplish great feats, and can also refer to the protagonist of literary, dramatic and cinematographic works.

According to Olivier Fournout, a researcher in Sociology at Télécom Paris, the modern approach to the hero intersects all these definitions. In our society, a hero can be Arnaud Beltrame, the policeman who saved a hostage and defended republican values. Emmanuel Macron proclaimed at the singer’s funeral that Johnny Hallyday was also a hero – a star who conveyed an imaginary of rebellion and freedom. It was the same Emmanuel Macron who declared, in an interview in August 2017: “We must return to political heroism”. “Right now, on the roads of France,” reports Olivier Fournout, “there are Carrefour delivery trucks with the slogan ‘Thank you, heroes’ written on the side, and a photo of the supermarket’s employees”. For the sociologist, “the common use of the word hero to refer to such different people calls our most contemporary modernity into question”.

The matrix of heroism

The heroic model can be found in a myriad of paradoxical injunctions which seem suited to the present time, and which appear in extremely diverse and heterogeneous fields. The whole imaginary of the tragic hero is found in paradoxes that abound today on a multitude of levels. According to the discourses and images in broad circulation, in order to be a hero today, one has to be both “with others and against others, respect the frameworks and shatter them, and be good, both on the outside and in one’s inner dimension,” argues Olivier Fournout, based on numerous pieces of evidence. Individuals are pushed to strive for this ideal either by myths, or by examples involving real people, such as bosses and artists.

The difficulty lies in having to be empathetic while also being in competition. The researcher illustrates this in his book Le Nouvel Héroïsme with a Nike advertisement that portrays a young hockey player who knocks over garbage cans in the street, slams doors in people’s faces, and destroys walls by hitting pucks at them. Yet he also carries a disabled person up the stairs. Here we see both violence and a concern for others in everyday life. “This must be seen both as a form of empowerment, which can be positive for individuals, and as a form of endangerment. This duality, which characterizes the complexity of the matrix of heroism, is what I analyze in my book,” explains Olivier Fournout.

“The pressure on individuals to succeed and to constantly surpass themselves can give rise to psychosocial risks such as burnout or depression,” says the sociologist. In striving for this heroic model presented as an ideal, a person can overwork themselves. The difficulty of managing paradoxes such as cooperating and competing with those in one’s milieu can place an individual under psychological or cognitive stress. Furthermore, the pressure weighing on each person is accompanied by a call for training or self-training, with the promise of an “increase in skills of self-expression, of creativity, and of management of social relations,” Olivier Fournout observes.

To describe the matrix of heroism, which he also calls the “matrix of paradoxical injunctions”, the sociologist drew on more than 200 treatises on management and personal growth, advertisements, news articles portraying bosses, and a corpus of 500 Hollywood-style movies. The goal was to show the common structure underlying these extremely diverse fields. “Even though the word hero comes from cinema, I have seen it used by professors and consultants in the United States to illustrate management theories,” says the researcher.

Establishing an imaginary

In his book, Olivier Fournout indicates that to establish a dominant imaginary in our media spaces, it must first be incarnated in as wide a range of characters as possible. In the case of new heroism, these could be Arnaud Beltrame or Johnny Hallyday, but also representatives of Generation Z or the Start-up Nation, activists, or even a Sunday mountain biker. This imaginary must then be placed in a game of distorting mirrors across very heterogeneous universes, such as the world of work, individuals’ private lives, and great Hollywood myths. Thirdly, the matrix must be stabilized in the dominant editorial forms. Finally, the imaginary must pass through ‘portrait galleries’, i.e. role models conveyed in the press or in the world of management. These could be soccer players, artists, big bosses, or everyday heroes.

Olivier Fournout uses a theatrical metaphor to articulate this. He speaks of scenes and counter-scenes to illustrate the succession of public and private moments, of great, exceptional feats and of everyday heroism for everyone. He thus highlights a heterogeneity that forms part of the foundation of the heroic model. The sociologist uses the example of Shakespeare’s theater, which, in its history plays, invites the spectator both to observe the great official parades of power and to take a look behind the scenes. Some scenes portray the grand speeches of the future King Henry V, while others draw the spectator into the life of this prince who, before becoming king, lived in taverns with thieves. “What I call counter-scenes are the gray areas, the sequences that are less official than those that take place in the spotlight,” says the researcher.

Applied to the professional world, counter-scenes refer to the personal investment in one’s work, everything related to, for example, passion, sweat, or emotions such as fear in the face of risks or changes. The scenes, on the other hand, portray the performance in social roles with a control over the outward signals that one conveys. “Another metaphor that can illustrate this heterogeneity of the scenes and counter-scenes is that of forging and counter-forging. When blacksmiths forge, they strike the metals to shape them, but they also hold back their blows at times to regain momentum, which they call counter-forging,” says Olivier Fournout.

A model that covers different spheres

“In my corpus, there are management books written by professors from Harvard and MIT (Massachusetts Institute of Technology). These universities have a great power of dissemination that facilitates the propagation of an imaginary such as that of new heroism,” says the researcher. These universities are also porous to the world of consultants, who participate in the writing of bestsellers in this field.

But universities and businesses are not the only environments covered by the heroic model. During the Covid-19 pandemic, the environmental activist Camille Étienne made a video clip in which she referred to citizens as ‘heroes in pyjamas’, in reference to the reduction in pollution. The matrix of success is highly malleable and able to adapt to the world of tomorrow. This power of metamorphosis has been theorized by sociologists Ève Chiapello and Luc Boltanski in their work Le Nouvel Esprit du Capitalisme (The New Spirit of Capitalism): the strength of capitalism is to incorporate criticism in order to remain in a state of constant innovation. The same could apply to the model of new heroism. “Among the paradoxical injunctions of the modern hero is the lesson to follow rules and to break them. A bestselling management book advises: ‘First, break all the rules’ – but of course, when you look closely, it is not all the rules. The art of the hero is there, hanging in a precarious balance, which can border on the tragic in times of crisis,” concludes Olivier Fournout.

Rémy Fauvel


BeePMN: Monitoring bees to take better care of them

At the crossroads between Industry 4.0 and the Internet of Things, the BeePMN research project aims to help amateur and professional beekeepers. It will feature an intuitive smartphone app that combines business processes with real-time measurements of apiaries. 

When a swarm of bees becomes too crowded for its hive, the queen stops laying eggs and the worker bees leave in search of a place to start a new colony. The hive splits into two groups: those who follow the queen to explore new horizons, and those who stay and choose a new queen to take over the leadership of the colony. As exciting as this new adventure is for the bees, for the beekeeper who maintains the hive, this new beginning brings complications. In particular, the loss of part of the colony leads to a decrease in honey production. Worse, the loss of bees can also be caused by something more serious, such as the emergence of a virus or an invasion that threatens the health of the colony.

Beekeepers therefore monitor these events in the life of their bees very closely, but keeping track of the hives on a daily basis is a major challenge, and a question of time. The BeePMN project, at the crossroads between the processes of Industry 4.0 and the Internet of Things, wants to give beekeepers eyes in the back of their heads so they can monitor the health of their hives in real time. BeePMN combines a non-invasive sensor system, which provides real-time data, with an intuitive and easy-to-use application that supports decision-making.

The project was launched as part of the Hubert Curien Partnerships, which support scientific and technological exchanges between countries, with apiary sites set up both in France, near Alès, and in Lebanon with the beekeeping cooperative Atelier du Miel. It is supported by a collaboration between a team led by Grégory Zacharewicz, Nicolas Daclin and François Trousset at IMT Mines Alès, a team led by Charles Yaacoub and Adib Akl at the Holy Spirit University of Kaslik in Lebanon, and the company ConnectHive. This company, which specializes in engineering applied to the beekeeping industry, was founded by François Pfister, a retired IMT Mines Alès researcher and beekeeping enthusiast.

BeePMN has several goals: to monitor the health of the hives, to increase honey production, and to facilitate the sharing of knowledge between amateurs and professionals. 

“I actually work on business process problems in industry,” says Grégory Zacharewicz, a researcher at IMT Mines Alès working on the project. “But the synergy with these different partners has directed us more towards the craft sector, and specifically beekeeping,” with the aim of providing tools to speed up beekeepers’ tasks or remind them of certain activities. “I often compare BeePMN to a GPS: it is of course possible to drive without it, but it’s a tool that guides drivers and optimizes their choices,” he explains.

Making better decisions 

The different sites, in both France and Lebanon, are equipped with connected sensors that are non-invasive for the bee colonies and gather real-time data on their health, as well as on humidity, temperature, and weight. For the latter, the team has developed ‘nomad’ scales, which are less expensive than the usual fixed equivalent. This data is then recorded in an application that helps guide beekeepers in their daily choices. Though professionals are used to making these kinds of decisions, they do not necessarily have all the information at hand, nor the time to monitor all their apiaries.

The data observed by the sensors is paired with other environmental information such as the current season, weather conditions, and the flowering period. This allows for precise information on each hive and its environment, and improves the relevance of possible actions and choices. 

“If, for example, we observe a sudden 60% weight loss in a hive, there is no other option than to harvest it,” says Charbel Kady, a PhD student at IMT Mines Alès who is also working on the BeePMN project. On the other hand, if the weight loss happens gradually over the course of a week, it might be the result of many other factors, such as a virus attacking the colony, or part of the colony moving elsewhere. That is the whole point of combining essential data, like weight, with environmental variables: to provide more certainty about the cause of an event. “It’s about making sense of the information to identify the cause,” notes Charbel Kady.
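
To make this reasoning concrete, here is a minimal sketch in Python of how such a rule might combine weight readings with their time span. The thresholds, names and structure are illustrative assumptions, not the project’s actual code.

```python
# Illustrative sketch of BeePMN-style event interpretation; thresholds,
# names and structure are assumptions, not the project's actual code.
from dataclasses import dataclass

@dataclass
class WeightReading:
    day: float   # time of measurement, in days
    kg: float    # hive weight reported by the connected scale

def classify_weight_loss(readings: list[WeightReading]) -> str:
    """Interpret a drop in hive weight following the logic described above."""
    if len(readings) < 2:
        return "insufficient data"
    start, end = readings[0], readings[-1]
    loss_ratio = (start.kg - end.kg) / start.kg
    span_days = end.day - start.day
    if loss_ratio >= 0.60 and span_days <= 1:
        # A sudden ~60% drop: the hive has to be harvested.
        return "sudden loss: harvest the hive"
    if loss_ratio > 0.10 and span_days >= 7:
        # A gradual loss over a week: cross-check environmental variables
        # (season, weather, flowering) to distinguish a virus from swarming.
        return "gradual loss: check environmental factors"
    return "no alert"

print(classify_weight_loss([WeightReading(0, 30.0), WeightReading(0.5, 11.5)]))
```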

The researchers would also like to add vegetation maps to the environmental information. This is an important aspect, especially with regard to honey plants, but the information is difficult to find for certain regions and complex to integrate into an application. The project also aims to move towards prevention: a PhD student, Marianne El Kassis, has joined the BeePMN team to work on simulations and integrate them into the application, in order to anticipate potential risks.

Learn through play 

The two researchers stressed that one of the points of the application is for beekeepers to help each other. “Beekeepers can share information with each other, and the interesting model of one colleague can be copied and integrated into the everyday life of another,” says Charbel Kady. The application centralizes the data for a set of apiaries, and beekeepers can share their results with each other or make them available to beginners. That is the core of the second part of the project: a ‘serious game’ offering a simplified and fun version to amateur beekeepers who are less autonomous.

Professionals are accustomed to repeating a certain set of actions, so it is possible to formalize them with digital tools in the form of business processes to guide amateurs in their activities, as the sketch below suggests. “We organized several meetings with beekeepers to define these business rules and integrate them into the application; when the sensors receive the information, it triggers certain actions or alerts, for example taking care of the honey harvest, or needing to add wax to the hive,” explains Grégory Zacharewicz.
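
As a rough illustration of how such business rules could be expressed digitally, the snippet below encodes a few conditions as data and evaluates them against incoming sensor readings; the rules and field names are hypothetical.

```python
# Hypothetical rule table: each entry pairs a condition on the latest
# sensor readings with the alert shown to the beekeeper.
RULES = [
    (lambda s: s["weight_kg"] >= 40.0,     "hive is full: plan the honey harvest"),
    (lambda s: s["humidity_pct"] > 85.0,   "humidity high: ventilate / add wax"),
    (lambda s: s["temperature_c"] < 10.0,  "cold spell: check hive insulation"),
]

def evaluate(readings: dict) -> list[str]:
    """Return every alert whose condition matches the current readings."""
    return [alert for condition, alert in RULES if condition(readings)]

print(evaluate({"weight_kg": 42.0, "humidity_pct": 88.0, "temperature_c": 15.0}))
# -> ['hive is full: plan the honey harvest', 'humidity high: ventilate / add wax']
```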

“There is a strong aspect of knowledge and skill transfer. We can imagine it like a sort of companionship to pass on the experience acquired,” says the researcher. The GPS analogy is applicable here too: “It makes available a whole range of past choices from professionals and other users, so that when you encounter a particular situation, it suggests the best response based on what has been decided by other users in the past,” the researcher adds. The concept of the app is very similar, in offering the possibility to capitalize on professionals’ knowledge of business processes to educate yourself and learn, while being guided at the same time. 

The BeePMN project is based on beekeeping activities, but as the researchers point out, the concept itself can be applied to various fields. “We can think of a lot of human and industrial activities where this project could be replicated to support decision-making processes and make them stronger,” explains Grégory Zacharewicz.

Tiphaine Claveau


What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or limited to small groups into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics, and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that should, as is often the case, be enforced by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable material advances in graphics cards that offer significant display capabilities, and Virtual Reality headsets have reached a resolution equivalent to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets work on a compromise: they must offer the largest possible field of view whilst remaining light, small and energy self-sufficient. On the other hand, graphics cards consume a lot of power and generate a lot of heat. Therefore, to preserve the battery life of the headsets, the calculations behind the metaverse display have to be done on remote server farms before the images are transferred. That’s where 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to increase the user’s scope of action is to set them in motion. There is an interesting research topic on the development of multidirectional treadmills. This is a much more complicated problem than it seems, and it only takes the horizontal plane into account – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our ability to feel all our senses at the same time and to detect inconsistencies. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that feedback can be felt without seeming strange or delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly, and safely in a virtual world will never work: beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to do force feedback on one point of the body – not yet on the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We already know that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial part of the metaverse. The prevalence of visual disorders and the risk of Virtual Reality Sickness – a simulator-type sickness that affects many people – will therefore be certain consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be utopian to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make such data collection more widespread. Currently, each website holds data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin: the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


How can we assess the health risks associated with exposure to electromagnetic fields?

As partners of the European SEAWave project, Télécom Paris and the C2M Chair are developing innovative measurement techniques to respond to public concern about the possible effects of cell phone usage. Funded by the EU to the tune of €8 million, the project will be launched in June 2022 for a period of 3 years. Interview with Joe Wiart, holder of the C2M Chair (Modeling, Characterization and Control of Electromagnetic Wave Exposure).

Could you remind us of the context in which the call for projects ‘Health and Exposure to Electromagnetic Fields (EMF)’ of the Horizon Europe program was launched?

Joe Wiart – The exponential use of wireless communication devices throughout Europe comes with a perceived risk associated with electromagnetic radiation, despite the existing protection thresholds (Recommendation 1999/519/EC and Directive 2013/35/EU). With the rollout of 5G, these concerns have multiplied. The Horizon Europe program will help to address these questions and concerns, and will study the possible impacts on specific populations, such as children and workers. It will intensify studies on millimeter-wave frequencies and investigate compliance analysis methods in these frequency ranges. The program will look at the evolution of electromagnetic exposure, as well as the contribution of exposure levels induced by 5G and new variable beam antennas. It will also investigate tools to better assess risks, communicate, and respond to concerns.

What is the challenge of SEAWave, one of the four selected projects, of which Télécom Paris is a partner?

JW – Currently, a lot of work, such as that of the ICNIRP (International Commission on Non-Ionizing Radiation Protection), has been done to assess the compliance of radio-frequency equipment with protection thresholds. This work is largely based on conservative methods or models. SEAWave will contribute to these approaches for exposure to millimeter waves (with in vivo and in vitro studies). These approaches, by design, take the worst-case scenarios and overestimate the exposure. Yet, for better control of possible impacts, as in epidemiological studies, and without dismissing conservative approaches, it is necessary to assess actual exposure. The work carried out by SEAWave will focus on establishing potentially new patterns of use, estimating the associated exposure levels, and comparing them to existing patterns. Using innovative technology, the activities will focus on monitoring not only the general population, but also specific risk groups, such as children and workers.

What scientific contribution have Télécom Paris researchers made to this project that includes eleven Work Packages (WP)?

JW – The C2M Chair at Télécom Paris is involved in the work of four interdependent WPs, and is responsible for WP1 on EMF exposure in the context of the rollout of 5G. Among the eleven WPs, four are dedicated to millimeter waves and biomedical studies, and four others are dedicated to monitoring the exposure levels induced by 5G. The last three are dedicated to project management, but also to tools for risk assessment and communication. The researchers at Télécom Paris will mainly be taking part in the four WPs dedicated to monitoring the exposure levels induced by 5G. They will draw on measurement campaigns in Europe, networks of connected sensors, tools from artificial neural networks and, more generally, methods from Artificial Intelligence.

What are the scientific obstacles that need to be overcome?

JW – For a long time, assessing and monitoring exposure levels has been based on deterministic methods. With the increasing complexity of networks, like 5G, but also with the versatility of uses, these methods have reached their limits. It is necessary to develop new approaches based on the study of time series, statistical methods, and Artificial Intelligence tools applied to the dosimetry of radio frequency fields. Télécom Paris has been working in this field for many years; this expertise will be essential in overcoming the scientific obstacles that SEAWave will face.

The SEAWave consortium has around 15 partners. Who are they and what are your collaborations?

JW – These partners fall into three broad categories. The first is related to engineering: in addition to Télécom Paris, there is, for example, the Aristotle University of Thessaloniki (Greece), the Agenzia Nazionale per le Nuove Tecnologie, l’Energia e lo Sviluppo Economico Sostenibile (Italy), Schmid & Partner Engineering AG (Switzerland), the Foundation for Research on Information Technologies in Society (IT’IS, Switzerland), the Interuniversity Microelectronics Centre (IMEC, Belgium), and the CEA (France). The second category concerns biomedical aspects, with partners such as the IU Internationale Hochschule (Germany), Lausanne University Hospital (Switzerland), and the Fraunhofer-Institut für Toxikologie und Experimentelle Medizin (Germany). The last category is dedicated to risk management. It includes the International Agency for Research on Cancer (IARC, France), the Bundesamt für Strahlenschutz (Germany) and the French National Frequency Agency (ANFR, France).

We will mainly collaborate with partners such as the Aristotle University of Thessaloniki, the CEA, the IT’IS Foundation and the IMEC, but also with the IARC and the ANFR.

The project will end in 2025. In the long run, what are the expected results?

JW – First of all, tools to better control the risk and better assess the exposure levels induced by current and future wireless communication networks. All the measurements that will have been carried out will provide a good characterization of the exposure for specific populations (e.g. children, workers) and will lay the foundations for a European map of radio frequency exposure.

Interview by Véronique Charlet


Cryptography: what are the random numbers for?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Télécom SudParis – Institut Mines-Télécom

The original purpose of cryptography is to allow two parties (traditionally referred to as Alice and Bob) to exchange messages without another party (traditionally known as Eve) being able to read them. Alice and Bob will therefore agree on a method to exchange each message, M, in an encrypted form, C. Eve can observe the medium through which the encrypted message (or ciphertext) C is sent, but she cannot retrieve the information exchanged without knowing the necessary secret information, called the key.

This is a very old exercise, since we speak, for example, of the ‘Julius Caesar Cipher’. However, it has become very important in recent years, due to the increasing need to exchange information. Cryptography has therefore become an essential part of our everyday lives. Besides the exchange of messages, cryptographic mechanisms are used in many everyday objects to identify and authenticate users and their transactions. We find these mechanisms in phones, for example, to encrypt and authenticate communication between the telephone and radio antennas, or in car keys, and bank cards.

The internet has also popularized the ‘padlock’ in browsers, indicating that communication between the browser and the server is protected by cryptographic mechanisms. To function correctly, these mechanisms require the use of random numbers, whose quality (or more precisely, unpredictability) contributes to the security of the protocols.

Cryptographic algorithms

To transform a message M into an encrypted message C by means of an algorithm A, keys are used. In so-called symmetric algorithms, we speak of secret keys (Ks), which are shared and kept secret by Alice and Bob. In asymmetric algorithms, there are public (KPu) and private (KPr) key pairs. For each user, KPu is known to all, whereas KPr must be kept safe by its owner. Algorithm A is also public, which means that the secrecy of communication relies solely on the secrecy of the keys (secret or private).
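
As a minimal sketch of the symmetric case, the snippet below uses the third-party Python `cryptography` package (a convenience assumption; the article does not prescribe any library). Alice and Bob share one secret key Ks, and Eve only sees C.

```python
# Symmetric encryption sketch: one shared secret key Ks.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

ks = Fernet.generate_key()      # the secret key Ks, shared by Alice and Bob
cipher = Fernet(ks)

c = cipher.encrypt(b"Meet at noon")   # Alice turns M into C
m = cipher.decrypt(c)                 # Bob turns C back into M
assert m == b"Meet at noon"
# Eve can observe c on the medium, but without ks she learns nothing useful.
```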

Sometimes, the message M being transmitted is not important in itself: the purpose of encrypting it is only to verify that the correspondent can decrypt it. This proof of possession of Ks or KPr can be used in some authentication schemes. In this case, it is important never to use the same message M more than once, since this would allow Eve to find out information pertaining to the keys. It is therefore necessary to generate a random message NA that changes each time Alice and Bob want to communicate.
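
A minimal challenge-response sketch along these lines, using only Python’s standard library (the protocol shape is illustrative, not one prescribed by the article):

```python
# Challenge-response sketch: Bob proves possession of the shared key Ks
# without ever sending it, and the challenge NA changes on every run.
import hashlib, hmac, secrets

ks = secrets.token_bytes(32)          # secret key shared by Alice and Bob

na = secrets.token_bytes(16)          # Alice's fresh random challenge NA
proof = hmac.new(ks, na, hashlib.sha256).digest()   # Bob's response

# Alice recomputes the expected response and compares in constant time.
expected = hmac.new(ks, na, hashlib.sha256).digest()
assert hmac.compare_digest(proof, expected)
# If NA were ever reused, Eve could simply replay an old proof.
```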

The best-known and probably most widely used example of this mechanism is the Diffie-Hellman algorithm. This algorithm allows a browser (Alice) and a website (Bob) to obtain an identical secret key K, different for each connection, by having exchanged their respective KPu beforehand. This process is performed, for example, when connecting to a retail website. It allows the browser and the website to exchange encrypted messages with a key that is destroyed at the end of each session. This means there is no need to keep the key (improving both ease of use and security, since there is less chance of losing it). It also means that little traffic is encrypted with the same key, which makes cryptanalysis attacks more difficult than if the same key were always used.
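
The arithmetic behind Diffie-Hellman fits in a few lines. The sketch below uses a deliberately tiny prime for readability; real deployments use standardized groups of 2048 bits or more, or elliptic-curve variants such as X25519.

```python
# Toy Diffie-Hellman key agreement; parameters are far too small to be
# secure and are chosen only to make the mechanism visible.
import secrets

p = 4294967291          # a small prime (the largest below 2**32)
g = 5                   # public base

a = secrets.randbelow(p - 2) + 1    # Alice's private value
b = secrets.randbelow(p - 2) + 1    # Bob's private value

A = pow(g, a, p)        # Alice publishes A
B = pow(g, b, p)        # Bob publishes B

k_alice = pow(B, a, p)  # Alice computes (g^b)^a mod p
k_bob   = pow(A, b, p)  # Bob computes (g^a)^b mod p
assert k_alice == k_bob # both sides hold the same key K, never transmitted
```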

Generating random numbers

To ensure Eve is unable to obtain the secret key, it is very important that she cannot guess the message NA. In practice, this message is often a large random number used in the calculations required by the chosen algorithm.

Initially, random number generation was used for a lot of simulation work. To obtain relevant results, it is important not to repeat the same simulation with the same parameters, but to run it with different parameters hundreds or even thousands of times. The aim is to generate numbers that respect certain statistical properties and that cannot be differentiated from a sequence obtained by rolling dice, for example.

To generate a random number NA that can be used in these simulations, so-called pseudo-random generators are normally used, which apply a reprocessing algorithm to an initial value known as the ‘seed’. These pseudo-random generators aim to produce a sequence of numbers that resembles a random sequence according to these statistical criteria. However, using the same seed twice will produce the same sequence twice.
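
Python’s standard library makes the point in a few lines: `random` is a pseudo-random generator of exactly this kind, suitable for simulations but not for keys.

```python
import random

random.seed(1234)                                  # first run with this seed
first = [random.random() for _ in range(3)]

random.seed(1234)                                  # reuse the same seed...
second = [random.random() for _ in range(3)]

assert first == second   # ...and the "random" sequence repeats identically
```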

The pseudo-random generator algorithm is usually public. If an attacker is able to guess the seed, they will be able to regenerate the random sequence and thus obtain the random numbers used by the cryptographic algorithms. In the specific case of cryptography, the attacker does not even need to know the exact value of the seed: narrowing it down to a set of plausible values is enough to quickly calculate all possible keys and crack the encryption.

In the 2000s, programmers used seeds that could be easily guessed, based on the time, for example, making systems vulnerable. Since then, to prevent the seed (or a set of candidate seeds) from being guessed, operating systems rely on a mixture of physical elements of the system (e.g. processor temperature, bus connections, etc.). These physical elements are practically impossible for an attacker to observe and vary frequently, and therefore provide a good seed source for pseudo-random generators.
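
The time-seeded pattern and its remedy can both be shown with the standard library; the one-hour search window below is an arbitrary assumption for the demonstration.

```python
import random, secrets, time

# Vulnerable pattern: seeding with the clock, as was common in the 2000s.
random.seed(int(time.time()))
observed = random.getrandbits(32)      # a value Eve manages to observe

# Eve's attack: brute-force every plausible seed in a recent time window.
now = int(time.time())
for guess in range(now - 3600, now + 1):
    random.seed(guess)
    if random.getrandbits(32) == observed:
        print("seed recovered:", guess)   # the whole sequence is now known
        break

# Remedy: take randomness from the operating system's entropy sources.
key_material = secrets.token_bytes(32)
```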

What about vulnerabilities?

Although the field is now well understood, random number generators are still sometimes subject to vulnerabilities. Between 2017 and 2021, for example, cybersecurity researchers found 53 such vulnerabilities (CWE-338). This represents only a small share of software flaws (less than 1 in 1,000). Several of these flaws, however, are of high or critical severity, meaning they can be exploited quite easily by attackers and are widespread.

A prime example came in 2010 with Sony’s error in the PS3 software signature system. In this case, the reuse of a random value for two different signatures allowed attackers to recover the manufacturer’s private key: it then became possible to install any software on the console, including pirated software and malware.
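
The underlying arithmetic is well documented for ECDSA, the signature scheme involved: if two signatures (r, s1) and (r, s2) over message hashes h1 and h2 reuse the same random value k, the private key d falls out directly. A sketch of the standard derivation (not Sony-specific values), where divisions denote multiplication by modular inverses modulo the curve order n:

$$s_1 = k^{-1}(h_1 + r\,d) \bmod n, \qquad s_2 = k^{-1}(h_2 + r\,d) \bmod n$$

Subtracting the two equations eliminates d and yields the nonce, from which the key follows:

$$k = \frac{h_1 - h_2}{s_1 - s_2} \bmod n, \qquad d = \frac{s_1\,k - h_1}{r} \bmod n$$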

Between 2017 and 2021, flaws have also affected physical components, such as Intel Xeon processors, Broadcom chips used for communications, and Qualcomm Snapdragon processors embedded in mobile phones. These flaws affect the quality of random number generation. For example, CVE-2018-5871 and CVE-2018-11290 relate to a seed generator whose period is too short, i.e. one that quickly repeats the same sequence of seeds. These flaws have been fixed and only affect certain functions of the hardware, which limits the risk.

The quality of random number generation is therefore a security issue. Operating systems running on recent processors (less than 10 years old) have hardware-based random number generation mechanisms. This generally ensures good quality of the generated numbers and thus the proper functioning of cryptographic algorithms, even if occasional vulnerabilities may arise. The difficulty is especially prominent for connected objects, whose hardware capacities do not allow the implementation of random generators as powerful as those available on computers and smartphones, and which often prove more vulnerable.

Hervé Debar, Director of Research and Doctoral Training, Deputy Director, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Assistant Professor, Télécom SudParis – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article.

MP4 for Streaming

Streaming services are now part of our everyday life, and it’s largely thanks to MP4. This computer standard allows videos to be played online and on various devices. Jean-Claude Dufourd and Jean Le Feuvre, researchers in Computer Science at Télécom Paris, have been recognized by the Emmy Awards Academy for, among other things, their work on this format.

In 2021, the File Format working group of the MPEG committee received an Emmy Award for its work in developing ISOBMFF (ISO Base Media File Format). Behind this term lies a computer format that served as the basis for the development of MP4, the famous video standard we have all encountered when saving a file in the ‘.mp4’ format. “The Emmy’s decision to give an award to the File Format group is justified; this file format has had a great impact on the world of video by creating a whole ecosystem that brings together very different types of research,” explains Jean-Claude Dufourd, a computer scientist at Télécom Paris and a member of the File Format group.

MP4, which can contain audio as well as video, “is used for live or on-demand media broadcasting, but not for the real-time broadcasting needed for game streaming or video conferences,” explains Jean Le Feuvre, also a computer scientist at Télécom Paris and a member of the File Format group. Several features of this format have contributed to its success, including its ability to store long videos such as movies while remaining very compact.

The smaller the files, the easier they are to circulate on networks. The compactness of MP4 is therefore an advantage for streaming movies and series. Another explanation for its success is its adaptability to different types of devices. “This technology can be used on a wide variety of everyday devices such as telephones, computers, and televisions,” explains Jean-Claude Dufourd. MP4 is playable on so many devices because “the HTTP file distribution protocol has been reused to distribute video,” says the researcher.

Improving streaming quality

HTTP (Hypertext Transfer Protocol), which has been prevalent since the 1990s, is typically used to deliver web pages. Researchers modified this protocol so that it could be used to broadcast video files online. Their studies led to the development of HTTP streaming, and then to an improved version called DASH (Dynamic Adaptive Streaming over HTTP), a protocol that “cuts up the information in the MP4 file into chunks of a few seconds each,” says Jean-Claude Dufourd. The segments obtained at the end of this process are successively retrieved by the player to reconstruct the movie or episode being watched.

This cutting process allows the playback of the video file to be adjusted according to the connection speed. “For each time range, different quality encodings are provided, and the media player is responsible for deciding which quality is best for its conditions of use,” explains Jean Le Feuvre. Typically, if a viewer’s connection speed is low, the streaming player will select the video file with the least amount of data in order to ease traffic. The player will therefore select the lowest streaming quality. This feature allows content to continue playing on the platform with minimal risk of interruption.
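
A compact sketch of the kind of decision logic a DASH player runs before fetching each segment; the bitrate ladder and safety margin are illustrative assumptions, not values from any particular player.

```python
# Illustrative adaptive-bitrate choice, re-evaluated before each segment.
LADDER_KBPS = [250, 750, 1500, 3000, 6000]    # qualities offered per time range

def pick_bitrate(measured_kbps: float, safety: float = 0.8) -> int:
    """Choose the highest encoding that fits within a safety margin of the
    measured throughput; fall back to the lowest quality otherwise."""
    budget = measured_kbps * safety
    fitting = [b for b in LADDER_KBPS if b <= budget]
    return max(fitting) if fitting else LADDER_KBPS[0]

for throughput in (5000, 1200, 400):          # connection speed varying over time
    print(f"{throughput} kbps measured -> {pick_bitrate(throughput)} kbps stream")
```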

In order to achieve this ability to adapt to different usage scenarios, tests have been carried out by scientists and manufacturers. “Tests were conducted to determine the network profile of a phone and a computer,” explains Jean-Claude Dufourd. “The results showed that the profiles were very different depending on the device and the situation, so the content is not delivered with the same fluidity,” he adds.

Economic interests

“Today, we are benefiting from 15 years of technological refinement that have allowed us to make the algorithms efficient enough to stream videos,” says Jean-Claude Dufourd. Since the beginning of streaming, one of the goals has been to broadcast videos with the best possible quality, while also reducing loading lag and putting as little strain on the network capacity as possible.

The challenge is primarily economic: the more strain streaming platforms put on network capacity to stream their content, the more they have to pay. Researchers are currently studying how to reduce broadcasters’ internet bills. One solution would be to circulate video files mainly among users, thereby creating a less centralized streaming system; this is what peer-to-peer (P2P) file-sharing systems allow. This alternative is being considered by streaming companies, as it would reduce the cost of broadcasting content.

Rémy Fauvel

Anne-Sophie Taillandier: new member of the Academy of Technologies

Director of Teralab, IMT’s Big Data and AI platform, since 2015, Anne-Sophie Taillandier was elected a member of the Academy of Technologies in March 2022. This election recognizes her work developing projects on data and artificial intelligence at national and European levels.

Newly elected to the Academy of Technologies, Anne-Sophie Taillandier has for seven years been Director of Teralab, a platform created by IMT in 2012 that specializes in Big Data and Artificial Intelligence. Anne-Sophie Taillandier was drawn towards a scientific occupation as she “always found mathematics enjoyable,” she says. “This led me to study science, first in an engineering school, at CentraleSupélec, and then to complete a doctoral thesis in Applied Mathematics at the ENS, which I defended in 1998,” she adds.

Once her thesis in Artificial Intelligence was completed, she joined Dassault Systèmes. “After my thesis, I wanted to see an immediate application of the things I had learned, so I joined Dassault Systèmes, where I held various positions,” says Anne-Sophie Taillandier. During the ten years she spent at the well-known company, she contributed to the development of modeling software, worked in Human Resources, and led the Research & Development department of the brand Simulia. In 2008, she moved to an IT security company, and in 2012 she became Director of Technology at LTU Technologies, an image recognition software company, where she remained until 2015, when she took over the management of Teralab at IMT.

“It was the opportunity to work in a wide variety of fields while focusing on data, machine learning, and its applications that prompted me to join Teralab,” says Anne-Sophie Taillandier. Working with diverse companies requires “understanding a profession to grasp the meaning of the data that we are manipulating”. For the Director of Teralab, this experience mirrored that of her thesis, during which she had to understand the meaning of data provided by automotive engineers in order to manipulate it appropriately.

Communicating and explaining

In the course of her career, Anne-Sophie Taillandier realized “that there were language barriers, that there were sometimes difficulties in understanding each other”. She has taken a particular interest in these problems. “I’ve always found it interesting to take an educational approach to explain our work, to try to hide the computational and mathematical complexity behind simple language,” says the Teralab director. “Since its inception, Teralab has aimed to facilitate the use of sophisticated technology, and to understand the professions of the people who hold the data,” she says.

Teralab positions itself as an intermediary between companies and researchers so that they may understand each other and cooperate. This requires making different disciplines work together. Technology watch is also important, to remain up to date with the latest innovations that may be better suited to a client’s needs. In addition, Teralab has seen new issues arise during its eight years of existence.

“We realized that the users who came to us in the beginning wanted to work on their own data, whereas today they want to work in an ecosystem that allows the circulation of their data. This raises issues of control over the use of their data, as well as of architecture and exchange standards,” points out Anne-Sophie Taillandier. The pooling of data held by different companies raises issues of confidentiality, as they may be in competition on certain points.  

European recognition

“At Teralab, we asked ourselves about data sharing between companies, which led us to the Gaia-X initiative.” In this European association, Teralab and other companies participate in the development of services to create a ‘cloud federation’. This is essential as a basis for enabling the flow of data and interoperability, and for avoiding locking companies into particular cloud solutions. Europe’s technological independence depends on these types of regulations and standards. Not only would companies be able to protect their digital assets and make informed choices, but they would also be able to share information with each other, under conditions suited to the sensitivity of their data.

In the development of Gaia-X federated services and the creation of data spaces, Teralab provides its technological and human resources to validate architectures, to prototype new services on sector-specific data spaces, and to build the open-source software layer essential to this development. “If EDF or another critical infrastructure, like banking, wants to be able to move sensitive data into these data spaces, they will need both technical and legal guarantees.”

Since the end of the public funding it received until 2018, Teralab has not stopped growing, especially at the European level. “We currently have a European project on health data relating to cardiovascular diseases,” says the Teralab director. The goal is for researchers in European countries who need data on these diseases to be able to conduct research via a DataHub – a space for sharing data. In the future, Teralab’s goal is to continue its work on cloud federation and to “become a leading platform for the creation of digital ecosystems,” says Anne-Sophie Taillandier.

Rémy Fauvel