
What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or limited to small groups into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics, and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that will most likely, as is often the case, be imposed by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable material advances in graphics cards that offer significant display capabilities, and Virtual Reality headsets have reached a resolution equivalent to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets work on a compromise; they must offer the largest possible field of view whilst still remaining light, small and energy self-sufficient. On the other hand, graphics cards consume a great deal of power and give off a lot of heat. Therefore, in order to preserve the headsets’ battery life, the calculations behind the metaverse display have to be done on remote server farms before the images are transferred back to the headset. That’s where 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to increase the user’s ability to act is to set them in motion. There is an interesting research topic on the development of multidirectional treadmills. This is a much more complicated problem than it seems, and even then it only takes the horizontal plane into account – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our brain’s ability to combine all our senses at once and to detect inconsistencies. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that they can be felt without the feedback seeming strange and delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly, and safely in a virtual world will never work. Beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to provide force feedback on one point of the body – not yet on the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We already know that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial part of the metaverse. The prevalence of visual disorders and the risk of Virtual Reality Sickness – a simulation sickness that affects many people – are therefore certain to be consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be naïve to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make its use more widespread. Currently, each website has data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin, i.e. the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment that the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


How can we assess the health risks associated with exposure to electromagnetic fields?

As partners of the European SEAWave project, Télécom Paris and the C2M Chair are developing innovative measurement techniques to respond to public concern about the possible effects of cell phone usage. Funded by the EU to the tune of €8 million, the project will be launched in June 2022 for a period of 3 years. Interview with Joe Wiart, holder of the C2M Chair (Modeling, Characterization and Control of Electromagnetic Wave Exposure).

Could you remind us of the context in which the call for projects ‘Health and Exposure to Electromagnetic Fields (EMF)’ of the Horizon Europe program was launched?

Joe Wiart – The exponential growth in the use of wireless communication devices throughout Europe comes with a perceived risk associated with electromagnetic radiation, despite the existing protection thresholds (Recommendation 1999/519/EC and Directive 2013/35/EU). With the rollout of 5G, these concerns have multiplied. The Horizon Europe program will help to address these questions and concerns, and will study the possible impacts on specific populations, such as children and workers. It will intensify studies on millimeter-wave frequencies and investigate compliance analysis methods in these frequency ranges. The program will look at the evolution of electromagnetic exposure, as well as the contribution of exposure levels induced by 5G and new variable-beam antennas. It will also investigate tools to better assess risks, communicate, and respond to concerns.

What is the challenge of SEAWave, one of the four selected projects, of which Télécom Paris is a partner?

JW – Currently, a lot of work, such as that of the ICNIRP (International Commission on Non-Ionizing Radiation Protection), has been done to assess the compliance of radio-frequency equipment with protection thresholds. This work is largely based on conservative methods and models, which by design assume worst-case scenarios and overestimate exposure. SEAWave will contribute to these approaches for exposure to millimeter waves (with in vivo and in vitro studies). Yet for better control of possible impacts, as in epidemiological studies, and without calling conservative approaches into question, it is also necessary to assess actual exposure. The work carried out by SEAWave will focus on establishing potentially new patterns of use, estimating the associated exposure levels, and comparing them to existing patterns. Using innovative technology, these activities will focus on monitoring not only the general population, but also specific risk groups, such as children and workers.

What scientific contribution have Télécom Paris researchers made to this project that includes eleven Work Packages (WP)?

JW – The C2M Chair at Télécom Paris is involved in the work of four interdependent WPs, and is responsible for WP1 on EMF exposure in the context of the rollout of 5G. Among the eleven WPs, four are dedicated to millimeter waves and biomedical studies, and four others are dedicated to monitoring the exposure levels induced by 5G. The last three are dedicated to project management, but also to tools for risk assessment and communication. The researchers at Télécom Paris will mainly be taking part in the four WPs dedicated to monitoring the exposure levels induced by 5G. They will draw on measurement campaigns in Europe, networks of connected sensors, tools from artificial neural networks and, more generally, methods from Artificial Intelligence.

What are the scientific obstacles that need to be overcome?

JW – For a long time, assessing and monitoring exposure levels has been based on deterministic methods. With the increasing complexity of networks, like 5G, but also with the versatility of uses, these methods have reached their limits. It is necessary to develop new approaches based on the study of time series, statistical methods, and Artificial Intelligence tools applied to the dosimetry of radio frequency fields. Télécom Paris has been working in this field for many years; this expertise will be essential in overcoming the scientific obstacles that SEAWave will face.

The SEAWave consortium has around 15 partners. Who are they and what are your collaborations?

JW – These partners fall into three broad categories. The first is related to engineering: in addition to Télécom Paris, there is, for example, the Aristotle University of Thessaloniki (Greece), the Agenzia Nazionale per le Nuove Tecnologie, l’Energia e lo Sviluppo Economico Sostenibile (Italy), Schmid & Partner Engineering AG (Switzerland), the Foundation for Research on Information Technologies in Society (IT’IS, Switzerland), the Interuniversity Microelectronics Centre (IMEC, Belgium), and the CEA (France). The second category concerns biomedical aspects, with partners such as the IU Internationale Hochschule (Germany), Lausanne University Hospital (Switzerland), and the Fraunhofer-Institut für Toxikologie und Experimentelle Medizin (Germany). The last category is dedicated to risk management. It includes the International Agency for Research on Cancer (IARC, France), the Bundesamt für Strahlenschutz (Germany) and the French National Frequency Agency (ANFR, France).

We will mainly collaborate with partners such as the Aristotle University of Thessaloniki, the CEA, the IT’IS Foundation and the IMEC, but also with the IARC and the ANFR.

The project will end in 2025. In the long run, what are the expected results?

JW – First of all, tools to better control the risk and better assess the exposure levels induced by current and future wireless communication networks. All the measurements that will have been carried out will provide a good characterization of the exposure for specific populations (e.g. children, workers) and will lay the foundations for a European map of radio frequency exposure.

Interview by Véronique Charlet


Cryptography: what are the random numbers for?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Télécom SudParis – Institut Mines-Télécom

The original purpose of cryptography is to allow two parties (traditionally referred to as Alice and Bob) to exchange messages without another party (traditionally known as Eve) being able to read them. Alice and Bob will therefore agree on a method to exchange each message, M, in an encrypted form, C. Eve can observe the medium through which the encrypted message (or ciphertext) C is sent, but she cannot retrieve the information exchanged without knowing the necessary secret information, called the key.

This is a very old exercise, since we speak, for example, of the ‘Julius Caesar Cipher’. However, it has become very important in recent years, due to the increasing need to exchange information. Cryptography has therefore become an essential part of our everyday lives. Besides the exchange of messages, cryptographic mechanisms are used in many everyday objects to identify and authenticate users and their transactions. We find these mechanisms in phones, for example, to encrypt and authenticate communication between the telephone and radio antennas, or in car keys, and bank cards.

The internet has also popularized the ‘padlock’ displayed in browsers to indicate that the communication between the browser and the server is protected by cryptographic mechanisms. To function correctly, these mechanisms require the use of random numbers, whose quality (or, more precisely, unpredictability) contributes to the security of the protocols.

Cryptographic algorithms

To transform a message M into an encrypted message C by means of an algorithm A, keys are used. In so-called symmetric algorithms, we speak of secret keys (Ks), which are shared and kept secret by Alice and Bob. In asymmetric algorithms, there are pairs of public (KPu) and private (KPr) keys. For each user, KPu is known to all, whereas KPr must be kept safe by its owner. Algorithm A is also public, which means that the secrecy of communication relies solely on the secrecy of the keys (secret or private).
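
As a rough illustration of the two families of algorithms, the sketch below (in Python, using the third-party cryptography package; the message and key sizes are arbitrary choices, not part of the article) encrypts the same message once with a shared secret key and once with a public/private key pair.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric case: Alice and Bob share the same secret key Ks.
Ks = Fernet.generate_key()
C = Fernet(Ks).encrypt(b"meet at noon")        # Alice encrypts with Ks
M = Fernet(Ks).decrypt(C)                      # Bob decrypts with the same Ks

# Asymmetric case: Bob publishes KPu and keeps KPr to himself.
KPr = rsa.generate_private_key(public_exponent=65537, key_size=2048)
KPu = KPr.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
C2 = KPu.encrypt(b"meet at noon", oaep)        # anyone can encrypt for Bob
M2 = KPr.decrypt(C2, oaep)                     # only the holder of KPr can decrypt

assert M == M2 == b"meet at noon"
```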

Sometimes, the message M being transmitted is not important in itself, and the purpose of encrypting said message M is only to verify that the correspondent can decrypt it. This proof of possession of Ks or KPr can be used in some authentication schemes. In this case, it is important never to use the same message M more than once, since this would allow Eve to find out information pertaining to the keys. Therefore, it is necessary to generate a random message NA, which will change each time that Alice and Bob want to communicate.
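A minimal sketch of such a challenge-response exchange, using Python’s standard library and an HMAC in place of encryption (the key setup and sizes below are illustrative assumptions): Alice sends a fresh random value NA, and Bob proves possession of the shared secret Ks by returning a keyed digest of it.

```python
import hashlib
import hmac
import secrets

# Shared secret Ks, assumed to have been agreed on beforehand by Alice and Bob.
Ks = secrets.token_bytes(32)

# Alice draws a fresh random challenge NA for every authentication attempt,
# so that Eve cannot simply replay an old response.
NA = secrets.token_bytes(16)

# Bob proves he holds Ks by returning a keyed digest of the challenge.
response = hmac.new(Ks, NA, hashlib.sha256).digest()

# Alice recomputes the expected value and compares in constant time.
expected = hmac.new(Ks, NA, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```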

The best-known and probably most widely used example of this mechanism is the Diffie-Hellman algorithm. This algorithm allows a browser (Alice) and a website (Bob) to obtain an identical secret key K, different for each connection, after having exchanged their respective KPu beforehand. This process is performed, for example, when connecting to a retail website. It allows the browser and the website to exchange encrypted messages with a key that is destroyed at the end of each session. This means that there is no need to store it (which helps both ease of use and security, since there is less chance of losing the key). It also means that not much traffic will be encrypted with the same key, which makes cryptanalysis attacks more difficult than if the same key were always used.
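
The textbook, unauthenticated version of the exchange fits in a few lines of Python. The parameters below (p = 23, g = 5) are the classic toy example; real deployments use groups of at least 2048 bits, or elliptic curves, with authenticated parameters.

```python
import secrets

# Toy public parameters: a small prime modulus and a generator.
p, g = 23, 5

# Each side draws a private exponent that never leaves its machine.
a = secrets.randbelow(p - 2) + 1    # Alice's secret
b = secrets.randbelow(p - 2) + 1    # Bob's secret

# Only these values travel over the network, in the clear.
A = pow(g, a, p)                    # Alice -> Bob
B = pow(g, b, p)                    # Bob -> Alice

# Both sides compute the same session key without ever transmitting it.
k_alice = pow(B, a, p)
k_bob = pow(A, b, p)
assert k_alice == k_bob
```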

Generating random numbers

To ensure Eve is unable to obtain the secret key, it is very important that she cannot guess the message NA. In practice, this message is often a large random number used in the calculations required by the chosen algorithm.

Initially, random number generation was mostly used for simulation work. To obtain relevant results, it is important not to repeat a simulation with the same parameters, but to run it hundreds or even thousands of times with different parameters. The aim is to generate numbers that respect certain statistical properties, and that do not allow the sequence to be distinguished from one obtained by rolling dice, for example.

To generate a random number NA that can be used in these simulations, so-called pseudo-random generators are normally used, which apply a reprocessing algorithm to an initial value, known as the ‘seed’.  These pseudo-random generators aim to produce a sequence of numbers that resembles a random sequence, according to these statistical criteria. However, using the same seed twice will result in obtaining the same sequence twice.
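
A quick illustration with Python’s standard generator (a Mersenne Twister, fine for simulation but not for cryptography): the same seed reproduces exactly the same sequence.

```python
import random

# Two generators seeded identically produce exactly the same sequence.
gen1 = random.Random(42)
gen2 = random.Random(42)
assert [gen1.random() for _ in range(5)] == [gen2.random() for _ in range(5)]

# A different seed gives a different, but equally reproducible, sequence.
gen3 = random.Random(43)
print(gen3.random())
```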

The pseudo-random generator algorithm is usually public. If an attacker is able to guess the seed, they will be able to regenerate the random sequence and thus obtain the random numbers used by the cryptographic algorithms. In the specific case of cryptography, the attacker does not even need to know the exact value of the seed: being able to guess a small set of possible values is enough to quickly compute all the possible keys and crack the encryption.

In the 2000s, programmers used seeds that could easily be guessed – based on the time, for example – making systems vulnerable. Since then, to prevent the seed (or a set of likely values for it) from being guessed, operating systems rely on a mixture of physical elements of the system (e.g. processor temperature, bus connections, etc.). These physical elements are practically impossible for an attacker to observe and vary frequently, which makes them a good source of seeds for pseudo-random generators.
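
The sketch below illustrates the weakness of a time-based seed with an intentionally naive token generator (a hypothetical example, not a real API): knowing roughly when the token was produced, an attacker only has to try a few thousand candidate seeds.

```python
import random
import time

def weak_token(seed: int) -> int:
    """Intentionally weak: derives a 64-bit token from a guessable seed."""
    return random.Random(seed).getrandbits(64)

# The victim seeds the generator with the current time, in whole seconds.
leaked_token = weak_token(int(time.time()))

# The attacker tries every second of the past hour until the output matches.
now = int(time.time())
recovered_seed = next(s for s in range(now - 3600, now + 1)
                      if weak_token(s) == leaked_token)
# Knowing the seed, the attacker can now reproduce every "random" value.
assert weak_token(recovered_seed) == leaked_token
```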

What about vulnerabilities?

Although the field is now well understood, random number generators are still sometimes subject to vulnerabilities. For example, between 2017 and 2021, cybersecurity researchers found 53 such vulnerabilities (CWE-338). This represents only a small proportion of software flaws (fewer than 1 in 1,000). Several of these flaws, however, are of high or critical severity, meaning they can be exploited quite easily by attackers and are widespread.

A prime example in 2010 was Sony’s error on the PS3 software signature system. In this case, the reuse of a random variable for two different signatures allowed an attacker to find the manufacturer’s private key: it then became possible to install any software on the console, including pirated software and malware.

Between 2017 and 2021, flaws also affected physical components, such as Intel Xeon processors, Broadcom chips used for communications, and Qualcomm Snapdragon processors embedded in mobile phones. These flaws affect the quality of random number generation. For example, CVE-2018-5871 and CVE-2018-11290 relate to a seed generator whose period is too short, i.e. one that repeats the same sequence of seeds quickly. These flaws have been fixed and only affect certain functions of the hardware, which limits the risk.

The quality of random number generation is therefore a security issue. Operating systems running on recent processors (less than 10 years old) have hardware-based random number generation mechanisms. This generally ensures good-quality random numbers and thus the proper functioning of cryptographic algorithms, even if occasional vulnerabilities may arise. On the other hand, the difficulty is especially prominent in the case of connected objects, whose hardware capacities do not allow the implementation of random generators as powerful as those available on computers and smartphones, and which often prove to be more vulnerable.

Hervé Debar, Director of Research and Doctoral Training, Deputy Director, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Assistant Professor, Télécom SudParis – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article.

MP4 for Streaming

Streaming services are now part of our everyday life, and it’s all thanks to MP4. This computer standard allows videos to be played online and on various devices. Jean-Claude Dufourd and Jean Le Feuvre, researchers in Computer Science at Télécom Paris, have been recognized by the Emmy Awards Academy for, among other things, their work on this format.

In 2021, the File Format working group of the MPEG committee received an Emmy Award for its work in developing ISOBMFF (ISO Base Media File Format). Behind this term lies a computer format that was used as the basis for the development of MP4, the famous video standard we have all encountered when saving a file in the ‘.mp4’ format. “The Emmy’s decision to give an award to the File Format group is justified; this file format has had a great impact on the world of video by creating a whole ecosystem that brings together very different types of research,” explains Jean-Claude Dufourd, a computer scientist at Télécom Paris and a member of the File Format group.

MP4, which can store audio as well as video, “is used for live or on-demand media broadcasting, but not for the real-time broadcasting needed to stream games or video conferences,” explains Jean Le Feuvre, also a computer scientist at Télécom Paris and a member of the File Format group. Several features of this format have contributed to its success, including its ability to store long videos, such as movies, while remaining very compact.

The smaller the files, the easier they are to circulate on networks. The compactness of MP4 is therefore an advantage for streaming movies and series. Another explanation for its success is its adaptability to different types of devices. “This technology can be used on a wide variety of everyday devices such as telephones, computers, and televisions,” explains Jean-Claude Dufourd. The reason that MP4 is playable on different devices is that “the HTTP file distribution protocol has been reused to distribute video,” says the researcher.

Improving streaming quality

HTTP (Hypertext Transfer Protocol), which has been prevalent since the 1990s, is typically used to deliver web pages. Researchers have modified this protocol so that it can be used to broadcast video files online. Their studies led to the development of HTTP streaming, and then to an improved version called DASH (Dynamic Adaptive Streaming over HTTP), a protocol that “cuts up the information in the MP4 file into chunks of a few seconds each,” says Jean-Claude Dufourd. The segments obtained at the end of this process are successively retrieved by the player to reconstruct the movie or the episode of the series being watched.

This cutting process allows the playback of the video file to be adjusted according to the connection speed. “For each time range, different quality encoding is provided, and the media player is responsible for deciding which quality is best for its conditions of use,” explains Jean Le Feuvre. Typically, if a viewer’s connection speed is low, the streaming player will select the video file with the least amount of data in order to facilitate traffic. The player will therefore select the lowest streaming quality. This feature allows content to continue playing on the platform with minimal risk of interruption.
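
The decision logic of a DASH player can be sketched in a few lines of Python. The bitrate ladder and safety margin below are hypothetical values, not those of any real service: before each segment, the player simply picks the highest rendition that fits under the bandwidth it has just measured.

```python
# Hypothetical bitrate ladder for one title, in kilobits per second.
RENDITIONS_KBPS = [235, 750, 1750, 4300, 8000]

def pick_rendition(measured_throughput_kbps: float, safety_margin: float = 0.8) -> int:
    """Return the highest bitrate that fits under the measured throughput."""
    budget = measured_throughput_kbps * safety_margin
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return candidates[-1] if candidates else RENDITIONS_KBPS[0]

# Re-evaluated before each few-second segment: quality drops when the
# connection slows down and climbs back up when bandwidth recovers.
for throughput in (6000, 2000, 400, 9000):
    print(f"{throughput} kbps measured -> {pick_rendition(throughput)} kbps segment")
```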

In order to achieve this ability to adapt to different usage scenarios, tests have been carried out by scientists and manufacturers. “Tests were conducted to determine the network profile of a phone and a computer,” explains Jean-Claude Dufourd. “The results showed that the profiles were very different depending on the device and the situation, so the content is not delivered with the same fluidity,” he adds.

Economic interests

“Today, we are benefiting from 15 years of technological refinement that have allowed us to make the algorithms efficient enough to stream videos,” says Jean-Claude Dufourd. Since the beginning of streaming, one of the goals has been to broadcast videos with the best possible quality, while also reducing loading lag and putting as little strain on the network capacity as possible.

The challenge is primarily economic; the more strain that streaming platforms put on network capacity to stream their content, the more they have to pay. Work is currently underway on reducing broadcasters’ internet bills. One solution would be to circulate video files mainly among users, thereby creating a less centralized streaming system. This is what peer-to-peer (P2P) file-sharing systems between users allow. This alternative is currently being considered by streaming companies, as it would reduce the cost of broadcasting content.

Rémy Fauvel

Anne-Sophie Taillandier: New member of the Academy of Technologies

Director of Teralab, IMT’s Big Data and AI platform, since 2015, Anne-Sophie Taillandier was elected a member of the Academy of Technologies in March 2022. This election recognizes her work developing projects on data and artificial intelligence at the national and European levels.

Newly elected to the Academy of Technologies, Anne-Sophie Taillandier has for seven years been Director of Teralab, a platform created by IMT in 2012 that specializes in Big Data and Artificial Intelligence. She was drawn towards a scientific occupation as she “always found mathematics enjoyable,” she says. “This led me to study science, first in an engineering school, at CentraleSupélec, and then to complete a doctoral thesis in Applied Mathematics at the ENS, which I defended in 1998,” she adds.

Once her thesis in Artificial Intelligence was completed, she joined Dassault Systèmes. “After my thesis, I wanted to see an immediate application of the things I had learned, so I joined Dassault Systèmes where I held various positions,” says Anne-Sophie Taillandier. During the ten years she spent at the well-known company, she contributed to the development of modeling software, worked in Human Resources, and led the Research & Development department of the brand Simulia. In 2008, she moved to an IT security company, and then in 2012 became Director of Technology at LTU Technologies, an image recognition software company, until 2015, when she took over the management of Teralab at IMT.

“It was the opportunity to work in a wide variety of fields while focusing on data, machine learning, and its applications that prompted me to join Teralab,” says Anne-Sophie Taillandier. Working with diverse companies requires “understanding a profession to grasp the meaning of the data that we are manipulating”. For the Director of Teralab, this experience mirrored that of her thesis, during which she had to understand the meaning of data provided by automotive engineers in order to manipulate it appropriately.

Communicating and explaining

In the course of her career, Anne-Sophie Taillandier realized “that there were language barriers, that there were sometimes difficulties in understanding each other”. She has taken a particular interest in these problems. “I’ve always found it interesting to take an educational approach to explain our work, to try to hide the computational and mathematical complexity in simple language,” says the Teralab director. “Since its inception, Teralab has aimed to facilitate the use of sophisticated technology, and to understand the professions of people who hold the data,” she says.

Teralab positions itself as an intermediary between companies and researchers so that they may understand each other and cooperate. In this project, it is necessary to make different disciplines work together. A technology watch is also important to remain up to date with the latest innovations, which can be better suited to a client’s needs. In addition, Teralab has seen new issues arise during its eight years of existence.

“We realized that the users who came to us in the beginning wanted to work on their own data, whereas today they want to work in an ecosystem that allows the circulation of their data. This raises issues of control over the use of their data, as well as of architecture and exchange standards,” points out Anne-Sophie Taillandier. The pooling of data held by different companies raises issues of confidentiality, as they may be in competition on certain points.  

European recognition

“At Teralab, we asked ourselves about data sharing between companies, which led us to the Gaia-X initiative.” In this European association, Teralab and other companies participate in the development of services to create a ‘cloud federation’. This is essential as a basis for enabling the flow of data and interoperability, and for avoiding locking companies into particular cloud solutions. Europe’s technological independence depends on these types of regulations and standards. Not only would companies be able to protect their digital assets and make informed choices, but they would also be able to share information with each other, under conditions suited to the sensitivity of their data.

In the development of Gaia-X federated services and the creation of data spaces, Teralab provides its technological and human resources to validate architecture, to prototype new services on sector-specific data spaces, and to build the open-source software layer that is essential to this development. “If EDF or another critical infrastructure operator, in banking for example, wants to be able to move sensitive data into these data spaces, it will need both technical and legal guarantees.”

Since its public funding came to an end in 2018, Teralab has not stopped growing, especially at the European level. “We currently have a European project on health-related data on cardiovascular diseases,” says the Teralab director. The goal is for researchers in European countries who need data on these diseases to be able to conduct research via a DataHub, a space for sharing data. In the future, Teralab’s goal is to continue its work on cloud federation and to “become a leading platform for the creation of digital ecosystems,” says Anne-Sophie Taillandier.

Rémy Fauvel

All aboard for security

With France’s rail transport market opening up to competition, the SNCF’s security work is also becoming a service. This transformation raises questions on how security as an activity is organized. Florent Castagnino, Sociology researcher at IMT Atlantique, has studied how this service can adapt.  

2021 saw private train companies newly authorized to operate on French rails, previously monopolized by the SNCF. As well as opening its rail system to the competition, the state-owned company plans to offer security services to other companies. In the railway sector, “security is defined as the prevention of antisocial acts such as theft, fraud, attacks and assaults, whereas safety relates to the prevention of accidents such as issues with points or signals,” indicates Florent Castagnino, sociology researcher at IMT Atlantique.

While security is a preventive activity, it is also a commercial one. With the market opening up to competition, security services are sure to become a profitable venture as well. This raises the question of whether a company prepared to pay more than another could obtain better security provision for its journeys and routes. Furthermore, with security guards in train stations, a company can not only deter malicious acts but also reassure passengers. “Even if the trains are secure, a company may wish for agents to patrol the platform or on the train to enhance its brand image,” states Castagnino.

However, the sale of security services to competing companies poses a challenge for the distribution of agents across France. While certain stations or regional train lines may wish to purchase such services, they might not have access to them if there is too much demand from other companies or in other regions. Even though the SNCF security department is one of the departments that increases its workforce most regularly, “the question arises of how decisions will be made,” explains the researcher.

Representing complex problems

The distribution of personnel across the country represents a challenge due to the limited number of agents, but it also shows that delinquency is handled purely geographically. Analysis of databases of police reports and calls to emergency services from railway patrols and workers reveals the frequency of malicious acts, their nature and the place in which they occur. Using this information, patrols are sent to stations with the highest number of criminal acts reported.

Typically, if more malicious acts are reported on line A than on line B, more agents will be sent to line A. In this case, database analysis automatically ignores the multiple, complex origins of delinquency, focusing only on the phenomenon’s geographic aspect. As the causes of delinquency are considered external to railway companies, the companies cannot act on them as easily as they can on safety issues, whose causes are considered internal and are therefore simpler to identify and resolve.

For Castagnino, making use of “databases for delinquency prevention means we imitate the way we handle accidental problems”. From the 1990s, “there was a desire to make security more concrete, partly by using a model for the way in which we manage accidents,” continues the researcher. “This can be explained by the decision to apply the methods that work in one area, here safety, to another, in this case, security,” he adds. In the case of safety, if a technical fault such as a signaling failure is regularly reported on a certain kind of equipment, maintenance agents will be sent to repair the traffic lights on the railways concerned, and a general servicing of the equipment may be ordered. For security, if there is a station with many incidents reported, agents from the security department may be sent to the site to address the delinquency problem.

Regulatory surveillance

Most of the time, agents perform rounds to control delinquency. Their simple presence is generally enough to calm potentially disruptive groups. In train stations, “security guards self-regulate and expect social groups to do the same,” explains Castagnino. In a way, they serve as an anchor for groups to control themselves. If that does not work, the security forces can intervene to regulate them. The young researcher calls this process ‘regulatory surveillance’.

If, for example, in a station, one or several individuals from a group start to bother someone, the other members of the group will often call them to order, in particular to maintain their collective right to remain in the station, which is seen as an important place of social connection. Regulatory surveillance also concerns military security forces. They sometimes operate in the same stations regularly, which means they get to know the groups that hang around inside. If a new agent is tempted to act aggressively without a clear reason, their colleagues (who know the place) can dissuade them, explaining that their intentions are disproportionate in relation to the group’s actions. This kind of relationship makes it possible to preserve good relations between agents and civilians.

In recent decades, several terrorist attacks on trains (Madrid in 2004, London in 2005, the Thalys in 2015) have raised the question of introducing airport-style security measures to the rail system. In particular, the SNCF has brought in certain practices used in airports, such as the use of metal detectors in certain stations (particularly for Thalys trains). France’s national railway company is also continually working to assess security threats more objectively, and seeking to make use of the advantages provided by new technological tools.

Rémy Fauvel


Improving surveillance through automatic recognition?

Projects are currently underway to equip surveillance cameras with automatic image processing software. This would allow them to recognize suspicious behavior. “For now, these techniques are sub-optimal,” points out Castagnino. Certain cameras “don’t work or don’t have good video quality, specifically because they are too old,” indicates the researcher. To implement this technology, the camera fleet will therefore need to be updated.



Challenges to the SNCF in the 21st century

Castagnino’s research on rail systems is published in “La SNCF à l’épreuve du XXIe siècle”, a collective work that discusses shifts in the French railway system and its recent evolution from a historical perspective.


IN-TRACKS: tracking energy consumption

Nowadays, companies are encouraged to track their energy consumption, specifically to limit their greenhouse gas emissions. IN-TRACKS, a start-up created last November, has developed a smart dashboard to visualize energy usage.

“With rising energy costs, these days it is useful for companies and individuals to identify their excessive uses of energy,” says Léo-Paul Keyser, co-founder of In-Tracks. In November 2021, the IMT Nord Europe graduate created the start-up with Stéphane Flandre, a former Enedis executive. The young company, incubated at IMT Nord Europe, provides monitoring solutions to companies to track their energy consumption via a digital dashboard. The software, which can be accessed via computer, smartphone or tablet, displays a range of information pertaining to energy usage.

Typically, graphics show consumption according to peak and off-peak times, depending on the surface area of a room, house or building. “A company may have several premises for which it wishes to track energy consumption,” says Keyser. “The dashboard provides maps and filters that make it possible to select regions, sites, or kinds of equipment for which the customer wishes to compare energy performance,” he adds.

For advice and clarification, “clients can request a videocall so we can help them interpret and understand their data, with our engineers’ expertise,” continues the start-up’s co-founder. The young company also wishes to develop a system of recommendations based on artificial intelligence, analyzing consumption data and sending advice on the dashboard.

The load curve: a tool for analysis

The specificities of clients’ energy use are identified based on load curves: graphs generated by smart electricity and gas meters, which show how energy consumption evolves over a set period. To gain access to these curves, In-Tracks has signed agreements with EDF and Enedis. “With the data from the meters received every hour or every five minutes, we can try to track the client’s consumption in real time,” explains the young entrepreneur.

The shorter the intervals at which data is received from the meter, typically every few seconds rather than every few minutes, the more detailed the load curve will be and the more relevant the analysis in diagnosing energy use. By comparing a place’s load curve values to its expected energy consumption threshold, excessive uses can be identified.
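
As a simple illustration (with made-up readings and a made-up threshold, not In-Tracks’ actual method), flagging excessive use then amounts to comparing each point of the load curve with the expected consumption for that period:

```python
# Hypothetical hourly load curve for one site, in kWh, and an expected ceiling.
hourly_load_kwh = [3.1, 2.9, 3.0, 7.8, 8.2, 3.2, 2.8]
expected_max_kwh = 5.0

# Keep the hours where measured consumption exceeds the expected threshold.
excess = [(hour, load) for hour, load in enumerate(hourly_load_kwh)
          if load > expected_max_kwh]
print("Hours exceeding the expected threshold:", excess)
```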

Thanks to the dashboard, clients can identify sub-optimal zones and equipment. They can therefore reduce not only their energy bills but also their carbon footprint, i.e. their greenhouse gas emissions, which are also evaluated by the start-up. To do so, the company bases its calculations on the quantities of fuels used, such as natural gas or biodiesel. “We plan to use programs that make it possible to track variations in the energy mix in real time, in order to get our results as close as possible to reality,” Keyser remarks.

An application accessible to individuals

The young company plans to make an application available to individuals, allowing them to visualize consumption data from smart meters. The software will display the same data as for companies, specifically, energy consumption over time, according to the surface area. In-Tracks also wishes to add a fun side to its application.

“For example, we would like to set up challenges between friends and family members, for everyone to reduce their energy consumption,” explains Keyser. “The aim is to make the subject of energy consumption something that’s fun rather than a source of restrictions,” he adds. To develop this aspect, the start-up is working with students from IMT Nord Europe. The young company is also undertaking research into the Internet of Things, in order to create data analysis methods that make it possible to identify energy issues even more specifically.

Rémy Fauvel

Preparing the networks of the future

Whether in the medical, agricultural or academic sectors, the Intelligence of Things will become a part of many areas in society. However at this point, this sector, sometimes known as Web 3.0, faces many challenges. How can we make objects communicate with each other, no matter how different they might be? With the Semantic Web, a window and a thermometer can be connected, or a CO2 sensor and a computer. Research in this area of the web also aims to open up the technological borders that separate networks, just like research around the Open RAN. This network architecture, based on free software, could put an end to the domination of the small number of equipment manufacturers that telecommunications operators rely on.

However, the number of devices accessing networks is constantly rising. This trend risks making the movement of data more complicated and generating interference, or information jamming phenomena, which are caused in particular by the number of connected devices. By exploring the nature of different kinds of interference and their unique features, we can deal with them and limit them more efficiently.

Furthermore, interference also occurs in alternative telecommunications systems, like Non-Orthogonal Multiple Access (NOMA). While this system makes it possible to host more users on networks, as frequency sub-bands are shared better, interference still presents an intrinsic problem. All these challenges must be overcome for networks to be able to interconnect efficiently and facilitate data-sharing between consumers, data processors, storage centers and authorities in the future.


Better network-sharing with NOMA

The rise in the number of connected devices will lead to increased congestion of frequencies available for data circulation. Non-Orthogonal Multiple Access (NOMA) is one of the techniques currently being studied to improve the hosting capacity of networks and avoid their saturation.

To access the internet, a mobile phone must exchange information with base stations, devices commonly known as relay antennas. These data exchanges operate on frequency sub-bands, channels specific to each base station. To host multiple connected device users, a channel is attributed to each user. With the rise in the number of connected objects, there will not be enough sub-bands available to host them all.

To mitigate this problem, Catherine Douillard and Charbel Abdel Nour, telecommunications researchers at IMT Atlantique, have been working on NOMA: a system that places multiple users on the same channel, unlike the current system. “Rather than allocating a frequency band to each user, device signals are superposed on the same frequency band,” explains Douillard.

Sharing resources

“The essential idea of NOMA involves making a single antenna work to serve multiple users at the same time,” says Abdel Nour. And to go even further, the researchers are working on Power-Domain NOMA, “an approach that aims to separate users sharing the same frequency on one or more antennas, according to their transmitting power,” continues Douillard. This system provides more equitable access to spectrum resources and available antennas across users. Typically, when a device encounters difficulties in accessing the network, it may try to access a resource already occupied by another user. The transmitting power is then adapted so that the information sent by the device successfully arrives at its destination, while limiting ‘disturbance’ to the other user.

Superposing multiple users on the same resource, however, creates problems in accessing it. For communication to work, the signals sent by the devices need to be received at sufficiently different strengths so that the antennas can tell them apart. If the signal strengths are similar, the antennas will mix them up. This can cause interference, in other words, information jamming phenomena, which can hinder the smooth playing of a video or a round of an online game.

Interference: an intrinsic problem for NOMA

To avoid interference, receivers are fitted with decoders, which differentiate between signals according to their reception quality. When the antenna receives the superposed signals, it identifies the one with the best reception quality, decodes it, and subtracts it from the received signal. It can then recover the lower-quality signal. Once the signals are identified, the base station gives each one access to the network. “This means of handling interference is quite simple to implement in the case of two signals, but much less so when there are many,” states Douillard.
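
A toy baseband simulation gives an idea of the principle. Two BPSK users are superposed with different powers and recovered by successive interference cancellation; the power split and noise level below are arbitrary, and the sketch leaves out channel effects entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two users share the same resource; the power split must be unequal.
bits_strong = rng.integers(0, 2, n)   # user allocated the higher power
bits_weak = rng.integers(0, 2, n)     # user allocated the lower power
s_strong = 2.0 * bits_strong - 1.0    # BPSK symbols in {-1, +1}
s_weak = 2.0 * bits_weak - 1.0

p_strong, p_weak = 0.8, 0.2
x = np.sqrt(p_strong) * s_strong + np.sqrt(p_weak) * s_weak
y = x + 0.05 * rng.standard_normal(n)          # superposed signal plus mild noise

# Step 1: decode the stronger signal, treating the weaker one as noise.
strong_hat = np.sign(y)
# Step 2: subtract the reconstructed strong signal, then decode the weaker one.
residual = y - np.sqrt(p_strong) * strong_hat
weak_hat = np.sign(residual)

print("errors, strong user:", int(np.sum(strong_hat != s_strong)))
print("errors, weak user:  ", int(np.sum(weak_hat != s_weak)))
```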

“To handle interference, there are two main possibilities,” explains Abdel Nour. “One involves canceling the interference, or in other words, the device receivers detect which signals are not intended for them and eliminate them, keeping only those sent to them,” adds the researcher. This approach can be facilitated by interference models, such as those studied at IMT Nord Europe. The second solution involves making the antennas work together. By exchanging information about the quality of connections, they can implement algorithms to determine which devices should be served by NOMA, while avoiding interference between their signals.

Intelligent allocation

“We are trying to ensure that resource allocation techniques adapt to user needs, while adjusting the power they need, with no excess,” states Abdel Nour. Depending on the number of users and the applications being used, the number of antennas in play will vary. If a lot of machines are trying to access the network, multiple antennas can be used at the same time. In the opposite situation, a single antenna may be enough.

Thanks to algorithms, the base stations learn to recognize the different characteristics of devices, like the kinds of applications being used when the device is connected. This allows the intensity of the signal emitted by the antennas to be adapted, in order to serve users appropriately. For example, a streaming service will need a higher bit rate, and therefore a stronger transmitting power than a messaging application.

“One of the challenges is to design high-performing algorithms that are energy-efficient,” explains Abdel Nour. By reducing energy consumption, the objective is to achieve lower operating costs than the current network architecture, while allowing for a significant rise in the number of connected users. NOMA and other research into interference are part of an overall approach to increase network hosting capacity. With the development of the Internet of Things in particular, this work will prove necessary to avoid information traffic jams.

Rémy Fauvel


Interference: a source of telecommunications problems

The growing number of connected objects is set to cause a concurrent increase in interference, a phenomenon which has remained an issue since the birth of telecommunications. In the past decade, more and more research has been undertaken in this area, leading us to revisit the way in which devices handle interference.

“Throughout the history of telecommunications, we have observed an increase in the quantities of information being exchanged,” states Laurent Clavier, telecommunications researcher at IMT Nord Europe. “This phenomenon can be explained by network densification in particular,” adds the researcher. The increase in the amount of data circulating is paired with a rise in interference, which represents a problem for network operations.

To understand what interference is, first, we need to understand what a receiver is. In the field of telecommunications, a receiver is a device that converts a signal into usable information — like an electromagnetic wave into a voice. Sometimes, undesired signals disrupt the functioning of a receiver and damage the communication between several devices. This phenomenon is known as interference and the undesired signal, noise. It can cause voice distortion during a telephone call, for example.

Interference occurs when multiple machines use the same frequency band at the same time. To avoid interference, receivers choose which signals they pick up and which they drop. While telephone networks are organized to avoid two smartphones interfering with each other, this is not the case for the Internet of Things, where interference is becoming critical.

Read on I’MTech: Better network-sharing with NOMA

Different kinds of noise causing interference

With the boom in the number of connected devices, the amount of interference will increase and cause the network to deteriorate. By improving machine receivers, it appears possible to mitigate this damage. Most connected devices are equipped with receivers adapted for Gaussian noise. These receivers make the best decisions possible as long as the signal received is powerful enough.

By studying how interference occurs, scientists have understood that it does not follow a Gaussian model, but rather an impulsive one. “Generally, there are very few objects operating at the same time as ours and near our receiver,” explains Clavier. “Distant devices generate weak interference, whereas closer devices generate strong interference: this is the phenomenon that characterizes impulsive interference,” he specifies.

Reception strategies implemented for Gaussian noise do not account for the presence of these strong noise values. They are therefore easily misled by impulsive noise, with receivers no longer able to recover the useful information. “By designing receivers capable of processing the different kinds of interference that occur in real life, the network will be more robust and able to host more devices,” adds the researcher.

Adaptable receivers

For a receiver to be able to understand Gaussian and non-Gaussian noise, it needs to be able to identify its environment. If a device receives a signal that it wishes to decode while the signal of another nearby device is generating interference, it will use an impulsive model to deal with the interference and decode the useful signal properly. If it is in an environment in which the devices are all relatively far away, it will analyze the interference with a Gaussian model.

To correctly decode a message, the receiver must adapt its decision-making rule to the context. To do so, Clavier indicates that a “receiver may be equipped with mechanisms that allow it to calculate the level of trust in the data it receives in a way that is adapted to the properties of the noise. It will therefore be capable of adapting to both Gaussian and impulsive noise.” This method, used by the researcher to design receivers, means that the machine does not need to know its environment in advance.
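
The idea can be illustrated with a toy simulation: a bit repeated over a few samples, corrupted by Bernoulli-Gaussian impulsive noise, and decoded either by summing the raw samples (the Gaussian-optimal rule) or by capping the trust placed in each sample before summing. All parameters are illustrative, and the clipping rule is only one simple stand-in for the robust receivers described here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, L = 20_000, 5
bits = rng.integers(0, 2, n_bits)
s = np.repeat(2.0 * bits - 1.0, L).reshape(n_bits, L)    # BPSK, repeated L times

# Impulsive noise: a weak Gaussian background plus rare, very strong impulses.
impulses = rng.random((n_bits, L)) < 0.05
noise = 0.3 * rng.standard_normal((n_bits, L)) \
        + impulses * 10.0 * rng.standard_normal((n_bits, L))
y = s + noise

# Gaussian receiver: sums the raw samples (optimal only if the noise were Gaussian).
gauss_decision = y.sum(axis=1) > 0
# Robust receiver: limits the weight any single, possibly impulsive, sample can carry.
robust_decision = np.clip(y, -1.0, 1.0).sum(axis=1) > 0

print("Gaussian receiver bit errors:", int(np.sum(gauss_decision != (bits == 1))))
print("Robust receiver bit errors:  ", int(np.sum(robust_decision != (bits == 1))))
```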

Currently, industrial actors are not particularly concerned with the nature of interference. However, they are interested in the means available to avoid it. In other words, they do not see the usefulness of questioning the Gaussian model and undertaking research into the way in which interference is produced. For Clavier, this lack of interest will be temporary, and “in time, we will realize that we will need to use this kind of receiver in devices,” he notes. “From then on, engineers will probably start to include these devices more and more in the tools they develop,” the researcher hopes.

Rémy Fauvel