
What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or limited to small groups into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics, and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that should, as is often the case, be enforced by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable hardware advances: graphics cards offer significant display capabilities, and Virtual Reality headsets have reached a resolution equivalent to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets work on a compromise; they must offer the largest possible field of view whilst remaining light, small and energy self-sufficient. On the other hand, graphics cards are power-hungry and give off a great deal of heat. Therefore, in order to preserve the battery life of the headsets, the calculations behind the metaverse display have to be done on remote server farms before the images are streamed back. That’s where 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to increase user interaction is to set users in motion. There is an interesting research topic on the development of multidirectional treadmills. This is a much more complicated problem than it seems, and it only takes the horizontal plane into account – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our ability to feel all our senses at the same time and to detect inconsistencies. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that feedback can be felt without seeming strange or delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly, and safely in a virtual world will never work: beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to provide force feedback on one point of the body – not yet on the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We already know that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial part of the metaverse. The prevalence of visual disorders and the risk of Virtual Reality Sickness – a simulation sickness that affects many people – are therefore certain to be consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be utopian to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make its use more widespread. Currently, each website has data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin, i.e. the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment that the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


How can we assess the health risks associated with exposure to electromagnetic fields?

As partners of the European SEAWave project, Télécom Paris and the C2M Chair are developing innovative measurement techniques to respond to public concern about the possible effects of cell phone usage. Funded by the EU to the tune of €8 million, the project will be launched in June 2022 for a period of 3 years. Interview with Joe Wiart, holder of the C2M Chair (Modeling, Characterization and Control of Electromagnetic Wave Exposure).

Could you remind us of the context in which the call for projects ‘Health and Exposure to Electromagnetic Fields (EMF)’ of the Horizon Europe program was launched?

Joe Wiart – The exponential use of wireless communication devices throughout Europe comes with a perceived risk associated with electromagnetic radiation, despite the existing protection thresholds (Recommendation 1999/519/EC and Directive 2013/35/EU). With the rollout of 5G, these concerns have multiplied. The Horizon Europe program will help to address these questions and concerns, and will study the possible impacts on specific populations, such as children and workers. It will intensify studies on millimeter-wave frequencies and investigate compliance analysis methods in these frequency ranges. The program will look at the evolution of electromagnetic exposure, as well as the contribution of exposure levels induced by 5G and new variable beam antennas. It will also investigate tools to better assess risks, communicate, and respond to concerns.

What is the challenge of SEAWave, one of the four selected projects, of which Télécom Paris is a partner?

JW – Currently, a lot of work, such as that of the ICNIRP (International Commission on Non-Ionizing Radiation Protection), has been done to assess the compliance of radio-frequency equipment with protection thresholds. This work is largely based on conservative methods or models, which, by design, assume worst-case scenarios and overestimate exposure. SEAWave will contribute to these approaches for exposure to millimeter waves (with in vivo and in vitro studies). Yet, for better control of possible impacts, as in epidemiological studies, and without dismissing conservative approaches, it is also necessary to assess actual exposure. The work carried out by SEAWave will focus on establishing potentially new patterns of use, estimating the associated exposure levels, and comparing them to existing patterns. Using innovative technology, the activities will focus on monitoring not only the general population, but also specific risk groups, such as children and workers.

What scientific contribution have Télécom Paris researchers made to this project that includes eleven Work Packages (WP)?

JW – The C2M Chair at Télécom Paris is involved in the work of four interdependent WPs, and is responsible for WP1 on EMF exposure in the context of the rollout of 5G. Among the eleven WPs, four are dedicated to millimeter waves and biomedical studies, and four others are dedicated to monitoring the exposure levels induced by 5G. The last three are dedicated to project management, but also to tools for risk assessment and communication. The researchers at Télécom Paris will mainly be taking part in the four WPs dedicated to monitoring the exposure levels induced by 5G. They will draw on measurement campaigns in Europe, networks of connected sensors, tools from artificial neural networks and, more generally, methods from Artificial Intelligence.

What are the scientific obstacles that need to be overcome?

JW – For a long time, assessing and monitoring exposure levels has been based on deterministic methods. With the increasing complexity of networks, like 5G, but also with the versatility of uses, these methods have reached their limits. It is necessary to develop new approaches based on the study of time series, statistical methods, and Artificial Intelligence tools applied to the dosimetry of radio frequency fields. Télécom Paris has been working in this field for many years; this expertise will be essential in overcoming the scientific obstacles that SEAWave will face.

The SEAWave consortium has around 15 partners. Who are they and what are your collaborations?

JW – These partners fall into three broad categories. The first is related to engineering: in addition to Télécom Paris, there is, for example, the Aristotle University of Thessaloniki (Greece), the Agenzia Nazionale per le Nuove Tecnologie, l’Energia e lo Sviluppo Economico Sostenibile (Italy), Schmid & Partner Engineering AG (Switzerland), the Foundation for Research on Information Technologies in Society (IT’IS, Switzerland), the Interuniversity Microelectronics Centre (IMEC, Belgium), and the CEA (France). The second category concerns biomedical aspects, with partners such as the IU Internationale Hochschule (Germany), Lausanne University Hospital (Switzerland), and the Fraunhofer-Institut für Toxikologie und Experimentelle Medizin (Germany). The last category is dedicated to risk management. It includes the International Agency for Research on Cancer (IARC, France), the Bundesamt für Strahlenschutz (Germany) and the French National Frequency Agency (ANFR, France).

We will mainly collaborate with partners such as the Aristotle University of Thessaloniki, the CEA, the IT’IS Foundation and the IMEC, but also with the IARC and the ANFR.

The project will end in 2025. In the long run, what are the expected results?

JW – First of all, tools to better control the risk and better assess the exposure levels induced by current and future wireless communication networks. All the measurements that will have been carried out will provide a good characterization of the exposure for specific populations (e.g. children, workers) and will lay the foundations for a European map of radio frequency exposure.

Interview by Véronique Charlet


Debate: The Metaverse, flying taxis and other weapons of mass planetary destruction

Fabrice Flipo, Institut Mines-Télécom Business School

5G, 8K, flying taxis and the Metaverse are all topics of great interest, raising many questions. However, such questions are rarely, if ever, raised from an environmental perspective.

A recent article from French daily newspaper Le Monde, published October 18 2021 and titled “Facebook to hire 10,000 people in Europe to create the Metaverse”, discusses employment, the location of the innovation production site, “use cases” of this application and the experiences it will provide. However, the risks highlighted only relate to addiction or the rights of individuals in the Metaverse.

There is the same narrative framing on the topic of flying taxis, providing promises on the one hand and focusing on the user experience on the other.

In Toulouse, Airbus presents its flying taxi scheduled for 2023 (AFP, 2021).

However, the connection is never made between these initiatives and their potential impact on the biosphere. To find such a connection, you have to go to the “Environment” or “Books” sections of Le Monde: there, consumers are blamed for watching too many videos or sending too many emails.

This way of “compartmentalizing” debates and issues is nothing new – if you flick through old editions of Le Monde, you can find it again and again.

Hype technologies vs punitive environmentalism

Regulation works in the same way. On one side are laws and directives organizing the growth of digital technology and its applications; on the other are those that investigate the environmental implications of such technology, managed by other agencies, such as ADEME (the French Agency for Ecological Transition).

One of the main consequences of this division is making environmentalism appear “punitive”. On the one hand, we have technological innovations and related hype, promising new experiences, fun, happiness and incredible achievements. And on the other, the issue of the environment; discussing waste, energy efficiency, the destruction of the planet and other “depressing”, “boring” issues.

This also holds true for research: researchers with “new, good” technology are placed in the front row, with others left at the back. This is how the ombudsman at France Info explained that footballer Lionel Messi’s move to Paris Saint-Germain was “worth” more airtime than the report from the IPCC – the first topic was a longer-running story while the IPCC report was a one-off event.

Obliterating a more minimalist approach

Another consequence is that environmental regulation remains largely confined to the area of “energy efficiency”, a technical term referring to the amount of resources and energy needed to manufacture a good or provide a service.

This approach overshadows others, which are essential for the environmental transition – namely, approaches related to using less. Such approaches raise the question of whether we really need a certain good or service. Whether we are talking about 8K or 5G, the not-for-profit Shift Project questions the usefulness of these technologies in light of their forecasted effects on the planet.

The third consequence of this separation between digital expansion and environmental impact is that environmental policy is always struggling to catch up. We see this every day: despite regulation, the digital sector’s environmental impact continues to grow. Technologies are developed for millions or even billions of dollars. And only afterwards does the environmental question get raised. But by then, it is already too late!

Widespread dependencies… that could have been predicted

However, in a large number of cases, the effects of these projects are foreseeable – we can see well in advance which ideas will be disastrous, or at least highly problematic.

Thinking about this early on means we can avoid situations of technological lock-in, such as the widespread dependency on cars or smartphones in our current lifestyles. These are situations that are hard to get out of, as they require coordinating a change in infrastructure and habits, just like the use of bikes in cities “versus” cars.

We can see these easy-to-predict consequences with 5G, 8K, flying taxis and the Metaverse.

For example, 5G is designed to allow for a large increase in data transfer, but it comes at a huge energy cost, even though energy efficiency in this area has improved significantly since the 1950s. As emphasized by the Shift Project in its report, efficiency gains, although steady at the technical level, cannot compensate for the rise in data…

This reasoning also applies to 8K and the Metaverse, which is basically a conceptually similar, improved version of Second Life, a digital universe launched in 2003 that still exists today. At the time, technology specialist Nicholas Carr remarked that a Second Life avatar consumes about as much energy as the average Brazilian.

Works of fiction such as Virtual Revolution (2016) depict a world in which the Metaverse will absorb a key part of our social interactions, in the same way that social media is a major vehicle for daily conversations nowadays.

It is easy to predict the amount of information that will need to be produced and processed, compared with what exists already. IT company Cisco warns that these universes could easily become the biggest source of traffic on the internet.

As for flying taxis, their aim is to find space in the air that has been “lost” on the ground: in short, to clog up one of the last remaining spaces, despite the fact that moving up and down generally uses more energy than moving horizontally, due to gravity.

Our relationship to nature

We can see that it is not hard to establish the connection between technological innovation and the environmental situation; there is no conceptual difficulty here. And environmentalism does not always have to be lagging behind.

Marx explained, back in his own time, that the question of humans’ relationship to nature is a technical one, and so are our choices. It goes far beyond taking a blissful weekend stroll and admiring supposedly untouched areas…

Environmentalists have long been arguing that certain technological choices are incompatible with conditions for a good life on Earth. But these problems are formulated in the public sphere in a compartmentalized way, which prevents any serious discussion.

So what is the blockage?

Environmentalism does not have to be a “punitive” issue. Bike-riding, local products, renewable energy, insulation, DIY, and more… There are many environmental initiatives that can be discussed in the public sphere, as long as the various possible avenues are appropriately addressed.

So where is the issue? Why does hype benefit so many projects, when we can easily show that they will pose huge problems once deployed on a certain scale? There are several explanations.

Tech projects receive the most funding, and are capable of a huge amount of impact in terms of persuasive power. They make use of marketing, surveys and other tools, perfectly dosing and precisely targeting their storytelling to reach the most receptive audiences, before progressively expanding to new fringes of the population, until they achieve saturation.

These selectively edited stories are also a part of the broader history of developed societies and their journey to create the most capital-intensive technologies, as Marx showed as early as 1867, emphasizing the effects of expanded reproduction of capital. Socialism also placed plenty of hope in this “expansion of productive forces”.

Moving away from this linear history, always pursuing the same aim, is seen as “moving backwards”, and somehow we would rather continue this narrative than preserve life on Earth. It is a narrative where science and science fiction combine, like Elon Musk announcing a future colony on Mars. Here, a cognitive bias known as the “Othello effect” is at play.

Another explanation relates to capitalization itself, which represents a means of power for organizations. The greater the capitalization, the more the networks controlled by the organization will grow – and the greater the persuasive power. Elon Musk (yes, him again) aims to control the entire fleet of personal vehicles, with his robotaxis and self-driving cars. And what is true of companies is also true of governments, as highlighted by François Fourquet in his work “Les Comptes de la puissance” (The Accounts of Power).

While dominant ideas of socialism in the 20th century have always been fascinated by the collective power created by capitalism, trying to make it benefit as many as possible, environmentalism, on the other hand, supports decentralized initiatives and short supply chains.

This trend often breaks with the “politics of power”, which explains in particular why conservatives are so opposed to it. Is it “realistic”, in a world where countries try to dominate each other? But on the other hand, can the leadership race last indefinitely if it undermines life on Earth?

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).

David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since he completed his studies in 1997, Gesbert has been working on MIMO, a telecommunications system that was created in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).
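To give a concrete feel for this spatial multiplexing idea, here is a minimal, illustrative sketch (not Gesbert’s actual algorithms): two symbol streams are transmitted at the same time over a 2×2 channel and separated at the receiver with a simple zero-forcing equalizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent QPSK symbol streams, sent simultaneously in the same band.
n_symbols = 4
symbols = (rng.choice([-1, 1], size=(2, n_symbols))
           + 1j * rng.choice([-1, 1], size=(2, n_symbols))) / np.sqrt(2)

# 2x2 channel matrix: entry (i, j) couples transmit antenna j to receive antenna i.
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# Each receive antenna observes a mixture of both streams, plus a little noise.
noise = 0.01 * (rng.standard_normal((2, n_symbols))
                + 1j * rng.standard_normal((2, n_symbols)))
y = H @ symbols + noise

# Zero-forcing receiver: invert the channel to separate the two streams again.
x_hat = np.linalg.inv(H) @ y

print(np.round(symbols, 2))
print(np.round(x_hat, 2))   # close to the transmitted symbols
```

In real systems the receiver must first estimate H from pilot signals and uses more robust detectors than plain inversion, but the principle of sending several streams over the same radio resource is the same.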

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.
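As a rough illustration of this splitting into many low-rate streams, the sketch below (a toy example, assuming an ideal channel and omitting the cyclic prefix, pilots and coding used in real systems) maps 64 QPSK symbols onto 64 parallel subcarriers with an inverse FFT and recovers them with an FFT.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 64

# One OFDM symbol: 64 low-rate QPSK symbols, one per subcarrier.
qpsk = (rng.choice([-1, 1], n_subcarriers)
        + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)

# Transmitter: the inverse FFT turns the frequency-domain symbols
# into a single time-domain waveform sent over the air.
time_signal = np.fft.ifft(qpsk)

# (A real system would prepend a cyclic prefix here to absorb multipath echoes.)

# Receiver: the FFT recovers each subcarrier independently.
recovered = np.fft.fft(time_signal)

print(np.allclose(recovered, qpsk))  # True on this ideal channel
```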

In 2001, Gesbert moved to Norway, where he taught for two years as an adjunct professor in the IT department at the University of Oslo. One year later, he published an article showing that complex propagation environments favor the functioning of MIMO. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment in which there are many buildings, cars, and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.
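The effect of the propagation environment can be made tangible with the standard MIMO capacity formula, C = log2 det(I + (ρ/Nt)·H·Hᴴ). The toy comparison below (illustrative values only) contrasts a well-conditioned channel, as produced by rich urban scattering, with a degenerate channel in which every antenna pair sees essentially the same path.

```python
import numpy as np

rng = np.random.default_rng(2)
snr = 100.0   # linear SNR (20 dB), chosen arbitrarily for the example
n = 4         # 4x4 MIMO

def capacity(H):
    # log2 det(I + (snr/n) * H H^H), in bit/s/Hz
    return np.log2(np.linalg.det(np.eye(n) + (snr / n) * H @ H.conj().T).real)

# Rich scattering: independent paths between every antenna pair (urban-like).
H_rich = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# Poor scattering: all antennas see essentially one and the same path (open area).
H_poor = np.ones((n, n), dtype=complex)

print(f"rich scattering: {capacity(H_rich):.1f} bit/s/Hz")
print(f"poor scattering: {capacity(H_poor):.1f} bit/s/Hz")
```

The rich channel supports several parallel streams, while the degenerate one behaves much like a single-antenna link, which is the intuition behind the quote above.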

In 2003, he joined EURECOM, where he became a professor and five years later, head of the Mobile Communications department. There, he has continued his work aiming to improve MIMO. His research has shown him that base stations — also known as relay antennas — could be useful to improve the performance of this mechanism. By using antennas from multiple relay stations far apart from each other, it would be possible to make them work together and produce a giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still being performed at present to make this mechanism usable.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, which takes its name from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to place themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has occurred and the network infrastructure has been destroyed.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes performed by the receivers or base stations that are expected to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct flaws in receivers with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that in order to cooperate, agents like radio receivers or relay stations make decisions based on their own data, which sometimes has to be set aside so that they can be guided by the decisions of agents with access to better information. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient by reducing the number of times that transmissions take place, by restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.
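The jump from meter-level to centimeter-level precision is largely a question of bandwidth. As a rough, back-of-the-envelope illustration (the bandwidth figures are assumptions, not 6G specifications), the range resolution of a time-of-flight measurement scales as c / (2B):

```python
# Rough time-of-flight arithmetic: range resolution ~ c / (2 * bandwidth).
C = 299_792_458  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2 * bandwidth_hz)

for label, bw in [("100 MHz (typical 5G channel)", 100e6),
                  ("400 MHz (5G millimeter-wave channel)", 400e6),
                  ("15 GHz (hypothetical 6G sensing bandwidth)", 15e9)]:
    print(f"{label}: ~{range_resolution_m(bw) * 100:.1f} cm")
```

This gives roughly 150 cm, 37 cm and 1 cm respectively, consistent with the orders of magnitude quoted above.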

Rémy Fauvel


The virtualization of optical networks to support… 5G

Mobile networks are not entirely wireless. They also rely on a network of optical fibers, which connect antennas to the core network, among other things. With the arrival of 5G, optical networks must be able to keep up with the ramping up of the rest of the mobile network to ensure the promised quality of service. Two IMT Atlantique researchers are working on this issue, by making optical networks smarter and more flexible.  

In discussions of issues surrounding 5G, it is common to hear about the installation of a large number of antennas or the need for compatible devices. But we often overlook a crucial aspect of mobile networks: the fiber optic infrastructure on which they rely. Like previous generations, 5G relies on a wired connection in most cases. This technology is also used in the “last mile”. It therefore makes it possible to connect antennas to core network equipment, which is linked to most of the connected machines around the world. It can also connect various devices within the same antenna site.

In reality, 5G is even more dependent on this infrastructure than previous generations since the next-generation technology comes with new requirements related to new uses, such as the Internet of Things (IoT). For example, an application such as an autonomous car requires high availability, perfect reliability, very-low latency etc. All of these constraints weigh on the overall architecture, which includes fiber optics. If they cannot adapt to new demands within the last mile, the promises of 5G will be jeopardized. And new services (industry 4.0, connected cities, telesurgery etc.) will simply not be able to be provided in a reliable, secure way.

Facilitating network management through better interoperability

Today, optical networks are usually over-provisioned in relation to current average throughput needs. They are designed to be able to absorb 4G peak loads and are neither optimized, nor able to adapt intelligently to fluctuating demand. The new reality created by 5G therefore represents both a threat for the infrastructure, in terms of its ability to respond to new challenges, and an opportunity to rethink its management.

Isabel Amigo and Luiz Anet Neto, telecommunications researchers at IMT Atlantique, are working with a team of researchers and PhD students to conduct research in this area. Their goal is to make optical networks smarter, more flexible and more independent from the proprietary systems imposed by vendors. A growing number of operators are moving in this direction. “At Orange, it used to be common to meet specialists in configuration syntaxes and equipment management for just one or two vendors,” explains Luiz Anet Neto, who worked for the French group for five years. “Now, teams are starting to set up a “translation layer” that turns the various configurations, which are specific to each vendor, into a common language that is more straightforward and abstract.”

This “translation layer”, on which he is working with other researchers, is called SDN, which stands for Software-Defined Networking. This model is already used in the wireless part of the network and involves offloading certain functions of network equipment. Traditionally, this equipment fulfills many missions: data processing (receiving and sending packets on to their destination), as well as a number of control tasks (routing protocols, transmission interfaces, etc.). With SDN, equipment is relieved of these control tasks, which are centralized within an “orchestrator” entity that can control several devices at once.

Read more on I’MTech: What is SDN?

There are many benefits to this approach. It provides an overview of the network, making it easier to manage, while making it possible to control all of the equipment, regardless of its vendor without having to know any proprietary language. “To understand the benefit of SDN, we can use an analogy between a personal computer and the SDN paradigm,” says Isabel Amigo. “Today, it would be unthinkable to have a computer that would only run applications that use a specific language. So, machines have an additional layer – the operating system – that is in charge of “translating” the various languages, as well as managing resources, memory, disks etc. SDN therefore aims to act like an operating system, but for the network.” Similarly, the goal is to be able to install applications that are able to work on any equipment, regardless of the hardware vendor. These applications could, for example, distribute the load based on demand.
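As a toy illustration of this “operating system for the network” idea, the sketch below shows a centralized orchestrator exposing one vendor-neutral call, with per-vendor drivers translating it into each vendor’s own configuration syntax. All class names, commands and parameters here are invented for the example; they do not correspond to any real controller or equipment.

```python
# Toy illustration of an SDN-style "translation layer" (all names are invented).

class VendorADriver:
    def apply(self, port, bandwidth_gbps):
        # Stands in for vendor A's proprietary configuration syntax.
        return f"vendorA> set-interface {port} rate {bandwidth_gbps}G"

class VendorBDriver:
    def apply(self, port, bandwidth_gbps):
        # Vendor B expresses the same operation completely differently.
        return f"vendorB# interface {port} bandwidth {int(bandwidth_gbps * 1000)}"

class Orchestrator:
    """Centralized control: one abstract intent, translated per device."""
    def __init__(self, devices):
        self.devices = devices  # {device_name: vendor-specific driver}

    def provision(self, port, bandwidth_gbps):
        return {name: driver.apply(port, bandwidth_gbps)
                for name, driver in self.devices.items()}

controller = Orchestrator({"olt-1": VendorADriver(), "olt-2": VendorBDriver()})
for device, command in controller.provision("0/1", 10).items():
    print(device, "->", command)
```

A load-balancing application, for instance, would talk only to the orchestrator’s abstract interface, never to the individual vendors’ syntaxes.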

Breaking our dependence on hardware vendors

SDN often goes hand in hand with another concept, inspired by virtualization in data centers: NFV (Network Functions Virtualization). Its principle: being able to execute any network functionality (not just control functions) on generic servers via software applications. “Usually, dedicated equipment is required for these functions,” says the IMT researcher. “For example, if you want to have a firewall, you need to buy a specific device from a vendor. With NFV, this is no longer necessary: you can implement the function on any server via an application.”
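To make the idea concrete, here is a deliberately naive sketch of such a virtualized function: a “firewall” reduced to a packet-filtering routine running as ordinary software on a generic server. The rule format and field names are invented for the example.

```python
# Toy "virtual network function": a firewall as plain software (illustrative only).

BLOCKED_PORTS = {23, 2323}   # e.g. drop legacy Telnet traffic
ALLOWED_PREFIX = "10.0."     # naive source-address check for the example

def firewall(packet):
    """Return True if the packet may pass, False if it is dropped."""
    if packet["dst_port"] in BLOCKED_PORTS:
        return False
    if not packet["src_ip"].startswith(ALLOWED_PREFIX):
        return False
    return True

packets = [
    {"src_ip": "10.0.1.5",    "dst_port": 443},
    {"src_ip": "10.0.1.5",    "dst_port": 23},
    {"src_ip": "192.168.1.9", "dst_port": 443},
]
for p in packets:
    print(p, "->", "pass" if firewall(p) else "drop")
```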

Read more on I’MTech: What is NFV?

As with SDN, the arrival of virtualization in optical networks promotes better interoperability. This makes it harder for vendors to require the use of their proprietary systems linked to their equipment. The market is also changing, by making more room for software developers. “But there is still a long way to go,” says Luiz Anet Neto. “Software providers can also try to make their customers dependent on their products, through closed systems. So operators have to remain vigilant and offer an increasing level of interoperability.”

Operators are working with the academic world precisely for this purpose. They would fully benefit from standardization, which would simplify the management of their optical networks. Laboratory tests carried out by IMT Atlantique in partnership with Orange provide them with technical information and areas to explore ahead of discussions with vendors and standardization bodies.

Sights are already set on 6G

For the research teams, there are many areas for development. First of all, the scientists are seeking to further demonstrate the value of their research, through testing focusing on a specific 5G service (up to now, the experiments have not applied to a specific application). Their aim is to establish recommendations for optical link dimensioning to connect mobile network equipment.

The goal is then to move towards smart optimization of optical networks. To provide an example of how findings by IMT Atlantique researchers may be applied, it is currently possible to add a “probe” that can determine if a path is overloaded and shift certain services to another link if necessary. The idea would then be to develop more in-depth mathematical modeling of the phenomena encountered, in order to automate incident resolution using artificial intelligence algorithms.

And it is already time for researchers to look toward the future of technology. “Mobile networks are upgraded at a dizzying pace; new generations come out every ten years,” says Luiz Anet Neto. “So we already have to be thinking about how to meet future requirements for 6G!”

Bastien Contreras


A standardized protocol to respond to the challenges of the IoT

The arrival of 5G has put the Internet of Things back in the spotlight, with the promise of an influx of connected objects in both the professional and private spheres. However, before witnessing the projected revolution, several obstacles remain. This is precisely what researchers at IMT Atlantique are working on, and they have already achieved results of global significance.

The Internet of Things (IoT) refers to the interconnection of various physical devices via the Internet for the purpose of sharing data. Sometimes referred to as the “Web 3.0”, this field is set to develop rapidly in the coming years, thanks to the arrival of new networks, such as 5G, and the proliferation of connected objects. Its applications are infinite: monitoring of health data, the connected home, autonomous cars, real-time and predictive maintenance on industrial devices, and more.

Although it is booming, the IoT still faces major challenges. “We need to respond to three main constraints: energy efficiency, interoperability and security,” explains Laurent Toutain, a researcher at IMT Atlantique. But there is one problem: these three aspects can be difficult to combine.

The three pillars of the IoT

First, energy is a key issue for the IoT. For most connected objects, the autonomy of a smartphone is not sufficient. In the future, a household may have several dozen such devices. If they each need to be recharged every two or three days, the user will have to devote several hours to this task. And what about factories that could be equipped with thousands of connected objects? In some cases, these are only of value if they have a long battery life. For example, a sensor could be used to monitor the presence of a fire extinguisher at its location and send an alert if it does not detect one. If you have to recharge its battery regularly, such an installation is no longer useful.

For a connected object, communication features account for the largest share of energy consumption. Thus, the development of IoT has been made possible by the implementation of networks, such as LoRa or Sigfox, allowing data to be sent while consuming little energy.

The second issue is interoperability, i.e. the ability of a product to work with other objects and systems, both current and future. Today, many manufacturers still rely on proprietary universes, which necessarily limits the functionalities offered by the IoT. Take the example of a user who has bought connected light bulbs from two different brands. They will not be able to control them via a single application.

Finally, the notion of security remains paramount within any connected system. This observation is all the more valid in the IoT, especially with applications involving the exchange of sensitive data, such as in the health sector. There are indeed many risks. An ill-intentioned user could intercept data during transmission, or send false information to connected objects, thus inducing wrong instructions, with potentially disastrous consequences.

Read more on I’MTech: The IoT needs dedicated security – now

On the Internet, methods are already in place to limit these threats. The most common is end-to-end data encryption. Its purpose is to make information unreadable while it is being transported, since the content can only be deciphered by the sender and receiver of the message.
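As a small illustration of what protecting a sensor reading end to end can look like, the sketch below uses AES-CCM, an authenticated cipher commonly found in constrained devices, via the Python cryptography library. It is a generic example, not the specific scheme discussed by the researchers; only holders of the shared key can read the payload or modify it without detection.

```python
from os import urandom
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# Shared secret known only to the sensor and the application server.
key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(key)

reading = b'{"sensor": "extinguisher-42", "present": true}'
nonce = urandom(13)  # must never be reused with the same key

# The sensor encrypts and authenticates the reading before transmission.
ciphertext = aesccm.encrypt(nonce, reading, None)

# Gateways and network operators in between only see opaque bytes.
# The server, holding the same key, decrypts and checks integrity in one step.
plaintext = aesccm.decrypt(nonce, ciphertext, None)
print(plaintext == reading)  # True
```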

Three contradictory requirements?

Unfortunately, each of the three characteristics can influence the others. For example, by multiplying the number of possible interlocutors, interoperability raises more security issues. But it also affects energy consumption. “Today, the Internet is a model of interoperability,” explains Laurent Toutain. “For this, it is necessary to send a large amount of information each time, with a high degree of redundancy. It offers remarkable flexibility, but it also takes up a lot of space.” This is only a minor disadvantage for a broadband network, but not for the IoT, which is constrained in its energy consumption.

Similarly, if you want to have a secure system, there are two main possibilities. The first is to close it off from the rest of the ecosystem, in order to reduce risks, which radically limits interoperability.

The second is to implement security measures, such as end-to-end encryption, which results in more data being sent, and therefore increased energy consumption.

Reducing the amount of data sent, without compromising security

For about seven years, Laurent Toutain and his teams have been working to reconcile these different constraints in the context of the IoT. “The idea is to build on what makes the current Internet so successful and adapt it to constrained environments,” says the researcher. “We are therefore taking up the principles of the encryption methods and protocols used today, such as HTTP, but taking into account the specific requirements of the IoT.”

The research team has developed a compression mechanism named SCHC (Static Context Header Compression, pronounced “chic”). It aims to improve the efficiency of encryption solutions and provide interoperability in low-power networks.

For this purpose, SCHC works on the headers of the usual Internet protocols (IP, UDP and CoAP), which contain various details: source address, destination address, location of the data to be read, etc. The particularity of this method is that it takes advantage of a specificity of the IoT: a simple connected object, such as a sensor, has far fewer functions than a smartphone. It is therefore possible to anticipate the type of data sent. “We can thus free ourselves from the redundancy of classic exchanges on the web,” says Laurent Toutain. “We then lose flexibility, which could be inconvenient for standard Internet use, but not for a sensor, which is limited in its applications.”

In this way, the team at IMT Atlantique has achieved significant results. It has managed to reduce the size of the headers traditionally sent, weighing 70-80 bytes, to only 2 bytes, and to 10 bytes in their encrypted version. “A quantity that is perfectly acceptable for a connected object and compatible with network architectures that consume very little energy,” concludes the researcher.
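The sketch below gives a highly simplified picture of this static-context idea (the rule format and field choices are invented; real SCHC, specified in RFC 8724, is considerably richer): both ends share a rule describing the header fields a given sensor always uses, so only a small rule identifier and the few varying values travel over the air.

```python
# Very simplified illustration of static context header compression (not real SCHC).

# Rule shared in advance by the device and the network: fields that never change
# are stored in this context and are simply elided on the radio link.
RULE_7 = {
    "ipv6_src":  "2001:db8::42",
    "ipv6_dst":  "2001:db8::1",
    "udp_dst":   5683,            # CoAP
    "coap_path": "/extinguisher",
}

def compress(header):
    """Send only the rule ID plus the residue (fields not fixed by the rule)."""
    residue = {k: v for k, v in header.items() if RULE_7.get(k) != v}
    return {"rule_id": 7, "residue": residue}

def decompress(packet):
    """The other end rebuilds the full header from the shared context."""
    header = dict(RULE_7)
    header.update(packet["residue"])
    return header

full_header = dict(RULE_7, udp_src=49152)        # only the source port varies here
compressed = compress(full_header)
print(compressed)                                # {'rule_id': 7, 'residue': {'udp_src': 49152}}
print(decompress(compressed) == full_header)     # True
```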

A protocol approved by the IETF

But what about that precious interoperability? With this objective, the authors of the study approached the IETF (Internet Engineering Task Force), the international organization for Internet standards. The collaboration has paid off, as SCHC has been approved by the organization and now serves as the global standard for compression. This recognition is essential, but it is only a first step towards effective interoperability. How can we now make sure that manufacturers really integrate the protocol into their connected objects? To this end, Laurent Toutain has partnered with Alexander Pelov, also a researcher at IMT Atlantique, to found the start-up Acklio. The company works directly with manufacturers and offers them solutions for integrating SCHC into their products. It thus intends to accelerate the democratization of the protocol, an effort supported in particular by €2 million in funds raised at the end of 2019.

Read more on I’MTech: Acklio, linking connected objects to the Internet

Nevertheless, manufacturers still need to be convinced that the use of a standard is also in their interest. To this end, Acklio also aims to position SCHC among the protocols used within 5G. To achieve this, it will have to prove itself with the 3GPP (3rd Generation Partnership Project), which brings together the world’s leading telecommunications standards bodies. “A much more constraining process than that of the IETF,” warns Laurent Toutain, however.

Bastien Contreras


PERFUME: a scent of cooperation for the networks of the future

The ERC PERFUME project, led by EURECOM researcher David Gesbert and ending in 2020, resulted in the development of algorithms for local decision making in the mobile network. This research was tested on autonomous drones, and is particularly relevant to the need for connected robotics in the post-5G world.

Now that 5G is here, who’s thinking about what comes next? The team working with David Gesbert, a researcher specializing in wireless communication systems at EURECOM, has just completed its ERC PERFUME project on this subject. So what will wireless networks look like by 2030? While 5G is based on the centralization of calculations in the cloud, the networks of the future will, on the contrary, be distributed. By this, we mean the emergence of a more cooperative network. “In the future, the widespread use of robotic objects and devices to perform autonomous tasks will increase the need for local decision making, which is difficult in a centralized system,” says Gesbert. Nevertheless, the objective remains the same: optimizing the quality of the network. This is especially important since the increase in connected devices may cause more interference and therefore affect the quality of the information exchanged.

Why decentralize decision making on the network?

Under 5G, every device that is connected to the network can send measurements to the cloud. The cloud has a very high computing capacity, enabling it to process an immeasurable amount of data, before sending instructions back to devices (a tablet, cell phone, drone, etc.). However, these information transfers take time, which is a very valuable commodity for connected robotics applications or critical missions. Autonomous vehicles, for example, must make instant decisions in critical situations. “In the context of real-time applications, the response speed of the network must be optimized. Decentralizing decisions closer to the base stations is precisely the solution that was studied in our PERFUME project,” explains David Gesbert. As 5G is not yet equipped to meet this constraint, we have to introduce new evolutions of the standard.

EURECOM’s researchers are thus relying on cooperation and coordination of the computing capabilities of local terminals such as our cell phones. By exchanging information, these terminals could coordinate their choice of transmission power and frequency, which would reduce the interference that limits data rates, for example. They would no longer focus solely on their local operations, but would participate in the overall improvement of the quality of the network. This team effort would manifest itself at the user level through faster file transfers or better image quality during a video call. However, although possible, this collaboration remains difficult to implement.
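One classic, much-simplified example of this kind of coordination is distributed power control: each link adjusts its own transmit power using only a local signal-to-interference measurement, and the system settles on powers that satisfy everyone. The sketch below is purely illustrative (arbitrary gains and targets) and is not one of the PERFUME algorithms.

```python
import numpy as np

# Toy distributed power control: each link updates its own power from a local
# SINR measurement only (illustrative; not the project's actual algorithms).

G = np.array([[1.0, 0.2],      # G[i, j]: gain from transmitter j to receiver i
              [0.3, 1.0]])
noise = 0.05
target_sinr = 3.0              # each link aims for roughly 4.8 dB

p = np.array([1.0, 1.0])       # initial transmit powers
for _ in range(50):
    for i in range(2):
        interference = sum(G[i, j] * p[j] for j in range(2) if j != i) + noise
        sinr = G[i, i] * p[i] / interference
        # Local rule: scale my power by how far I am from my own target.
        p[i] *= target_sinr / sinr

print(np.round(p, 3))  # powers converge when the targets are jointly feasible
```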

Towards more cooperative wireless networks

Distributed networks pose a major problem: access to information from one device to another is incomplete. “Our problem of exchanging information locally can be compared to a soccer team playing blindfolded. Each player only has access to a noisy piece of information and doesn’t know where the other team members are in their attempt to score the goal together”, says David Gesbert. Researchers then develop so-called robust decision-making algorithms. Their objective? To allow a set of connected devices to process this noisy information locally. “Networks have become too complicated to be optimized by conventional mathematical solutions, and they are teeming with data. This is why we have designed algorithms based on signal processing but also on machine learning,” continues the researcher.

These tools were then tested in a concrete 5G network context in partnership with Ericsson. “The objective was for 5G cells to coordinate on the choice of directional beams of MIMO (multi-input multi-output) antennas to reduce interference between them,” says the researcher. These smart antennas, deployed as part of 5G, are increasingly being installed on connected devices. They perform “beamforming”, which means that they direct a radio signal in a specific direction – rather than in all directions – thus improving the efficiency of the signal. These promising results have opened the door to large-scale tests on connected robotics applications, the other major focus of the ERC project. EURECOM has thus experimented with certain algorithms on autonomous drones.
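For readers unfamiliar with beamforming, the minimal sketch below shows the principle for a uniform linear array with half-wavelength spacing: phase weights derived from a steering vector concentrate the array gain toward one chosen direction. The array size and angles are arbitrary illustration values, not those of the Ericsson trial.

```python
import numpy as np

n_antennas = 8
target_deg = 30.0  # direction we want to favor

def steering_vector(angle_deg, n):
    # Relative phases seen across a half-wavelength-spaced linear array.
    phase = np.pi * np.sin(np.radians(angle_deg)) * np.arange(n)
    return np.exp(1j * phase)

# Beamforming weights: matched (conjugate) to the target user's direction.
w = steering_vector(target_deg, n_antennas).conj() / np.sqrt(n_antennas)

# Resulting array gain in a few directions: large at 30 degrees, small elsewhere.
for angle in [0.0, 30.0, 60.0]:
    gain = np.abs(w @ steering_vector(angle, n_antennas)) ** 2
    print(f"{angle:5.1f} deg: gain {gain:5.2f}")
```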

Drones at the service of the network?

Following a disaster such as an avalanche, a tsunami or an earthquake, part of the ground network infrastructure may be destroyed and access to the network may be cut off. It would then be possible to replicate a temporary network architecture on site using a fleet of drones serving as air relays. On the EURECOM campus, David Gesbert’s team has developed prototypes of autonomous drones connected to 5G. These determine their respective strategic flight positions in order to solve network access problems for users on the ground. The drones move freely and recalculate their optimal placement according to the users’ positions. This research notably received the prize for the best 2019 research project, awarded by the Provence-Alpes-Côte d’Azur region’s Secure Communicating Solutions cluster.
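A toy version of the placement problem these drones solve might look like the following: a single relay searches a grid of hover positions for the one that maximizes the weakest user’s link, assuming only free-space-like path loss. The positions, altitude and criterion are invented for the example and are far simpler than the project’s prototypes, which also adapt online.

```python
import numpy as np

# Toy relay-drone placement: pick the hover position that maximizes the
# weakest user's path gain (free-space-like loss only; illustrative).

users = np.array([[0.0, 0.0], [400.0, 50.0], [250.0, 300.0]])  # ground positions (m)
altitude = 100.0                                               # fixed drone altitude (m)

def worst_user_gain(xy):
    d2 = np.sum((users - xy) ** 2, axis=1) + altitude ** 2     # squared 3D distances
    return np.min(1.0 / d2)                                    # weakest link's path gain

# Brute-force grid search over candidate hover positions.
xs = np.linspace(0, 400, 81)
ys = np.linspace(0, 300, 61)
best = max(((x, y) for x in xs for y in ys),
           key=lambda pos: worst_user_gain(np.array(pos)))
print("best hover position (m):", best)
```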

This solution could be considered in the context of rescue missions and the geolocation of missing persons. However, several challenges need to be addressed for this method to develop. Indeed, current regulations prohibit the flight of autonomous aircraft. In addition, the drones have a flight time of about 30 minutes, which is still too short to offer sustainable solutions.

“This research is also relevant to issues relating to autonomous cars,” adds David Gesbert. “For example, when two vehicles arrive at an intersection, a coordination protocol must be put in place to ensure that the vehicles cross the intersection as quickly as possible and with the lowest likelihood of collision.” In addition, medicine and connected factories could also be targets for the application of distributed networks. As for the integration of this type of standard into the future 6G, it will depend on the interests of industrial players in the years to come.

By Anaïs Culot

Learn more about the ERC PERFUME project