
What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or limited to small groups into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics, and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that will likely, as is often the case, be imposed by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable material advances in graphics cards that offer significant display capabilities, and Virtual Reality headsets have reached a resolution equivalent to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets work on a compromise: they must offer the largest possible field of view whilst still remaining light, small and energy self-sufficient. On the other hand, graphics cards consume a lot of power and give off a lot of heat. Therefore, in order to preserve the headsets’ battery life, the calculations behind the metaverse display have to be done on remote server farms, with the rendered images then streamed back to the headset. That’s where 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to increase the user’s scope for action is to set them in motion physically. An interesting research topic is the development of multidirectional treadmills. This is a much more complicated problem than it seems, and even then it only covers the horizontal plane – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our ability to combine input from all our senses at the same time and to detect inconsistencies between them. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that the feedback can be felt without seeming strange or delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly, and safely in a virtual world will never work: beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to apply force feedback to one point of the body – not yet to the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We already know that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial part of the metaverse. The prevalence of visual disorders and the risk of Virtual Reality Sickness – a form of simulator sickness that affects many people – are therefore certain consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be naive to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make its use more widespread. Currently, each website has data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin, i.e. the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment that the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


What is bio-inspiration?

The idea of using nature as inspiration to create different types of technology has always existed, but it has been formalized through a more systematic approach since the 1990s. Frédéric Boyer, a researcher at IMT Atlantique, explains how bio-inspiration can be a source of new ideas for developing technologies and concepts, especially for robotics.

How long has bio-inspiration been around?

Frédéric Boyer: There have always been exchanges between nature and fundamental and engineering sciences. For instance, Alessandro Volta used electric fish such as electric rays as inspiration to develop the first batteries. But it’s an approach that has become more systematic and intentional since the 1990s.

How does bio-inspiration contribute to the development of new technologies?

FB: In the field of robotics, the dream has always been to make an autonomous robot that can interact appropriately with an unfamiliar environment, without putting itself or those around it in danger. In robotics we don’t really talk about intelligence – what we’re interested in is autonomy, and that’s still a long way off.

There’s a real paradigm shift underway. For a long time, intelligence was equated with computing power. Using measurements made by their various sensors, robots had to reconstruct their complex environment in a symbolic way and make decisions based on this information. Through this approach, we built machines that were extremely complex but had little autonomy or ability to adapt to different environments. Through the bio-inspiration movement, in particular, intelligence has also come to be defined in terms of the autonomy it brings to a system.

What is the principle behind a bio-inspired robot?

FB: Bio-inspired robots are not based on the perception and complex representation of their environment. They simply rely on sensors and local feedback loops that enable them to move in different environments. This type of intelligence comes from observing animals’ bodies: it’s what we call embodied intelligence. This intelligence, encoded in the body and morphology of living organisms, has been developed over the course of evolution: an animal with a very simple nervous system can interact very effectively with its environment. We practice embodied intelligence every day: with very low levels of cognition, we solve complex problems related to the autonomy of our body.

To illustrate the difference between the paradigms, we can take the example of a robot that creeps along like a snake. There are two approaches for piloting this system. The first is to place a supercomputer inside the robot, which sends a signal to each of the vertebrae and motors to drive the joints. With the second approach, there is no centralized computer: the “head” sends an impulse to the first vertebra, which spreads to the next one, then the one after that, and is automatically synchronized through feedback between the vertebrae and with the environment, via sensors.
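To make this second, decentralized approach concrete, here is a minimal sketch of a chain of coupled phase oscillators – a generic central-pattern-generator model often used for this kind of distributed locomotion control, not the actual controller of the robots discussed here. Each vertebra updates its phase using only its own state and that of its immediate neighbor, and a traveling wave emerges without any central computer.

```python
import math

# Minimal sketch of decentralized control for a snake-like robot:
# a chain of phase oscillators in which each vertebra is driven only by
# its own state and its immediate neighbor (no central computer).
# Generic illustration; all parameters are invented.

N = 10                        # number of vertebrae
OMEGA = 2 * math.pi * 1.0     # intrinsic oscillation frequency (1 Hz)
COUPLING = 4.0                # strength of the local neighbor coupling
PHASE_LAG = 2 * math.pi / N   # desired offset between segments -> traveling wave
DT = 0.01                     # integration time step (s)

phases = [0.0] * N

def step(phases):
    """Advance every vertebra one time step using only local information."""
    new_phases = []
    for i, phi in enumerate(phases):
        dphi = OMEGA
        if i > 0:  # each segment only "listens" to the one just ahead of it
            dphi += COUPLING * math.sin(phases[i - 1] - phi - PHASE_LAG)
        new_phases.append(phi + DT * dphi)
    return new_phases

for _ in range(500):          # simulate 5 seconds
    phases = step(phases)

# Joint angles sent to the motors: a wave travels down the body.
angles = [round(30 * math.sin(phi), 1) for phi in phases]  # degrees
print(angles)
```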

Read more on I’MTech: Intelligence incarnated: a bio-inspired approach in the field of robotics

What does a bio-inspired approach to developing new technology involve?

FB: It’s an approach made up of three stages. First, you have to closely observe how living beings function and choose a good candidate solution for a given problem. Then, you have to extract relevant phenomena in the functioning of the original natural system and understand them using the laws of physics and mathematical models. This is the most complex stage because we have to do a lot of sorting. Nature is extremely redundant, which allows it to adapt to changes in the environment, but in our case, only certain phenomena are sought. So, we have to sort through the information and order it using mathematical models to understand how animals solve problems. The last stage is using these models to develop new technologies. That’s also why we talk about bio-inspiration and not bio-mimicry: the goal is not to reproduce living systems, but to use them as inspiration based on functional models.

What are some examples of bio-inspired technologies?

FB: We’re working on the electric sense, inspired by fish referred to as electric fish: these animals’ electro-sensitive skin helps them perceive their environment (objects, distances, other fish, etc.) and navigate over long distances using maps that are still poorly understood. We are able to imitate this sixth sense using electrodes placed on the surface of our robots, which record the “echoes” of a field emitted by the robot and reflected by the environment.

Beyond that, one of the best-known examples is the crawling robots developed by the Massachusetts Institute of Technology (MIT). These robots are inspired by geckos: they can stick to surfaces through a multitude of microscopic adhesive forces, like those generated by the tiny hairs on geckos’ feet. The bio-inspired approach can be extended down to these nanometer scales!

Insects’ vision, the way they flap their wings to fly, and the way snakes creep along are other sources of inspiration for developing robotic technologies.

Are there other kinds of intelligence that are inspired by animals?

FB: Collective intelligence is a good example. By studying ants or bees, it is possible to make swarms of drones that can perform complex cognitive tasks without a high level of on-board intelligence. For animals that are organized in swarms, each unit has very little intelligence, but the sum of their interactions, with one another and with the environment, results in a collective intelligence. It’s also a source of study for the development of new robotic technologies.

What fields does bio-inspiration apply to?

FB: In addition to robotics, bio-inspiration provides a source of innovation for a wide range of fields. Whether in applied sciences, like aeronautics and architecture, or areas of basic research like mathematics, physics and computational sciences.

What does the future of bio-inspiration hold?

FB: We’re going to have to reinvent and produce new technologies that are not harmful to the environment. There is a philosophical revolution and paradigm shift to be achieved in terms of the relationship between man and other living things. It would be a mistake to believe that we can replace living beings with robots.

Bio-inspiration teaches us a form of wisdom and humility, because we still have a lot of work ahead of us before we can build a drone that can fly like an insect in an autonomous way, in terms of energy and decision-making. Nature is a never-ending source of inspiration, and of wonder.

By Antonin Counillon


What is digital sufficiency?

Digital consumption doubles every 5 years. This is due in particular to the growing number of digital devices and their increased use. This consumption also has an increasing impact on the environment. Digital sufficiency refers to finding the right balance for the use of digital technology in relation to the planet and its inhabitants. Fabrice Flipo, a researcher at Institut Mines-Télécom Business School and the author of the book “L’impératif de la sobriété numérique” (The Imperative of Digital Sufficiency) explains the issues relating to this sufficiency.

What observation is the concept of digital sufficiency based on?

Fabrice Flipo: On the observation of our increasing consumption of digital technology and its impacts on the environment, especially in terms of greenhouse gases. This impact comes both from the growing use of digital tools and from their manufacturing, which depends on extracting materials – a process that relies primarily on fossil fuels, and therefore on carbon. The use of these tools is also increasingly energy-intensive.

The goal is to include digital technology in discussions currently underway in other sectors, such as energy or transportation. Until recently, digital technology has been left out of these debates. This is the end of the digital exception.

How can we calculate the environmental impacts of digital technology?

FF: The government’s roadmap for digital technology primarily addresses the manufacturing of digital tools, which it indicates accounts for 75% of its impacts. According to this roadmap, the solution is to extend the lifespan of digital tools and combat planned obsolescence. But that’s not enough, especially since digital devices have proliferated in all infrastructure and their use is increasingly costly in energy. The amount of data consumed doubles every 5 years or so and the carbon footprint of the industry has doubled in 15 years.  
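As a back-of-the-envelope check on the orders of magnitude quoted above – simple compound-growth arithmetic, not additional data from the interview – a quantity that doubles every 5 years grows by roughly 15% per year, and one that doubles in 15 years by roughly 5% per year:

```python
# Implied annual growth rates behind the doubling times quoted above
# (simple compound-growth arithmetic, not additional data from the interview).

def annual_rate(doubling_years: float) -> float:
    return 2 ** (1 / doubling_years) - 1

print(f"Data volume doubling every 5 years:    {annual_rate(5):.1%} per year")   # ~14.9%
print(f"Carbon footprint doubling in 15 years: {annual_rate(15):.1%} per year")  # ~4.7%
```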

It’s hard to compare figures about digital technology because they don’t all measure the same thing. For example, what should we count in order to measure internet consumption? The number of devices, the number of individual uses, the type of uses? So standardization work is needed.

A device such as a smartphone is used for many purposes. Consumption estimations are averages based on typical use scenarios. Another standardization issue is making indicators understandable for everyone. For example, what measurements should be taken into account to evaluate environmental impact?

What are the main energy-intensive uses of digital technology?

FF: Today, video is one of the uses that consumes the most energy. What matters is the size of the files and the fact that they are transmitted between computers and across networks: every time they are transmitted, energy is consumed. Video, especially high-resolution video, requires pixels to be refreshed up to 60 times per second. The size of the files makes their transmission and processing very energy-intensive. This is also the case for artificial intelligence programs that process images and video. Autonomous vehicles are also likely to use a lot of energy in the future, since they involve huge amounts of information.
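A rough order-of-magnitude calculation shows why video weighs so much – the resolution, bit depth and bitrates below are common values assumed for illustration, not figures from the interview:

```python
# Rough orders of magnitude for video data rates (typical values assumed
# for illustration; the interview gives no specific figures).

width, height = 3840, 2160        # "4K" resolution
bits_per_pixel = 24               # 8 bits per color channel
frames_per_second = 60

raw_bps = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed 4K at 60 fps: {raw_bps / 1e9:.1f} Gbit/s")   # ~11.9 Gbit/s

# Modern codecs compress this heavily, but a streamed 4K video still
# represents on the order of 15-25 Mbit/s -- and every hop through the
# network consumes energy to move those bits.
typical_streamed_bps = 20e6
print(f"Typical streamed 4K video: {typical_streamed_bps / 1e6:.0f} Mbit/s")
```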

What are the mechanisms underlying the growth of digital technology?

FF: Big companies are investing heavily in this area. They use traditional marketing strategies: target an audience that is particularly receptive to arguments and able to pay, then gradually expand this audience and find new market opportunities. The widespread use of a device and a practice leads to a gradual phasing out of alternative physical methods. When digital technology starts to take hold in a certain area, it often ends up becoming a necessary part of our everyday lives, and is then hard to avoid. This is referred to as the “lock-in” effect. A device is first considered to be of little use, but then becomes indispensable. For example, the adoption of smartphones was largely facilitated by offers funded by charging other users, through the sale of SMS messages. This helped lower the market entry cost for the earliest adopters of smartphones and create economies of scale. Smartphones then became widespread. Now, it is hard to do without one.

How can we apply digital sufficiency to our lifestyles?

FF: Sufficiency is not simply a matter of “small acts”, but it cannot be enforced by a decree either. The idea is to bring social mindedness to our lifestyles, to regain power over the way we live. The balance of power is highly asymmetrical: on one side are the current or potential users who are scattered, and on the other are salespeople who tout only the advantages of their products and have extensive resources for research and for attracting customers. This skewed balance of power must be shifted. An important aspect is informing consumers’ choices. When we use digital devices today, we have no idea about how much energy we’re consuming or our environmental impact: we simply click. The aim is to make this information perceptible at every level, and to make it a public issue, something everyone’s concerned about. Collective intelligence must be called upon to change our lifestyles and reduce our use of digital technology, with help from laws if necessary.

For example, we could require manufacturers to obtain marketing authorization, as is required for medications. Before marketing a product or service (a new smartphone or 5G), the manufacturer or operator would have to provide figures for the social-ecological trajectory they seek to produce, through their investment strategy. This information would be widely disseminated and would allow consumers to understand what they are signing up for, collectively, when they choose 5G or a smartphone. That is what it means to be socially-minded: to realize that the isolated act of purchasing actually forms a system.

Today, this kind of analysis is carried out by certain associations or non-governmental organizations. For example, this is what The Shift Project does for free. The goal is therefore to transfer this responsibility and its cost to economic players who have far greater resources to put these kinds of analyses in place. Files including these analyses would then be submitted to impartial public organizations, who would decide whether or not a product or service may be marketed. The organizations that currently make such decisions are not impartial since they base their decisions on economic criteria and are stakeholders in the market that is seeking to expand.  

How can sufficiency be extended to a globalized digital market?  

FF: It works through a leverage effect: when a new regulation is established in one country, it helps give more weight to collectives that are dealing with the same topic in other countries. For example, when the electronic waste regulation was introduced, many institutions protested. But gradually, an increasing number of  countries have adopted this regulation.

Some argue that individual efforts suffice to improve the situation, while others think that the entire system must be changed through regulations. We must get away from such either-or reasoning and go beyond  opposing viewpoints in order to combine them. The two approaches are not exclusive and must be pursued simultaneously.

By Antonin Counillon


What is beamforming?

Beamforming is a telecommunications technology that enables the targeted delivery of larger and faster signals. The development of 5G relies in particular on beamforming. Florian Kaltenberger, researcher at EURECOM and 5G specialist, explains how this technology works.

What is beamforming?

Florian Kaltenberger: Beamforming consists of transmitting synchronized waves in the form of beams from an antenna. This makes it possible to target a precise area, unlike conventional transmission systems that emit waves in all directions. This is not a new technology; it has been used for a long time in satellite communication and for radar. But it is entering mobile telecommunications for the first time with 5G.

Why is beamforming used in 5G?

FK: The principle of 5G is to direct the wave beams directly at the users. This limits interference between the waves, provides a more reliable signal, and saves energy – three of the demands that 5G must meet. Because 5G signals have high frequencies, they can carry more information, and do so faster. This system avoids congestion in hotspots, i.e. there will be no throughput problems in places where there are many simultaneous connections. The network can also be more locally diverse: completely different services can be used on the same network at the same time.

How does network coverage work with this system?

FK: Numerous antennas are needed. There are several reasons for this. The size of the antennas is proportional to the length of the waves they generate. As the wavelength of 5G signals is smaller, so is the size of the antennas: they are only a few centimeters long. But the energy that the antennas are able to emit is also proportional to their size: a 5G antenna alone could only generate a signal with a range of about ten meters. In order to increase the range, multiple 5G antennas are assembled on base stations and positioned to target a user whenever they are in range. This allows a range of about 100 meters in all directions. So you still need many base stations to cover the network of a city. With beamforming it is possible to target multiple users in the same area at the same time, as each beam can be directed at a single user.
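The link between frequency, wavelength and antenna size can be made concrete with λ = c/f. The bands below (3.5 GHz mid-band and 26 GHz millimeter-wave) are typical 5G frequencies chosen only for illustration; the interview does not name specific bands.

```python
# Wavelength, and hence rough antenna element size, as a function of frequency.
# 3.5 GHz and 26 GHz are typical 5G bands, used here only for illustration;
# a half-wave element is roughly half a wavelength long.

C = 3e8  # speed of light, m/s

for f_ghz in (3.5, 26):
    wavelength_cm = C / (f_ghz * 1e9) * 100
    print(f"{f_ghz:>4} GHz: wavelength = {wavelength_cm:.1f} cm, "
          f"half-wave element = {wavelength_cm / 2:.1f} cm")
# 3.5 GHz -> ~8.6 cm wavelength (~4.3 cm element)
#  26 GHz -> ~1.2 cm wavelength (~0.6 cm element)
```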

How are the beams targeted to users and how are they then tracked?

FK: The user’s position signal is received by different parts of the 5G antennas. On each of these parts, there is a shift in the time of arrival of the signal, depending on the angle at which it hits the antenna. With mathematical models that incorporate these different time shifts, it is possible to locate the user and target the beam in their direction.

Then you have to track the users, and that’s more complicated. Base stations use sets of fixed beams that point at preset angles. There is a mechanism that allows the user’s device to measure the power of the received beam relative to adjacent beams. The device sends this information back to the base station, which is then able to choose the best beam.
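A toy version of these two steps – assuming a simple pair of antenna elements, free-space propagation and invented numbers, not the base station’s actual algorithms – might look like this:

```python
import math

# Toy illustration of the two mechanisms described above, with invented
# numbers (not the actual base-station algorithms).

C = 3e8           # speed of light, m/s
D = 0.005         # spacing between two antenna elements, m (5 mm)

# 1) Locating the user: the same signal reaches neighboring elements with
#    a small time shift that depends on the angle of arrival.
delta_t = 1.2e-11                                   # measured time shift, s (example)
sin_theta = max(-1.0, min(1.0, C * delta_t / D))
angle = math.degrees(math.asin(sin_theta))
print(f"Estimated angle of arrival: {angle:.0f} degrees")

# 2) Tracking the user: the device reports the power received on the current
#    beam and its neighbors; the base station keeps the strongest one.
reported_power_dbm = {-15: -92, 0: -85, 15: -88}    # preset beam angle -> power
best_beam = max(reported_power_dbm, key=reported_power_dbm.get)
print(f"Switching to the preset beam pointing at {best_beam} degrees")
```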

What are the main difficulties when it comes to implementing beamforming?

FK: Today the 5G network still cannot work without the 4G network because of the short range of the beams, which makes it effective and useful only in urban environments, and especially in hotspots. In more remote areas, 4G takes over. Beamforming cannot be used for a mobile user located several hundred meters from the antenna – let alone a few kilometers away in the countryside. Another difficulty is the movement of users as they move from one base station to another. Algorithms are being developed to anticipate these movements, which is also what we are working on at EURECOM.

Should we expect the next generation of mobile communications, 6G, to go even further than beamforming?

FK: With every generation, there is a breakthrough. For example, 3G was initially designed as a voice communication network, then all the aspects related to internet data were implemented. For 4G it was the other way around: the network was designed to carry internet data, then voice communication was implemented. The operating principle of 6G has not yet been clearly defined. There’s roughly one new generation of mobile networks every ten years, so it shouldn’t be long before the foundations of 6G are laid, and we’ll know more about the future of beamforming.

Interview by Antonin Counillon


What is life cycle analysis?

Life cycle analysis (LCA) is increasingly common, in particular for eco-design or to obtain a label. It is used to assess the environmental footprint of a product or service by taking into account as many sources as possible. In the following interview, Miguel Lopez-Ferber, a researcher in environmental assessment at IMT Mines Alès, offers insights about the benefits and complexity of this tool.

What is life cycle analysis?

Miguel Lopez-Ferber: Life cycle analysis is a tool for considering all the impacts of a product or service over the course of its life, from design to dismantling of assemblies, and possibly recycling – we also refer to this as “cradle to grave.” It’s a multi-criteria approach that is as comprehensive as possible, taking into account a wide range of environmental impacts. This tool is crucial for analyzing performance and optimizing the design of goods and services.

Are there standards?

MLF: Yes, there are European regulations and today there are standards, in particular ISO standards 14040 and 14044. The first sets out the principles and framework of the LCA. It clearly presents the four phases of a LCA study: determining the objectives and scope of the study; the inventory phase; assessing the impact, and the interpretation phase. The ISO 14044 standard specifies the requirements and guidelines.

What is LCA used for?

MLF: The main benefit is that it allows us to compare different technologies or methods to guide decision-making. It’s a tremendous tool for companies looking to improve their products or services. For example, the LCA will immediately pinpoint the components of a product with the biggest impact. Possible substitutes for this component may then be explored, while studying the impacts these changes could lead to. And the same goes for services. Another advantage of the “life cycle” view is that it takes impact transfer into account. For example, in order to lower the impact of an oven’s power consumption, we can improve its insulation. But that will require more raw material and increase the impact of production. The LCA allows us to take these aspects into account and compare the entire lifetime of a product. The LCA is a very powerful tool for quickly detecting these impact transfers.

How is this analysis carried out?

MLF: The ISO 14040 and 14044 standards clearly set out the procedure. Once the framework and objectives of the study have been identified, the inflows and outflows associated with the product or service must be determined – this is the inventory phase. These flows must then be traced back to elementary flows to and from the environment. To do so, there are growing databases, with varying degrees of ease of access, containing general or specialized information: some focus on agricultural products and their derivatives, others on plastics or electricity production. This information about flows is collected, assembled and related to a functional unit (FU), which makes it possible to make comparisons. There is also accounting software to help compile the impacts of the various stages of a product or service.

The LCA does not directly analyze the product, but its function, which makes it possible to compare very different technologies. So we define a FU that focuses on the service provided. Take two shoe designs, for example. Design A is of very high quality, so it requires more material to produce, but it lasts twice as long as Design B. Design A may have greater production impacts, but it is equivalent to two Design Bs over time. For the same service provided, Design A could ultimately have a lower impact.
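A schematic illustration of that functional-unit logic, with invented figures (the interview gives none): once impacts are expressed per year of service rather than per pair of shoes, the more durable design can come out ahead despite its higher production impact.

```python
# Functional-unit comparison with invented figures (illustration only).
# FU chosen here: "one year of regular use of a pair of shoes".

design_a = {"production_kgCO2e": 12.0, "lifetime_years": 4}  # sturdier design
design_b = {"production_kgCO2e": 8.0,  "lifetime_years": 2}  # lighter design

for name, d in (("A", design_a), ("B", design_b)):
    impact_per_fu = d["production_kgCO2e"] / d["lifetime_years"]
    print(f"Design {name}: {impact_per_fu:.1f} kgCO2e per year of use")
# Design A: 3.0 kgCO2e/year vs Design B: 4.0 kgCO2e/year ->
# per unit of service delivered, the more durable design has the lower impact.
```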

What aspects are taken into account in the LCA?

MLF: The benefit of life cycle analysis is that it has a broad scope, and therefore takes a wide range of factors into account. This includes direct as well as indirect impacts, consumption of resources such as raw material extraction, carbon footprint, and pollution released. So there is a temporal aspect, since the entire lifetime of a good or service must be studied, a geographical aspect, since several sites are taken into consideration, and the multi-criteria aspect, meaning all the environmental compartments. 

Who conducts the LCA?

MLF: When they are able to, and have the expertise to do so, companies have them done in-house. This is increasingly common. Otherwise, they can hire a consulting firm to conduct them. In any case, if the goal is to share this information with the public, the findings must be made available so that they can be reviewed, verified and validated by outside experts.

What are the current limitations of the tool?

MLF: There is the question of territoriality. For example, power consumption will not have the same impact from one country to another. In the beginning, we used global averages for LCA. We now have continental, and even national averages, but not yet regional ones. The more specific the data, the more accurate the LCA will be.  

Read more on I’MTech: The many layers of our environmental impact

Another problem is additional or further impacts. We operate under the assumption that impacts are cumulative and linear, meaning that manufacturing two pens doubles the impacts of a single pen. But this isn’t always the case. Imagine if a factory releases a certain amount of pollutants – this may be sustainable if it is alone, but not if three other companies are also doing so. After a certain level, the environmental impact may increase.  

And we’re obviously limited by our scientific knowledge. Environmental and climate impacts are complex and the data changes in response to scientific advances. We’re also starting to take social aspects into consideration, which is extremely complex but very interesting.

By Tiphaine Claveau


What is NFV (Network Function Virtualization)?

The development of 5G has been made possible through the development of new technologies. The role of Network Function Virtualization, or NFV, is to virtualize network equipment. Adlen Ksentini, a researcher at EURECOM, gives us a detailed overview of this virtualization.

 

What is NFV?

Adlen Ksentini: NFV is the virtualization of network functions, a system that service providers and network operators had hoped for in order to decouple software from hardware. It’s based on cloud computing: the software can be placed in a virtual environment – the cloud – and run on everyday, general-purpose computers. The goal is to be able to take software that implements a network function and run it on different types of hardware, instead of having to purchase dedicated hardware.

How does it work?

A.K.: It relies on the use of a hypervisor, a virtualization layer that makes it possible to abstract the hardware. The goal is to virtualize the software that implements a network function to make it run on a virtual machine or a cloud-based container.

What kind of functions are virtualized?

A.K.: When we talk about network functions, it could refer to the router that sends packets to the right destination, firewalls that protect networks, DNS servers that translate domain names into IP addresses, or intrusion detection. All of these functions will be deployed in virtual machines or containers, so that a small or medium-sized company, for example, doesn’t have to invest in infrastructure to host these services, and may instead rent them from a cloud services provider, using the Infrastructure as a Service (IaaS) model.

What are the advantages of NFV?

A.K.: NFV provides all the benefits of cloud computing. First of all, it lowers costs, since you only have to pay for the resources used. It also provides greater freedom, since the virtualization layer enables the software to work on several types of hardware. Finally, it makes it possible to react to varying levels of traffic: if there’s a sudden rise in traffic, it’s possible to scale up to meet the demand.

Performance is another factor involved. Under normal circumstances, the computer’s operating system will not dedicate all of the processor’s capacity to a single task – it will spread it out and performance may suffer. The benefit of cloud computing is that it can take advantage of the almost unlimited resources of the cloud. This also makes for greater elasticity, since resources can be freed up when they are no longer needed.

Why is this technology central to 5G?

A.K.: 5G core networks are virtualized; they run natively in the cloud. So we need software that is able to run these network functions in the cloud. NFV provides a number of advantages, and that’s why it is used for the 5G core. NFV and SDN are complementary and together make it possible to obtain a virtual network.

Read more on I’MTech: What is SDN (Software-Defined networking)?

What developments are ahead for NFV?

A.K.: Communication technologies have created a framework for orchestrating and managing virtual resources, but the standard continues to evolve and a number of studies seek to improve it. Some aim to work on the security aspect, to better defend against attacks. But we’re also increasingly hearing about using artificial intelligence to enable operators to improve resources without human intervention. That’s the idea behind Zero Touch Management: NFV networks that are self-correcting, self-managing and, of course, secure.

 

Tiphaine Claveau for I’MTech


What is SDN (Software-Defined Networking)?

5G is coming and is bringing a range of new technologies to the table, including Software-Defined Networking (SDN). An essential element of 5G, it is a network design concept based on a completely new infrastructure. Adlen Ksentini, a researcher at EURECOM, presents the inner workings of SDN.

 

How would you define SDN?

Adlen Ksentini: Software-Defined Networking (SDN) is a concept that was designed to “open up” the network, to make it programmable in order to manage its resources dynamically: on-demand routing, load distribution between equipment, intrusion detection, etc. It is an approach that allows network applications to be developed using a classic programming language, without worrying about how it will be deployed.

A central controller (or SDN controller) with overall control over the infrastructure will take care of this. This creates more innovation and productivity, but above all greater flexibility. SDN has evolved significantly in recent years to be integrated into programming networks such as 5G.

How does SDN “open up” the network?

AK: A network is arranged in the following way: there is a router, a kind of traffic agent for data packets, then a control plane that decides where those data packets go, and a forwarding plane that transmits them.

The initial aim was to separate the control plane from the data plane within the equipment, because each piece of equipment had its own configuration method. With SDN, router configuration is shared and obtained via an application sitting on top of the SDN controller. The application uses the functions offered by the SDN controller, and the SDN controller translates these functions into a configuration understood by the routers. Communication between the SDN controller and the routers is done through a standardized protocol, such as OpenFlow.
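As a toy illustration of this separation of planes – plain Python, not the OpenFlow protocol or any real controller’s API – the switch only matches packets against its flow table and asks the central controller to install a rule whenever it sees a destination it does not know:

```python
# Toy illustration of control/data plane separation (not OpenFlow itself):
# switches only match packets against flow tables; a central controller
# with a global view decides the rules and pushes them down on demand.

class Controller:
    """Central SDN controller holding a global view of the topology."""
    def __init__(self, topology):
        self.topology = topology              # (switch, destination) -> output port

    def packet_in(self, switch_name, dst):
        return self.topology[(switch_name, dst)]

class Switch:
    """Data-plane element: it only applies flow rules, it does not decide them."""
    def __init__(self, name, controller):
        self.name = name
        self.controller = controller
        self.flow_table = {}                  # match (destination) -> action (port)

    def handle_packet(self, dst):
        if dst not in self.flow_table:        # table miss -> ask the controller
            self.flow_table[dst] = self.controller.packet_in(self.name, dst)
        print(f"{self.name}: traffic for {dst} forwarded out of port "
              f"{self.flow_table[dst]}")

ctrl = Controller({("s1", "10.0.0.2"): 3, ("s1", "10.0.0.7"): 1})
s1 = Switch("s1", ctrl)
s1.handle_packet("10.0.0.2")   # first packet: the controller installs the rule
s1.handle_packet("10.0.0.2")   # later packets: handled by the data plane alone
```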

How was SDN developed?

AK: The term first appeared about ten years ago and has been widely used ever since cloud computing became commonplace. “On-demand” networks were created, with virtual machines that then needed to be interconnected. This is the purpose of the SDN controller, which links these virtual machines together, translating information coming from different services. The concept has evolved and become a core technology, making it possible to virtualize infrastructure.

Why is it an essential part of 5G?

AK: 5G is intended for use in different markets. For example, Industry 4.0 or augmented reality require a variety of network services. Industry 4.0 will require very low latency, while augmented reality will focus on high bandwidth. To manage these different types of services, 5G will use the concept of network slicing.

This consists in virtualizing an infrastructure in order to share it. SDN is the key to interconnecting these slices, as it creates the ability to allocate network resources on demand. Thanks to this flexibility, it is possible to create specific network slices for each use. This is the principle of core network virtualization that is fundamental to 5G.

How does this principle of “resources on demand” work?

AK:  Imagine a company that does not have enough resources to invest in hardware. They will rent a virtual network: a cloud service offered for example by Amazon, requesting resources defined according to their needs. It could be a laboratory that wants to run simulations but does not have the computing capacity. They would use a cloud operator who will run these simulations for them. Storage capacity, computing power, bandwidth, or latency are thus configured to best meet the needs of the company or laboratory.

Why do we talk about a new infrastructure with SDN?

AK: The shift from 3G to 4G brought improvements in throughput and bandwidth, but the underlying architecture remained basically the same. With SDN, 5G has a better infrastructure through this virtualization and can not only serve classic mobile phone users, but also open the market to industries.

SDN offers unique flexibility to develop innovative services and open the networks to new uses, such as autonomous vehicles, e-health, industry 4.0, or augmented reality. All these services have special needs and we need a network that can connect all these resources, which will certainly be virtual.

Tiphaine Claveau for I’MTech


What is tribology?

The science of friction: this is the definition of tribology. Tribology is a focal point shared by several disciplines and an important field of study for industrial production. Far from trivial, friction is a particularly complex phenomenon. Christine Boher, a tribologist at IMT Mines Albi[1], introduces us to this subject.

 

What does tribology study?

Christine Boher: The purpose of tribology is to understand what happens at the contact surface of two materials when they rub together, or are in “relative movement” as we call it. Everyone is aware of the notion of friction: rubbing your hands together to warm up, playing the guitar, skiing, braking, oiling machines – all of these involve friction. Friction induces forces that oppose the movement, resulting in damage. We try to understand these forces by studying how they manifest themselves and the consequences they have on the behavior of materials. Tribology is therefore the science of friction, wear and lubrication. The phenomenon of wear and tear may seem terribly banal, but when you look more closely, you realize how complex it is!

What scientific expertise is used in this discipline?

CB: It is a “multiscience”, because it involves many disciplines. A tribologist can be a researcher specializing in solid mechanics, fluid mechanics, materials, vibratory behavior, etc. Tribology is the conjunction of all these disciplinary fields, and this is what makes it so complex. Personally, I specialize in material sciences.

Why is friction so interesting?

CB: You first need to understand the role of friction in contact. Although it sounds intuitive, when two materials rub together, many phenomena occur: the surface temperature increases, the mechanical behavior of both parts is changed, particles are created due to wear, which have an impact on the load and sliding speed. As a result, material properties arise which would not have happened without friction. Tribology focuses on both the macrometric and micrometric aspects of the surfaces of materials in contact.

How is the behavior of a material changed by friction?

CB: Take for example the wear particles generated during friction. As they are generated, they can decrease the frictional resistance between the two bodies. They then act as a solid lubricant, and in most cases they have a rather useful, desirable effect. However, these particles can damage the materials if they are too hard. If this is the case, they will accelerate the wear. Tribologists therefore try to model how, during friction, these particles are generated and under what conditions they are produced in optimal quantities.

Another illustration is the temperature increase of parts. In some cases of high-speed friction, the temperature of the materials can rise from 20°C to 700°C in just a few minutes. The mechanical properties of the material are then completely different.

Could you illustrate an application of tribology?

CB: Take the example of a rolling mill, a large tool designed to produce sheet metal by successive reductions in thickness. There is a saying in the discipline: “no friction, no rolling”. If problems arise during friction – that is, if there are problems of contact between the surface of the sheets and that of the cylinders – the sheets will be damaged. For the automotive industry, this means body sheets damaged during the production phase, compromising surface integrity. To avoid this, we work in collaboration with the relevant manufacturers either on new metallurgical alloys or on new coatings to be applied to the rollers. The purpose of the coating is to protect the material from wear and to slow down the damage to the working surfaces as much as possible.

Who are the main beneficiaries of tribology research?

CB: We work with manufacturers in the shaping industry, such as ArcelorMittal, or Aubert & Duval. We also have partnerships with companies in the aeronautics sector, such as Ratier Figeac. Generally, we are called in by major groups or subcontractors of major industrial groups because they are interested in increasing their speeds, and this is where friction-related performance becomes important.

 

[1] Christine Boher is a researcher at the Institut Clément Ader, a joint research unit
of IMT Mines Albi/CNRS/INSA Toulouse/ISAE Supaero/University Toulouse III Paul Sabatier/Federal University Toulouse Midi-Pyrénées.


What is eco-design?

In industry, it is increasingly necessary to design products and services with concern and respect for environmental issues.  Such consideration is expressed through a practice that is gaining ground in a wide range of sectors: eco-design. Valérie Laforest, a researcher in environmental assessment and environmental engineering and organizations at Mines Saint-Étienne, explains the term.

 

What does eco-design mean?

Valérie Laforest: The principle of eco-design is to incorporate environmental considerations from the earliest stages of creating a service or product, meaning from the design stage. It’s a method governed by standards, at the national and international level, describing concepts and setting out current best practices for eco-design. We can just as well eco-design a building as we can a tee-shirt or a photocopying service.

Why this need to eco-design?

VL: There is no longer any doubt about the environmental pressure on the planet. Eco-design is one concrete way for us to think about how our actions impact the environment and consider alternatives to traditional production. Instead of producing, and then looking for solutions, it’s much more effective and efficient to ask questions from the design stage of a product to reduce or avoid the environmental impact.

What stages does eco-design apply to?

VL: In concrete terms, it’s based entirely on the life cycle of a system, from very early on in its existence. Eco-design thinking takes into account the extraction of raw materials, as well as the processing and use stages, until end of life. If we recover the product when it is no longer usable, to recycle it for example, that’s also an example of eco-design. As it stands today, end-of-life products are either sent to landfills, incinerated or recycled. Eco-design means thinking about the materials that can be used, but also thinking about how a product can be dismantled so as to be incorporated within another cycle.

When did we start hearing about this principle?

VL: The first tools arrived in the early 2000s but the concept may be older than that. Environmental issues and associated research have increased since 1990. But eco-design really emerged in a second phase when people started questioning the environmental impact of everyday things: our computer, sending an email, the difference between a polyester or cotton tee-shirt.

What eco-design tools are available for industry?

VL: The tools can fall into a number of categories. There are relatively simple ones, like check-lists or diagrams, while others are more complex. For example, there are life-cycle analysis tools to identify the environmental impacts, and software to incorporate environmental indicators in design tools. The latter require a certain degree of expertise in environmental assessment and a thorough understanding of environmental indicators. And developers and designers are not trained to use these kinds of tools.

Are there barriers to the development of this practice?

VL: There’s a real need to develop special tools for eco-design. Sure, some already exist, but they’re not really adapted to eco-design and can be hard to understand. This is part of our work as researchers, to develop new tools and methods for the environmental performance of human activities. For example, we’re working on projects with the Écoconception center, a key player in the Saint-Etienne region as well as at the national level.

In addition to tools, we also have to go visit companies to get things moving and see what’s holding them back. We have to consider how to train, change and push companies to get them to incorporate eco-design principles. It’s an entirely different way of thinking that requires an acceptance phase in order to rethink how they do things.

Is the circular economy a form of eco-design?

VL: Or is eco-design a form of the circular economy? That’s an important question, and answers vary depending on who you ask. Stakeholders who contribute to the circular economy will say that eco-design is part of this economy. And on the other side, eco-design will be seen as an initiator of the circular economy, since it provides a view of the circulation of material in order to reduce the environmental impact. What’s certain is that the two are linked.

Tiphaine Claveau for I’MTech


This article was published as part of Fondation Mines-Télécom‘s 2020 brochure series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a brochure, conference-debates, and events to promote science in conjunction with IMT, this series explores the uncertainties and challenges of the digital and environmental transitions.


 


What is a digital twin?

Digital twins, digital doubles – what exactly do these terms mean? Raksmey Phan, an engineer at the Mines Saint-Étienne Centre for Biomedical and Health Engineering (CIS)[1], talks to us about the advantages and advances offered by these new tools, as well as the issues involved.

 

What does a digital twin refer to?

Raksmey Phan: If you have a digital, mathematical model representing a real system, based on data from this real system, then you have a digital twin. Of course, the quality of the digital twin depends first and foremost on the mathematical model. Industrial ovens are a historic example that can help explain this idea.

To create a digital twin, we record information about the oven, which could include its operating hours or the temperature each time it’s used. Combined with algorithms that take into account the physical components that make up the oven, this digital twin will calculate its rate of wear and tear and anticipate breakdown risks. The use of the oven can then be monitored in real time and simulated in its future state with different use scenarios in order to plan for its replacement.
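A minimal sketch of that idea, with an invented linear wear model (a real twin would rely on physics-based models of the oven’s components): the twin ingests data recorded on the real oven and can then be run forward under different usage scenarios.

```python
# Minimal sketch of the industrial-oven example, with an invented linear
# wear model (a real digital twin would use physics-based component models).

class OvenTwin:
    WEAR_PER_HOUR = {"low": 0.8, "high": 2.5}    # assumed wear points per hour

    def __init__(self):
        self.hours = 0.0
        self.wear = 0.0                          # 0 = new, 100 = end of life

    def record_cycle(self, hours, temperature_c):
        """Feed the twin with data measured on the real oven."""
        regime = "high" if temperature_c > 250 else "low"
        self.hours += hours
        self.wear += hours * self.WEAR_PER_HOUR[regime]

    def hours_until_failure(self, planned_temperature_c):
        """Simulate a future usage scenario to plan the replacement."""
        regime = "high" if planned_temperature_c > 250 else "low"
        return max(0.0, (100 - self.wear) / self.WEAR_PER_HOUR[regime])

twin = OvenTwin()
twin.record_cycle(hours=8, temperature_c=220)    # measurements from the real oven
twin.record_cycle(hours=6, temperature_c=300)
print(f"Estimated wear: {twin.wear:.1f}%")
print(f"Remaining life if run hot: {twin.hours_until_failure(300):.0f} h")
```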

In what fields are they used?

RP: They can be used in any field where there is data to be recorded. We could say that climatologists make a digital twin of our planet: based on observational data recorded about our planet, they run simulations, and therefore mathematical models, resulting in different scenarios. To give another example, at the Mines Saint-Étienne CIS, we have scientists such as Xiaolan Xie, who are internationally renowned for their experience and expertise in the field of modeling healthcare systems. One of our current projects is a digital twin of the emergency department at Hôpital Nord de Saint-Étienne, which is located 200 meters from our center.

What advantages do digital twins offer?

RP: Let’s take the example of the digital twin of the emergency department. We’ve integrated anonymized patient pathways over a one-year period into a model of the department. In addition to this mathematical model, we receive data in what can be called ‘pseudo-real time,’ since it arrives with a lag of about one hour after patients enter the department. This makes it possible for us to do two important things. The first is to track patients’ movement through the department in pseudo-real time, using the data received and the analysis of pathway records. The second is the ability to plan ahead and predict future events. Imagine there was a bus accident in the city center. Since we know what types of injuries result from such an accident, we can visualize the impact it would have on the department and, if necessary, call in additional staff.
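A highly simplified sketch of that “what if” capability – a toy capacity calculation with invented figures, nothing like the CIS team’s actual simulation model – injects a surge of arrivals and compares the resulting backlog for different staffing levels:

```python
# Toy "what if" scenario for an emergency department, with invented figures
# (the actual CIS model is far richer). We compare the backlog over the next
# four hours with and without a sudden surge of arrivals.

TREATMENT_MIN = 30          # assumed average treatment time per patient (min)
WINDOW_MIN = 4 * 60         # planning horizon: the next four hours

def untreated_after_window(arrivals_per_hour, doctors, surge_patients=0):
    demand = (arrivals_per_hour * WINDOW_MIN / 60 + surge_patients) * TREATMENT_MIN
    capacity = doctors * WINDOW_MIN
    return max(0.0, demand - capacity) / TREATMENT_MIN

normal   = untreated_after_window(arrivals_per_hour=6, doctors=4)
accident = untreated_after_window(arrivals_per_hour=6, doctors=4, surge_patients=15)
extra    = untreated_after_window(arrivals_per_hour=6, doctors=6, surge_patients=15)

print(f"Normal afternoon:      {normal:.0f} patients still waiting after 4 h")
print(f"Bus accident:          {accident:.0f} patients still waiting after 4 h")
print(f"With additional staff: {extra:.0f} patients still waiting after 4 h")
```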

What did people do before there were digital twins?

RP: Companies and industries were already using the concept before the term existed. Since we’ve been using machines, engineers have tried to monitor tools with replicas – whether digitally or on paper. It’s a bit like artificial intelligence. The term is back in fashion but the concept goes back much further. Algorithms are mathematics, and Napoleon used algorithms for his war logistics.

When did the term digital twin first start to be used?

RP: The term ‘digital twin’ was first used in 2002 in articles by Michael Grieves, a researcher at the Florida Institute of Technology. But the concept has existed since we have been trying to model real phenomena digitally, which is to say since the early days of computing. But there has been renewed interest in digital twins in recent years due to the convergence of three scientific and technological innovations. First, the impressive growth in our ability to analyze large amounts of data — Big Data. Second, the democratization of connected sensors — the Internet of Things. And third, renewed interest for algorithms in general, as well as for cognitive sciences — Artificial Intelligence.

How have the IoT and Big Data transformed digital twins?

RP: A digital twin’s quality depends on the quantity and quality of data, as well as on its ability to analyze this data, meaning its algorithms and computing capacity. IoT devices have provided us with a huge amount of data. The development of these sensors is an important factor – production has increased while costs have decreased. The price of such technologies will continue to drop, and at the same time, they will become increasingly accurate. That means that we’ll be able to create digital twins of larger, more complex systems, with a greater degree of accuracy. We may soon be able to make a digital twin of a human being (project in the works at CIS).

Are there technological limitations to digital twins?

RP: Over the last five years, everything’s been moving faster at the technological level. It’s turned into a race for the future. We’ll develop better sensors, and we’ll have more data, and greater computing power. Digital twins will also follow these technological advances. The major limitation is sharing data – the French government was right to take steps towards Open Data, which is free data, shared for the common good. Protecting and securing data warehouses are limiting factors but are required for the technological development of digital twins. In the case of our digital twin of the hospital, this involves a political and financial decision for hospital management.

What are some of the challenges ahead?

RP: The major challenge, which is a leap into the unknown, is ethics. For example, we can assess and predict the fragility of senior citizens, but what should we do with this information after that? If an individual’s health is likely to deteriorate, we could warn them, but without help it will be hard for them to change their lifestyle. However, the information may be of interest to their insurance providers, who could support individuals by offering recommendations (appropriate physical activity, accompanied walks, etc.). This example hinges on the issues of confidentiality and anonymization of data, not to mention the issue of the patient’s informed consent.

But it’s incredible to be talking about confidentiality, anonymization and informed consent as a future challenge  — although it certainly is the case — when for the past ten years or so, a portion of the population has been publishing their personal information on social media and sharing their data with wellness applications whose data servers are often located on another continent.

[1] Raksmey Phan is a researcher at the Laboratory of Informatics, Modelling and Optimization of Systems (LIMOS), a joint research unit between Mines Saint-Étienne/CNRS/Université Clermont-Auvergne.
