

Interference: a source of telecommunications problems

The growing number of connected objects is set to cause a corresponding increase in interference, a phenomenon that has been an issue since the birth of telecommunications. In the past decade, more and more research has been undertaken in this area, leading us to revisit the way in which devices handle interference.

“Throughout the history of telecommunications, we have observed an increase in the quantities of information being exchanged,” states Laurent Clavier, telecommunications researcher at IMT Nord Europe. “This phenomenon can be explained by network densification in particular,” adds the researcher. The increase in the amount of data circulating is paired with a rise in interference, which represents a problem for network operations.

To understand what interference is, first, we need to understand what a receiver is. In the field of telecommunications, a receiver is a device that converts a signal into usable information — like an electromagnetic wave into a voice. Sometimes, undesired signals disrupt the functioning of a receiver and damage the communication between several devices. This phenomenon is known as interference and the undesired signal, noise. It can cause voice distortion during a telephone call, for example.

Interference occurs when multiple machines use the same frequency band at the same time. To avoid interference, receivers choose which signals they pick up and which they drop. While telephone networks are organized to avoid two smartphones interfering with each other, this is not the case for the Internet of Things, where interference is becoming critical.

Read on I’MTech: Better network-sharing with NOMA

Different kinds of noise causing interference

With the boom in the number of connected devices, the amount of interference will increase and cause the network to deteriorate. By improving machine receivers, it appears possible to mitigate this damage. Most connected devices are equipped with receivers adapted for Gaussian noise. These receivers make the best decisions possible as long as the signal received is powerful enough.

By studying how interference occurs, scientists have understood that it does not follow a Gaussian model, but rather an impulsive one. “Generally, very few objects are transmitting at the same time as ours and close to our receiver,” explains Clavier. “Distant devices generate weak interference, whereas nearby devices generate strong interference: this is the phenomenon that characterizes impulsive interference,” he specifies.
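
To give a rough idea of the difference, the short sketch below compares samples of Gaussian noise with an impulsive model built as a simple Gaussian mixture, in which rare “nearby device” events produce much stronger samples. The mixture model and its parameters are illustrative assumptions, not the model used by the researchers.

```python
# A minimal sketch (illustrative assumption): Gaussian noise versus a simple
# Bernoulli-Gaussian mixture in which rare "close interferer" events produce
# samples with a much larger variance, i.e. impulsive noise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

gaussian = rng.normal(0.0, 1.0, n)

# Impulsive noise: 95% of samples are weak (distant devices),
# 5% are strong impulses (a device transmitting nearby).
impulsive = rng.normal(0.0, 1.0, n)
impulse = rng.random(n) < 0.05
impulsive[impulse] = rng.normal(0.0, 10.0, impulse.sum())

for name, x in [("Gaussian", gaussian), ("Impulsive", impulsive)]:
    print(f"{name:9s} std={x.std():5.2f}  max|x|={np.abs(x).max():6.1f}  "
          f"P(|x| > 4)={np.mean(np.abs(x) > 4):.4f}")
```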

Reception strategies implemented for Gaussian noise do not account for the presence of these strong noise values. They are therefore easily misled by impulsive noise, with receivers no longer able to recover the useful information. “By designing receivers capable of processing the different kinds of interference that occur in real life, the network will be more robust and able to host more devices,” adds the researcher.

Adaptable receivers

For a receiver to be able to understand Gaussian and non-Gaussian noise, it needs to be able to identify its environment. If a device receives a signal that it wishes to decode while the signal of another nearby device is generating interference, it will use an impulsive model to deal with the interference and decode the useful signal properly. If it is in an environment in which the devices are all relatively far away, it will analyze the interference with a Gaussian model.

To correctly decode a message, the receiver must adapt its decision-making rule to the context. To do so, Clavier indicates that a “receiver may be equipped with mechanisms that allow it to calculate the level of trust in the data it receives in a way that is adapted to the properties of the noise. It will therefore be capable of adapting to both Gaussian and impulsive noise.” This method, used by the researcher to design receivers, means that the machine does not have to automatically know its environment.
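
As a purely illustrative sketch, and not Clavier’s actual receiver design, the snippet below compares the “level of trust”, formally the log-likelihood ratio, that a detector assigns to a received BPSK sample under a Gaussian noise model and under a heavy-tailed Cauchy model sometimes used as a stand-in for impulsive noise. The Gaussian rule trusts extreme samples enormously, while the heavy-tailed rule treats them as probable impulses.

```python
# A minimal sketch (illustration only): log-likelihood ratios for a BPSK
# symbol (+1 or -1) under a Gaussian noise model and under a heavy-tailed
# Cauchy model used here as a proxy for impulsive noise.
import numpy as np

def llr_gaussian(y, sigma=1.0):
    # Grows linearly with y: a huge received sample is given huge confidence.
    return 2.0 * y / sigma**2

def llr_cauchy(y, gamma=1.0):
    # Saturates for large |y|: an extreme sample is treated as a likely impulse.
    return np.log((gamma**2 + (y + 1.0)**2) / (gamma**2 + (y - 1.0)**2))

for y in [0.5, 2.0, 20.0]:
    print(f"y={y:5.1f}  Gaussian LLR={llr_gaussian(y):7.1f}  "
          f"Cauchy LLR={llr_cauchy(y):5.2f}")
```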

Currently, industrial actors are not particularly concerned with the nature of interference. However, they are interested in the means available to avoid it. In other words, they do not see the usefulness of questioning the Gaussian model and undertaking research into the way in which interference is produced. For Clavier, this lack of interest will be temporary, and “in time, we will realize that we will need to use this kind of receiver in devices,” he notes. “From then on, engineers will probably start to include these devices more and more in the tools they develop,” the researcher hopes.

Rémy Fauvel

David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since he completed his studies in 1997, Gesbert has been working on MIMO, a telecommunications system that was created in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).
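
The principle can be sketched in a few lines of code: two symbol streams are transmitted at the same time over a 2×2 channel and separated at the receiver. The fixed channel matrix and the zero-forcing detector used here are assumptions made for this illustration, not a description of Gesbert’s work.

```python
# A minimal sketch of MIMO spatial multiplexing (illustration only): two QPSK
# streams are sent simultaneously from two antennas and separated at the
# receiver with a simple zero-forcing detector.
import numpy as np

rng = np.random.default_rng(1)
n_symbols = 4

# Two independent QPSK symbol streams, one per transmit antenna.
bits = rng.integers(0, 2, (2, 2, n_symbols))
x = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)   # shape (2, n_symbols)

# A made-up, well-conditioned 2x2 channel plus a little receiver noise.
H = np.array([[1.0 + 0.2j, 0.4 - 0.3j],
              [0.3 + 0.5j, 0.9 - 0.1j]])
noise = 0.05 * (rng.normal(size=(2, n_symbols)) + 1j * rng.normal(size=(2, n_symbols)))
y = H @ x + noise

# Zero-forcing: invert the channel to separate the two spatial streams.
x_hat = np.linalg.pinv(H) @ y
detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print("both streams recovered without symbol errors:", np.allclose(detected, x))
```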

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.
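
A minimal sketch of the OFDM idea, with made-up parameters (64 subcarriers, a 16-sample cyclic prefix): a block of symbols is spread across parallel subcarriers with an inverse FFT, and the cyclic prefix protects the block against multipath echoes.

```python
# A minimal OFDM sketch (illustration only): one block of QPSK symbols is
# spread over 64 parallel low-rate subcarriers with an inverse FFT, and a
# cyclic prefix is prepended to absorb multipath echoes.
import numpy as np

rng = np.random.default_rng(2)
n_subcarriers, cp_len = 64, 16

# One high-rate block becomes 64 parallel low-rate QPSK symbols.
bits = rng.integers(0, 2, (2, n_subcarriers))
symbols = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# Transmitter: the IFFT maps the subcarriers to a time-domain waveform, then the CP is added.
time_signal = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)
tx = np.concatenate([time_signal[-cp_len:], time_signal])

# Receiver: drop the CP and apply an FFT to recover the subcarrier symbols.
rx = np.fft.fft(tx[cp_len:]) / np.sqrt(n_subcarriers)
print("subcarrier symbols recovered:", np.allclose(rx, symbols))
```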

In 2001, Gesbert moved to Norway, where he taught for two years as an adjunct professor in the IT department at the University of Oslo. One year later, he published an article showing that complex propagation environments favor the performance of MIMO. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment in which there are many buildings, cars, and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.
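
The effect can be illustrated with the standard log-det MIMO capacity formula, assuming equal power across the transmit antennas: a full-rank “rich scattering” channel supports several parallel streams, while a rank-one channel with a single dominant path does not. The matrices below are randomly generated for illustration only.

```python
# A sketch using the standard equal-power MIMO capacity formula
# C = log2 det(I + (SNR / n_tx) * H H^H), comparing a rich-scattering 4x4
# channel with a rank-one "single dominant path" channel (illustration only).
import numpy as np

rng = np.random.default_rng(3)
n, snr = 4, 10.0   # 4x4 MIMO, SNR of 10 (about 10 dB)

def capacity(H, snr):
    n_rx, n_tx = H.shape
    M = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
    return np.linalg.slogdet(M)[1] / np.log(2)   # bits/s/Hz

# Rich scattering: independent fading between every antenna pair (full rank).
H_rich = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

# Poor scattering: a single dominant path, so the channel matrix has rank one.
a = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
b = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
H_single_path = a @ b.conj().T / np.sqrt(2 * n)

print("rich scattering :", round(capacity(H_rich, snr), 1), "bit/s/Hz")
print("single path     :", round(capacity(H_single_path, snr), 1), "bit/s/Hz")
```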

In 2003, he joined EURECOM, where he became a professor and, five years later, head of the Mobile Communications department. There, he has continued his work aiming to improve MIMO. His research has shown him that base stations (also known as relay antennas) could be useful to improve the performance of this mechanism. By using antennas from multiple relay stations far apart from each other, it would be possible to make them work together and produce a giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still underway to make this mechanism usable in practice.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, which takes its name from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to place themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has occurred and the network infrastructure has been destroyed.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes carried out by the receivers or base stations that are meant to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct the flaws of those with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that in order to cooperate, agents like radio receivers or relay stations make decisions based on their own data, which sometimes has to be set aside to let themselves be guided by decisions from agents with access to better information than them. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient: reducing the number of transmissions, restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.

Rémy Fauvel


Facebook: a small update causes major disruption

Hervé Debar, Télécom SudParis – Institut Mines-Télécom

Late on October 4, many users of Facebook, Instagram and WhatsApp were unable to access their accounts. All of these platforms belong to the company Facebook and were all affected by the same type of error: an accidental and erroneous update to the routing information for Facebook’s servers.

The internet employs several different technologies, two of which were involved in yesterday’s incident: BGP (border gateway protocol) and DNS (domain name system).

In order to communicate, each machine must have an IP address. Online communication involves linking two IP addresses together. The contents of each communication are broken down into packets, which are exchanged by the network between a source and a destination.

How BGP (border gateway protocol) works

The internet is made up of dozens of “autonomous systems”, or AS, some very large and others very small. Some AS are interconnected via exchange points, enabling them to exchange data. Each of these systems is made up of a network of routers, which are connected by either optical or electrical communication links. Online communications travel over these links, with routers responsible for transferring them from one link to the next in accordance with routing rules. Each AS is connected to at least one other AS, and often to several at once.

When a user connects their machine to the internet, they generally do so via an internet service provider or ISP. These ISPs are themselves “autonomous systems”, with address ranges which they allocate to each of their clients’ machines. Each router receiving a packet will analyse both the source and the destination address before deciding to transfer the packet to the next link, following their routing rules.

In order to populate these routing rules, each autonomous system shares information with other autonomous systems describing how to associate a range of addresses in their possession with an autonomous system path. This is done step by step through the use of the BGP or border gateway protocol, ensuring each router has the information it needs to transfer a packet.
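
The toy sketch below gives a flavour of this step-by-step propagation: an origin autonomous system announces a prefix, and each neighbour records the shortest AS path it hears before passing the announcement on. Real BGP involves routing policies and richer best-path rules; the AS numbers here come from the private-use range and are purely illustrative.

```python
# A toy sketch of BGP-style route propagation (heavily simplified: no policies,
# no best-path tie-breaking, just shortest-AS-path flooding). The AS numbers
# are from the private-use range and the topology is made up.
from collections import deque

links = {64500: [64501, 64502], 64501: [64500, 64503],
         64502: [64500, 64503], 64503: [64501, 64502]}

def propagate(origin_as, prefix):
    """Flood an announcement; each AS keeps the shortest AS path it hears."""
    tables = {origin_as: {prefix: [origin_as]}}
    queue = deque([origin_as])
    while queue:
        asn = queue.popleft()
        path = tables[asn][prefix]
        for neighbor in links[asn]:
            if neighbor in path:                 # loop prevention, as in real BGP
                continue
            candidate = [neighbor] + path
            best = tables.get(neighbor, {}).get(prefix)
            if best is None or len(candidate) < len(best):
                tables.setdefault(neighbor, {})[prefix] = candidate
                queue.append(neighbor)
    return tables

prefix = "157.240.0.0/16"                        # an illustrative prefix
for asn, table in propagate(64503, prefix).items():
    print(f"AS{asn} reaches {prefix} via AS path {table[prefix]}")
```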

DNS (domain name system)

The domain name system was devised because raw IP addresses mean very little to end users. For servers available on the internet, it links a name such as “facebook.com” with the IP address “157.240.196.35”.

Each holder of a domain name sets up (or delegates) a DNS server, which links domain names to IP addresses. They are considered to be the most reliable source (or authority) for DNS information, but are also often the first cause of an outage – if the machine is unable to resolve a name (i.e. to connect the name requested by the user to an address), then the end user will be sent an error message.
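
The resolution step itself can be illustrated with Python’s standard resolver. When the authoritative servers cannot be reached, as happened during the outage described below, this is roughly the failure that the end user’s device runs into.

```python
# A minimal illustration using Python's standard resolver: this lookup is what
# fails for the end user when the authoritative DNS servers cannot be reached.
import socket

try:
    infos = socket.getaddrinfo("facebook.com", 443, proto=socket.IPPROTO_TCP)
    print("resolved addresses:", sorted({info[4][0] for info in infos}))
except socket.gaierror as exc:
    # During the outage, users' devices ended up here: the name
    # could not be resolved to an IP address.
    print("resolution failed:", exc)
```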

Each major internet operator – not just Facebook, but also Google, Netflix, Orange, OVH, etc. – has one or more autonomous systems and coordinates the respective BGP in conjunction with their peers. They also each have one or more DNS servers, which act as an authority over their domains.

The outage

Towards the end of the morning of October 4, Facebook made a modification to its BGP configuration which it then shared with the autonomous systems it is connected to. This modification resulted in all of the routes leading to Facebook disappearing, across the entire internet.

Ongoing communications with Facebook’s servers were interrupted as a result, as the deletion of the routes spread from one autonomous system to the next, since the routers were no longer able to transfer packets.

As a result of the BGP error, the DNS servers of ISPs could no longer contact Facebook’s authoritative servers. The most visible consequence for users was therefore an interruption to the DNS and an error message.

This outage also caused major disruption on Facebook’s end, as it rendered remote access, and therefore teleworking, impossible. Because Facebook employees use these same tools to communicate with each other, they found themselves unable to do so, and repairs had to be carried out on site at the data centres. With building security also online, access proved more complex than first thought.

Finally, with the domain name “facebook.com” no longer referenced, it was identified as free by a number of specialist sites for the duration of the outage, and was even put up for auction.

Impact on users

Facebook users were unable to access any information for the duration of the outage. Facebook has become vitally important for many communities of users, with both professionals and students using it to communicate via private groups. During the outage, these users were unable to continue working as normal.

Facebook is also an identity provider for many online services, enabling “single sign-on”, which involves users reusing their Facebook accounts in order to access services offered by other platforms. Unable to access Facebook, users were forced to use other login details (which they may have forgotten) in order to gain access.

Throughout the outage, users continued to request access to Facebook, leading to an increase in the number of DNS requests made online and a temporary but very much visible overload of DNS activity worldwide.

This outage demonstrated the critical role played by online services in our daily lives, while also illustrating just how fragile these services still are and how difficult it can be to control them. As a consequence, we must now look for these services to be operated with the same level of professionalism and care as other critical services.

Banking, for example, now takes place almost entirely online. A breakdown like the one that affected Facebook is less likely to happen to a bank, given the standards and regulations in place for banking, such as the Network and Information Security (NIS) Directive, the General Data Protection Regulation (GDPR) or PCI-DSS.

In contrast, Facebook writes its own rules and is partially able to evade regulations such as the GDPR. Introducing service obligations for these major platforms could improve service quality. It is worth pointing out that no bank operates a network as impressive as Facebook’s infrastructure, the size of which exacerbates any operating errors.

More generally, after several years of research and standardisation, security mechanisms for BGP and DNS are now being deployed, the aim being to prevent attacks which could have a similar impact. The deployment of these security mechanisms will need to be accelerated in order to make the internet more reliable.

Hervé Debar, Director of Research and PhDs, Deputy director, Télécom SudParis – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons licence. Read the original article.


The virtualization of optical networks to support… 5G

Mobile networks are not entirely wireless. They also rely on a network of optical fibers, which connect antennas to the core network, among other things. With the arrival of 5G, optical networks must be able to keep up with the ramping up of the rest of the mobile network to ensure the promised quality of service. Two IMT Atlantique researchers are working on this issue, by making optical networks smarter and more flexible.  

In discussions of issues surrounding 5G, it is common to hear about the installation of a large number of antennas or the need for compatible devices. But we often overlook a crucial aspect of mobile networks: the fiber optic infrastructure on which they rely. Like previous generations, 5G relies on a wired connection in most cases, and optical fiber is used right down to the “last mile”. It makes it possible to connect antennas to core network equipment, which is linked to most of the connected machines around the world, and it can also connect various devices within the same antenna site.

In reality, 5G is even more dependent on this infrastructure than previous generations, since the next-generation technology comes with new requirements related to new uses, such as the Internet of Things (IoT). For example, an application like an autonomous car requires high availability, perfect reliability, very low latency and so on. All of these constraints weigh on the overall architecture, which includes fiber optics. If the optical networks cannot adapt to new demands within the last mile, the promises of 5G will be jeopardized. And new services (industry 4.0, connected cities, telesurgery, etc.) will simply not be able to be provided in a reliable, secure way.

Facilitating network management through better interoperability

Today, optical networks are usually over-provisioned in relation to current average throughput needs. They are designed to be able to absorb 4G peak loads and are neither optimized, nor able to adapt intelligently to fluctuating demand. The new reality created by 5G therefore represents both a threat for infrastructure, in terms of its ability to respond to new challenges, and an opportunity to rethink its management.

Isabel Amigo and Luiz Anet Neto, telecommunications researchers at IMT Atlantique, are working with a team of researchers and PhD students to conduct research in this area. Their goal is to make optical networks smarter, more flexible and more independent from the proprietary systems imposed by vendors. A growing number of operators are moving in this direction. “At Orange, it used to be common to meet specialists in configuration syntaxes and equipment management for just one or two vendors,” explains Luiz Anet Neto, who worked for the French group for five years. “Now, teams are starting to set up a “translation layer” that turns the various configurations, which are specific to each vendor, into a common language that is more straightforward and abstract.”

This “translation layer”, on which he is working with other researchers, is called SDN, which stands for Software-Defined Networking. This model is already used in the wireless part of the network and involves offloading certain functions of network equipment. Traditionally, this equipment fulfills many missions: data processing (receiving packets and forwarding them to their destination), as well as a number of control tasks (routing protocols, transmission interfaces, etc.). With SDN, equipment is relieved of these control tasks, which are centralized within an “orchestrator” entity that can control several devices at once.

Read more on I’MTech: What is SDN?

There are many benefits to this approach. It provides an overview of the network, making it easier to manage, while making it possible to control all of the equipment, regardless of its vendor without having to know any proprietary language. “To understand the benefit of SDN, we can use an analogy between a personal computer and the SDN paradigm,” says Isabel Amigo. “Today, it would be unthinkable to have a computer that would only run applications that use a specific language. So, machines have an additional layer – the operating system – that is in charge of “translating” the various languages, as well as managing resources, memory, disks etc. SDN therefore aims to act like an operating system, but for the network.” Similarly, the goal is to be able to install applications that are able to work on any equipment, regardless of the hardware vendor. These applications could, for example, distribute the load based on demand.
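
The toy model below illustrates this split without mimicking any real controller’s API: the switches only apply forwarding rules, while a central controller with a global view of the topology computes those rules and installs them on every switch.

```python
# A toy model of the SDN split (not any real controller's API): switches only
# apply forwarding rules; a central controller with a global view of the
# topology computes the rules and pushes them to every switch.
from collections import deque

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination -> next hop

    def install_rule(self, destination, next_hop):
        self.flow_table[destination] = next_hop

class Controller:
    def __init__(self, topology):
        self.topology = topology      # switch name -> list of neighbors
        self.switches = {name: Switch(name) for name in topology}

    def compute_routes(self, destination):
        # A BFS from the destination gives each switch its next hop towards it.
        parent, queue = {destination: None}, deque([destination])
        while queue:
            node = queue.popleft()
            for neighbor in self.topology[node]:
                if neighbor not in parent:
                    parent[neighbor] = node
                    queue.append(neighbor)
        for name, next_hop in parent.items():
            if next_hop is not None:
                self.switches[name].install_rule(destination, next_hop)

topology = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
            "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
controller = Controller(topology)
controller.compute_routes("s4")
print({name: sw.flow_table for name, sw in controller.switches.items()})
```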

Breaking our dependence on hardware vendors

SDN often goes hand in hand with another concept, inspired by virtualization in data centers: NFV (Network Functions Virtualization). Its principle: being able to execute any network function (not just control functions) on generic servers via software applications. “Usually, dedicated equipment is required for these functions,” says the IMT researcher. “For example, if you want to have a firewall, you need to buy a specific device from a vendor. With NFV, this is no longer necessary: you can implement the function on any server via an application.”

Read more on I’MTech: What is NFV?
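
The idea can be illustrated with a deliberately trivial example: the “firewall” below is ordinary software, so it can run on any generic server or virtual machine instead of a dedicated appliance. The addresses and rules are made up for illustration.

```python
# A toy illustration of NFV (not a production firewall): the network function
# is just software, so it can run on any generic server or virtual machine.
# All addresses come from documentation ranges and are made up.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

def firewall_vnf(packet,
                 blocked_ports=frozenset({23}),
                 blocked_sources=frozenset({"203.0.113.7"})):
    """Return True if the packet should be forwarded, False if dropped."""
    return packet.dst_port not in blocked_ports and packet.src_ip not in blocked_sources

traffic = [Packet("198.51.100.10", "10.0.0.5", 443),
           Packet("203.0.113.7", "10.0.0.5", 443),   # blocked source
           Packet("198.51.100.10", "10.0.0.5", 23)]  # blocked port (telnet)
for pkt in traffic:
    print(pkt, "->", "forward" if firewall_vnf(pkt) else "drop")
```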

As with SDN, the arrival of virtualization in optical networks promotes better interoperability. This makes it harder for vendors to require the use of their proprietary systems linked to their equipment. The market is also changing, by making more room for software developers. “But there is still a long way to go,” says Luiz Anet Neto. “Software providers can also try to make their customers dependent on their products, through closed systems. So operators have to remain vigilant and offer an increasing level of interoperability.”

Operators are working with the academic world precisely for this purpose. They would fully benefit from standardization, which would simplify the management of their optical networks. Laboratory tests carried out by IMT Atlantique in partnership with Orange provide them with technical information and areas to explore ahead of discussions with vendors and standardization bodies.

Sights are already set on 6G

For the research teams, there are many areas for development. First of all, the scientists are seeking to further demonstrate the value of their research, through testing focusing on a specific 5G service (up to now, the experiments have not applied to a specific application). Their aim is to establish recommendations for optical link dimensioning to connect mobile network equipment.

The goal is then to move towards smart optimization of optical networks. To provide an example of how findings by IMT Atlantique researchers may be applied, it is currently possible to add a “probe” that can determine if a path is overloaded and shift certain services to another link if necessary. The idea would then be to develop more in-depth mathematical modeling of the phenomena encountered, in order to automate incident resolution using artificial intelligence algorithms.
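
As a rough sketch of that probe-and-shift logic, with made-up link names, loads and thresholds, the snippet below moves services off an overloaded optical link onto the least-loaded alternative; the researchers’ longer-term goal is precisely to replace such hand-written rules with mathematical models and AI-driven automation.

```python
# A toy sketch of the "probe" idea (names, loads and thresholds are made up):
# if a link's measured load exceeds a threshold, shift services one by one
# onto the least-loaded alternative link.
LOAD_THRESHOLD = 0.8        # fraction of capacity considered "overloaded"
PER_SERVICE_LOAD = 0.1      # crude assumption: each service uses ~10% of a link

links = {"link_A": {"load": 0.92, "services": ["fronthaul_1", "iot_backhaul"]},
         "link_B": {"load": 0.35, "services": ["fronthaul_2"]}}

def rebalance(links):
    for name, link in links.items():
        while link["load"] > LOAD_THRESHOLD and link["services"]:
            # Pick the currently least-loaded other link as the target.
            target = min((n for n in links if n != name),
                         key=lambda n: links[n]["load"])
            if links[target]["load"] + PER_SERVICE_LOAD > LOAD_THRESHOLD:
                break                              # nowhere left to shift to
            service = link["services"].pop()
            links[target]["services"].append(service)
            link["load"] -= PER_SERVICE_LOAD
            links[target]["load"] += PER_SERVICE_LOAD
    return links

print(rebalance(links))
```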

And it is already time for researchers to look toward the future of technology. “Mobile networks are upgraded at a dizzying pace; new generations come out every ten years,” says Luiz Anet Neto. “So we already have to be thinking about how to meet future requirements for 6G!”

Bastien Contreras