New multicast solutions could significantly boost communication between cars.

Effective communication for the environments of the future

Optimizing communication is an essential aspect of preparing for the uses of tomorrow, from new modes of transport to the industries of the future. Reliable communications are a prerequisite for delivering high-quality services. Researchers from EURECOM, in partnership with the Technical University of Munich, are tackling this issue together, developing new technology aimed at improving network security and performance.


In some scenarios involving wireless communication, particularly in the context of essential public safety services or the management of vehicular networks, there is one vital question: what is the most effective way of conveying the same information to a large number of people? The tedious solution would involve repeating the same message over and over again to each individual recipient, using a dedicated channel each time. A much quicker way is what is known as multicast. This is what we use when sending an email to several people at the same time, or what happens when a news anchor reads us the news. The sender provides the information only once; the network then duplicates it and delivers it over communication channels capable of reaching all recipients.

In addition to TV news broadcasts, multicast is particularly useful for the networks of machines and objects expected to emerge with the arrival of 5G and its future applications. This is the case, for example, with vehicle networks. “In a scenario where cars are all connected to one another, there is a whole bunch of useful information that could be shared with them using multicast technology”, explains David Gesbert, head of the Communication Systems department at EURECOM. “This could be traffic information, notifications about accidents, weather updates, etc.” The issue here is that, unlike TV sets, which do not move about while we are trying to watch the news, cars are mobile.

The mobile nature of recipients means that reception conditions are not always optimal. When a car is driving through a tunnel, sitting behind a large apartment block or pulling out of a garage, it is difficult for the transmission to reach it. Despite these constraints – which affect multiple drivers at the same time – messages need to get through for the information service to operate effectively. “The transmission speed of the multicast has to be slowed down in order for it to be able to function with the car located in the worst reception scenario”, explains David Gesbert. In practice, this means that the data rate must be lowered, or more power deployed, for all users of the network. Just three cars going through a tunnel would be enough to slow down the speed at which potentially thousands of cars receive a message.

Communication through cooperation

For networks with thousands of users, it is simply not feasible to restrict the distribution characteristics in this way. In order to tackle this problem, David Gesbert and his team entered into a partnership with the Technical University of Munich (TUM) within the framework of the German-French Academy for the Industry of the Future. These researchers from France and Germany set themselves the task of devising a solution for multicast communication that would not be constrained by this “worst car” problem. “Our idea was as follows: we restrict ourselves to a small percentage of reception terminals which receive the message, but in order to offset that, we ensure that these same users are able to retransmit the message to their neighbors”, he explains. In other words: in your garage, you might not receive the message from the closest antenna, but the car out on the street 30 feet in front of your house will and will then be able to send it efficiently over a short distance.

Researchers from EURECOM and TUM were thus able to develop an algorithm capable of identifying the most suitable vehicles to target. The message is first transmitted to everyone. Depending on whether or not reception is successful, the best candidates are selected to pass on the rest of the information. Distribution is then optimized for these vehicles through the use of the MIMO technique for multipath propagation. These vehicles are then tasked with retransmitting the message to their neighbors through vehicle-to-vehicle communication. Tests carried out on these algorithms indicate a drop in network congestion in certain situations. “The algorithm doesn’t provide much out in the country, where conditions tend mostly to be good for everyone”, notes David Gesbert. “In towns and cities, on the other hand, the number of users in poor reception conditions is a handicap for conventional multicasts, and it is here that the algorithm really helps boost network performance”.
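To make the two-phase idea concrete, here is a minimal sketch of relay selection for cooperative multicast. It is not the researchers’ published algorithm: the reception model, the RSSI threshold and the data structures are illustrative assumptions only.

```python
def cooperative_multicast(vehicles, rssi_threshold_dbm=-90):
    """Toy two-phase multicast: one direct transmission, then
    vehicle-to-vehicle relaying by well-served cars.

    vehicles: list of dicts with keys "id", "rssi_dbm", "neighbours"
    (neighbours = ids of cars within short-range V2V reach).
    """
    # Phase 1: direct multicast from the base station.
    received = {v["id"] for v in vehicles if v["rssi_dbm"] >= rssi_threshold_dbm}
    missed = {v["id"] for v in vehicles} - received

    # Pick relays among cars that got the message, favouring those
    # that can cover the most neighbours still missing it.
    relays = sorted(
        (v for v in vehicles if v["id"] in received),
        key=lambda v: len(set(v["neighbours"]) & missed),
        reverse=True,
    )

    # Phase 2: short-range vehicle-to-vehicle retransmission.
    for relay in relays:
        reachable = set(relay["neighbours"]) & missed
        received |= reachable
        missed -= reachable
        if not missed:
            break
    return received, missed


# Example: the car in the garage (id 2) misses the direct transmission
# but is covered by its neighbour out on the street (id 1).
cars = [
    {"id": 1, "rssi_dbm": -70, "neighbours": [2]},
    {"id": 2, "rssi_dbm": -110, "neighbours": [1]},
    {"id": 3, "rssi_dbm": -65, "neighbours": []},
]
print(cooperative_multicast(cars))  # ({1, 2, 3}, set())
```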

The scope of these results extends beyond car networks, however. Another scenario in which the algorithm could be used is the storage of popular content, such as videos or music. “Some content is used by a large number of users. Rather than fetching it from the core network each time a request is made, this content could be stored directly on users’ mobile terminals”, explains David Gesbert. In this scenario, our smartphones would no longer need to communicate with the operator’s antenna in order to download a video, but instead with another smartphone in the area with better reception onto which the content has already been downloaded.

More reliable communication for the uses of the future

The work carried out by EURECOM and TUM on multicast technology has its roots in a broader project, SeCIF (Secure Communications for the Industry of the Future). The various industrial sectors set to benefit from the rise in communication between objects need reliable communication. Adding machine-to-machine communication to multicasts is just one of the avenues explored by the researchers. “At the same time, we have also been taking a closer look at what impact machine learning could have on the effectiveness of communication”, stresses David Gesbert.

Machine learning is breaking through into communication science, providing researchers with solutions to design problems for wireless networks. “Wireless networks have become highly heterogeneous”, explains the researcher. “It is no longer possible for us to optimize them manually because we have lost our intuition amid all this complexity”. Machine learning is capable of analyzing and extracting value from complex systems, making it possible to answer questions that would otherwise be too difficult to grasp.

For example, the French and German researchers are looking at how 5G networks are able to optimize themselves autonomously depending on network usage data. In order to do this, data on the quality of the radio channel has to be fed back from the user terminal to the decision center. This operation takes up bandwidth, with negative repercussions for the quality of calls and the transmission of data over the Internet, for example. As a result, a limit has to be placed on the quantity of information being fed back. “Machine learning enables us to study a wide range of network usage scenarios and to identify the most relevant data to feed back using as little bandwidth as possible”, explains David Gesbert. Without machine learning “there is no mathematical method capable of tackling such a complex optimization problem”.
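As a simplified illustration of the feedback bandwidth problem (not the learning method itself, which the article does not detail), a terminal could report only its few most informative channel-quality measurements rather than the full set. The numbers and the selection rule below are purely hypothetical.

```python
def compress_feedback(channel_quality, k=4):
    """Keep only the k strongest channel-quality reports, as (index, value)
    pairs, coarsely rounded, so that less feedback bandwidth is consumed.
    A learned policy would instead decide which measurements matter most
    for the network's optimization decisions."""
    ranked = sorted(enumerate(channel_quality), key=lambda iv: iv[1], reverse=True)
    return [(i, round(q, 1)) for i, q in ranked[:k]]


# Example: 16 sub-band quality indicators measured, only 4 values fed back.
reports = [3.2, 7.8, 1.1, 9.4, 2.2, 8.7, 0.5, 6.1,
           4.4, 5.0, 2.9, 7.2, 3.8, 1.9, 6.6, 0.7]
print(compress_feedback(reports))  # [(3, 9.4), (5, 8.7), (1, 7.8), (11, 7.2)]
```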

The work carried out by the German-French Academy will be vital when it comes to preparing for the uses of the future. Our cars, our towns, our homes and even our workplaces will be home to a growing number of connected objects, some of which will be mobile and autonomous. The effectiveness of communications is a prerequisite to ensuring that the new services they provide are able to operate effectively.


[box type=”success” align=”” class=”” width=””]

The research work by EURECOM and TUM on multicasting mentioned in this article was published at the International Conference on Communications (ICC), where it received the Best Paper Award in the Wireless Communications category, a highly competitive distinction in this scientific field.

[/box]


Domain name fraud: is the global internet in danger?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

In late February 2019, the Internet Corporation for Assigned Names and Numbers (ICANN), the organization that manages the IP addresses and domain names used on the web, issued a warning about the risk of systemic attacks on the Internet. Here is what you need to know about what is at stake.

What is the DNS?

The Domain Name System (DNS) links a domain name (for example, ameli.fr for French health insurance) to an IP (Internet Protocol) address (in this case, 31.15.27.86). It is now an essential service, since it makes it easy to memorize the identifiers of digital services without having to know their addresses. Yet, like many older protocols, it was designed to be robust, but not secure.


DNS defines zones within which an authority is free to create domain names and communicate them externally. The benefit of this mechanism is that the association between the IP address and the domain name is closely managed. The disadvantage is that several queries are sometimes required to resolve a name, in other words, to associate it with an address.

Many organizations that offer Internet services have one or several domain names, which are registered with registrars, the providers of this registration service. These service providers are themselves accredited, directly or indirectly, by ICANN, the American organization in charge of organizing the Internet’s naming system. In France, the reference organization is AFNIC, which manages the “.fr” domain.

We often refer to a fully qualified domain name, or FQDN. In reality, the Internet is divided into top-level domains (TLDs). The initial American domains divided names by type of organization (commercial, university, government, etc.). National domains like “.fr” quickly followed. More recently, ICANN has authorized the registration of a wide variety of new top-level domains. The information related to these top-level domains is held by a group of 13 root servers distributed around the globe to ensure reliability and speed of response.

The DNS protocol establishes communication between the user’s machine and a name server. This communication allows the name server to be queried in order to resolve a domain name, in other words, to obtain the IP address associated with it. The communication also allows other information to be obtained, such as the domain name associated with an address, or the mail server associated with a domain name in order to send an electronic message. For example, when we load a page in our browser, the browser performs a DNS resolution to find the correct address.
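A program can trigger the same kind of resolution the browser performs by asking the operating system’s resolver; a minimal Python sketch (the domain is simply the example used above):

```python
import socket

# Resolve a domain name to its IP addresses through the system's DNS
# resolver, exactly as a browser would before opening a connection.
for family, _, _, _, sockaddr in socket.getaddrinfo("ameli.fr", 443,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```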

Due to the distributed nature of the database, often the first server contacted does not know the association between the domain name and the address. It will then contact other servers to obtain a response, through an iterative or recursive process, until it has queried one of the 13 root servers. These servers form the root level of the DNS system.

To prevent a proliferation of queries, each DNS server locally stores the responses it receives, associating a domain name with an address, for a short period set by the record (its time-to-live). This cache makes it possible to respond more quickly if the same request is made again within that interval.
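The caching behavior can be pictured with a very small sketch (illustrative only; real resolvers store full records and honor the TTL carried in each DNS response):

```python
import time

class DnsCache:
    """Minimal TTL-based cache: an answer is reused until it expires."""

    def __init__(self):
        self._entries = {}

    def put(self, name, address, ttl_seconds):
        self._entries[name] = (address, time.monotonic() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None               # never resolved (or already evicted)
        address, expires_at = entry
        if time.monotonic() > expires_at:
            del self._entries[name]   # stale entry: evict and re-resolve
            return None
        return address
```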

Vulnerable protocol

DNS is a protocol that is generally allowed to pass through, especially within company networks. It can therefore allow an attacker to bypass protection mechanisms in order to communicate with compromised machines. This could, for example, allow the attacker to control networks of bots (botnets). The defensive response relies on more specific filtering of communications, for example by requiring the systematic use of a DNS relay controlled by the victim organization. Analyzing the domain names contained in DNS queries, compared against blacklists and whitelists, makes it possible to identify and block abnormal queries.
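A DNS relay of the kind described above could apply such list-based filtering with logic along these lines (a sketch only; the list entries, the length threshold and the function name are hypothetical):

```python
BLOCKLIST = {"known-c2-domain.example", "exfiltration.bad.example"}  # hypothetical
ALLOWLIST = {"ameli.fr", "example.org"}                              # hypothetical

def classify_query(domain: str) -> str:
    """Very small policy: allow known-good names, block known-bad ones,
    and flag anything unusual for closer inspection (for instance, very
    long names can hint at DNS tunnelling)."""
    name = domain.rstrip(".").lower()
    if name in ALLOWLIST:
        return "allow"
    if name in BLOCKLIST or any(name.endswith("." + bad) for bad in BLOCKLIST):
        return "block"
    if len(name) > 100:
        return "inspect"
    return "allow"

print(classify_query("www.known-c2-domain.example"))  # block
```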


The DNS protocol also makes denial-of-service attacks possible. Anyone can issue a DNS query while spoofing a source IP address. The DNS server naturally responds to that spoofed address, which in fact belongs to the victim of the attack, since it receives unwanted traffic. The DNS protocol also makes it possible to carry out amplification attacks, meaning that the volume of traffic sent from the DNS server to the victim is much greater than the traffic sent from the attacker to the DNS server. It therefore becomes easier to saturate the victim’s network link.
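The amplification effect is easy to quantify with illustrative figures (the packet sizes below are hypothetical; real values depend on the query type and on the server’s configuration):

```python
query_bytes = 60       # small query sent with a spoofed source address
response_bytes = 3000  # much larger response (e.g. a record set with many entries)

amplification = response_bytes / query_bytes
print(f"Amplification factor: {amplification:.0f}x")
# 50x: 1 Mbit/s of spoofed queries yields roughly 50 Mbit/s of traffic at the victim.
```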

The DNS service itself can also become the victim of a denial-of-service attack, as was the case for the provider Dyn in 2016. This triggered cascading failures, since certain services rely on the availability of DNS in order to function.

Protection against denial-of-service attacks can take several forms. The most commonly used today is filtering network traffic to eliminate the excess. Anycast, which replicates the attacked service across several sites, is also an increasingly common solution.

Cache Poisoning

A third vulnerability, widely exploited in the past, is to attack the link between the domain name and the IP address. This allows an attacker to usurp a server’s address and attract its traffic. The attacker can thereby “clone” a legitimate service and collect the misled users’ sensitive information: usernames, passwords, credit card details, etc. This process is relatively difficult to detect.

As mentioned above, DNS servers store the responses to the queries they have issued for a few minutes and use this information to answer subsequent queries directly. The so-called cache poisoning attack allows an attacker to falsify the association within the cache of a legitimate server. For example, an attacker can bombard an intermediate DNS server with queries while sending it forged responses; the server will accept the first response that matches its request, even if it comes from the attacker.

The consequences last only as long as the poisoned entry remains in the cache: during that time, queries made to the compromised server are diverted to an address controlled by the attacker. Since the initial protocol includes no means of verifying the domain-address association, clients cannot protect themselves against the attack.

This often results in a fragmented Internet, with clients communicating with the compromised DNS server being diverted to a malicious site, while clients communicating with other DNS servers are sent to the original site. For the original site, this attack is virtually impossible to detect, apart from a drop in traffic. This drop in traffic can have significant financial consequences for the affected organization.

Security certificates

The purpose of secure DNS (Domain Name System Security Extensions, or DNSSEC) is to prevent this type of attack by allowing the user or an intermediate server to verify the association between the domain name and the address. It is based on cryptographic signatures, comparable to the certificates used to verify the validity of a website (the little padlock that appears in the browser’s address bar). In theory, verifying the signature is all that is needed to detect an attack.
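As an illustration, the DNSSEC material published for a zone can be listed with a short script (a sketch assuming the third-party dnspython library is installed; full validation would also check the RRSIG signatures against the parent zone’s DS records, all the way up to the root):

```python
import dns.resolver  # third-party package "dnspython" (assumed installed)

zone = "afnic.fr"  # example zone; any DNSSEC-signed zone works
for rtype in ("DNSKEY", "DS"):
    try:
        answer = dns.resolver.resolve(zone, rtype)
        print(rtype, "records found:", sum(1 for _ in answer))
    except dns.resolver.NoAnswer:
        print(rtype, ": no records published (zone may not be signed)")
```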

However, this protection is not perfect. The verification process for “domain-IP address” associations remains incomplete, partly because a number of registries have not implemented the necessary infrastructure. Although the standard itself was published nearly fifteen years ago, we are still waiting for the necessary technology and structures to be deployed. The emergence of services like Let’s Encrypt has helped spread the use of certificates, which are necessary for secure browsing and DNS protection. However, the use of these technologies by registries and service providers remains uneven, and some countries are more advanced than others.

Although residual vulnerabilities do exist (such as direct attacks on registries to obtain domains and valid certificates), DNSSEC offers a solution for the type of attacks recently denounced by ICANN. These attacks rely on DNS fraud: more precisely, on the falsification of DNS records in registry databases, which means that either these registries have been compromised or they are permeable to the injection of false information. This modification of a registry’s database can be accompanied by the injection of a certificate, if the attacker has planned for it. In the worst-case scenario, this makes it possible to circumvent DNSSEC.

This modification of DNS data implies a fluctuation in the domain-IP address association data. This fluctuation can be observed and possibly trigger alerts. It is therefore difficult for an attacker to remain completely unnoticed. But since these fluctuations can occur on a regular basis, for example when a customer changes their provider, the supervisor must remain extremely vigilant in order to make the right diagnosis.

Institutions targeted

The attacks denounced by ICANN had two significant characteristics. First of all, they were active for a period of several months, which implies that the attacker was determined and well-equipped. Secondly, they effectively targeted institutional sites, which indicates that the attacker was strongly motivated. It is therefore important to take a close look at these attacks and understand the mechanisms the attackers used, in order to rectify the vulnerabilities, probably by reinforcing good practices.

ICANN’s promotion of the DNSSEC protocol raises questions. It clearly must become more widespread. However, there is no guarantee that these attacks would have been blocked by DNSSEC, nor even that they would have been more difficult to implement. Additional analysis will be required to update the status of the security threat for the protocol and the DNS database.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Networks and Telecommunication Services Department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original article (in French) has been published in The Conversation under a Creative Commons license.


Without noise, virtual images become more realistic

With increased computing capacities, computer-generated images are becoming more and more realistic. Yet generating these images is very time-consuming. Tamy Boubekeur, a specialist in 3D computer graphics at Télécom ParisTech, is on a quest to solve this problem. He and his team have developed new technology that relies on noise-reduction algorithms to save computing resources while producing high-quality images.


Have you ever been impressed by the quality of an animated film? If you are familiar with cinematic video games or short films created with computer-generated images, you probably have. If not, keep in mind that the latest Star Wars and Fantastic Beasts and Where to Find Them movies were not shot on a satellite superstructure the size of a moon or by filming real magical beasts. The sets and characters in these big-budget films were primarily created using 3D models of astonishing quality. One of the many examples of these impressive graphics is the demonstration given by the team from Unreal Engine, a video game engine, at the Game Developers Conference last March. Working in collaboration with Nvidia and ILMxLAB, they produced a fictitious Star Wars scene in which all the characters and sets were computer-generated.


To trick viewers, high-quality images are crucial. This is an area Tamy Boubekeur and his team from Télécom ParisTech specialize in. Today, most high-quality animation is produced using a specific type of computer-generated image: photorealistic computer generation using path tracing. This method begins with a 3D model of the desired scene, with the structures, objects and people. Light sources are then placed in the artificial scene: the sun outside, or lamps inside. Then paths are traced starting from the camera—what will be projected on the screen from the viewer’s vantage point—and moving towards the light source. These are the paths light takes as it is reflected off the various objects and characters in the scene. Through these reflections, the changes in the light are associated with each pixel in the image.

“This principle is based on the laws of physics and Helmholtz’s principle of reciprocity, which makes it possible to ‘trace the light’ using the virtual sensor,” Tamy Boubekeur explains. Each time the light bounces off objects in the scene, the equations governing the light’s behavior and the properties of the modeled materials and surfaces define the path’s next direction. The propagation of the modeled light therefore makes it possible to capture all the changes and optical effects that the eye perceives in real life. “Each pixel in the image is the result of hundreds or even thousands of paths of light in the simulated scene,” the researcher explains. The final color of the pixel is then generated by computing the average of the color responses from each path.
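The per-pixel averaging can be summarized in a few lines of Python (a highly simplified sketch; the stand-in path function below replaces the full light-transport simulation and is purely illustrative):

```python
import random

def render_pixel(trace_one_path, samples_per_pixel=1000):
    """Monte Carlo estimate of a pixel's color: average the color carried
    by many randomly sampled light paths through that pixel."""
    r = g = b = 0.0
    for _ in range(samples_per_pixel):
        cr, cg, cb = trace_one_path()   # one path from the camera towards the lights
        r, g, b = r + cr, g + cg, b + cb
    n = samples_per_pixel
    return (r / n, g / n, b / n)

# Stand-in for a real path tracer: returns a noisy color whose true value
# is (0.5, 0.5, 0.5). With few samples the estimate is noisy; with many,
# it converges, which is exactly why so many paths are needed.
noisy_path = lambda: tuple(0.5 + random.uniform(-0.4, 0.4) for _ in range(3))
print(render_pixel(noisy_path, samples_per_pixel=10))
print(render_pixel(noisy_path, samples_per_pixel=10000))
```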

Saving time without noise

The problem is, achieving a realistic result requires a tremendous number of paths. “Some scenes require thousands of paths per pixel and per image: it takes a week of computing to generate the image on a standard computer!” Tamy Boubekeur explains. This is simply too long and too expensive. A film contains 24 images per second, so in one year of computing a single machine would produce barely two seconds of film. Enter noise-reduction algorithms—specifically those developed by the team from Télécom ParisTech. “The point is to stop the calculations before reaching thousands of paths,” the researcher explains. “Since we have not gone far enough in the simulation process, the image still contains noise. Other algorithms are used to remove this noise.” The noise alters the sharpness of the image and depends on the type of scene, the materials, the lighting and the virtual camera.

Research on denoising has flourished since 2011. Today, many algorithms exist, based on different approaches, and competition to achieve satisfactory results is fierce. What is at stake is the programs’ capacity to reduce computation times while producing a final result free of noise. The Bayesian collaborative denoiser (BCD) technology, developed by Tamy Boubekeur’s team, is particularly effective in achieving this goal. Developed from 2014 to 2017 as part of Malik Boudiba’s thesis, the algorithms used in this technology are based on a unique approach.

Normally, noise removal methods attempt to guess the amount of noise present in a pixel based on properties in the observed scene—especially its visible geometry—in order to remove it. “We recognized that the properties of the scene being observed could not account for everything,” Tamy Boubekeur explains. “The noise also originates from areas not visible in the scene, from materials reflecting the light, the semi-transparent matter the light passes through or properties of the modeled optics inside the virtual camera.” A defocused background or a window in the foreground can create varying degrees of noise in the image. The BCD algorithm therefore only takes into account the color values associated with the hundreds of paths calculated before the simulation is stopped, just before the values are averaged into a color pixel. “Our model estimates the noise associated with a pixel based on the distribution of these values, by analyzing similarities with the properties of other pixels and removes the noise from them all at once,” the researcher explains.
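A drastically simplified sketch of that idea is shown below. It is not the published BCD algorithm (which works on per-pixel histograms of sample colors and a Bayesian similarity test); it only illustrates the principle of grouping pixels whose sample distributions look alike and denoising them together.

```python
import numpy as np

def denoise_from_samples(samples, threshold=0.05):
    """samples: array of shape (n_pixels, n_paths) holding, for each pixel,
    the values of its individual path samples (one channel, for brevity).
    Pixels whose sample distributions look similar are averaged together,
    instead of each pixel relying only on its own noisy average."""
    means = samples.mean(axis=1)
    spreads = samples.std(axis=1)
    denoised = np.empty_like(means)
    for i in range(len(means)):
        # "Similar distribution" here simply means a close mean and spread.
        alike = (np.abs(means - means[i]) < threshold) & \
                (np.abs(spreads - spreads[i]) < threshold)
        denoised[i] = means[alike].mean()
    return denoised

# Example: 100 pixels that all share the same true value 0.5, each estimated
# from only 32 noisy path samples; pooling similar pixels reduces the noise.
rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(0.0, 0.2, size=(100, 32))
print(noisy.mean(axis=1).std(), denoise_from_samples(noisy).std())
```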

A sharp image of Raving Rabbids

The BCD technology was developed as part of the PAPAYA project, launched under the French National Fund for the Digital Society. The project was led in partnership with Ubisoft Motion Pictures to define the key challenges in terms of noise reduction for professional animation. The company was impressed by the BCD algorithms and integrated them into its graphics production engine, Shining, which it then used to produce its animated series, Raving Rabbids. “They liked that our algorithms work with any type of scene, and that the technology is integrated without causing any interference,” Tamy Boubekeur explains. The BCD denoiser does not require any changes in image calculation methods and can easily be integrated into systems and teams that already have well-established tools.

The source code for the technology has been published as open source on GitHub. It is freely available, particularly for animation professionals who prefer open technology over more rigid proprietary tools. An update to the code adds an interactive preview module that allows users to adjust the algorithm’s parameters, making it easier to optimize computing resources.

The BCD technology has therefore proven its worth and has now been integrated into several rendering engines. It offers access to high-quality image synthesis, even for those with limited resources. Tamy Boubekeur reminds us that a film like Disney’s Big Hero 6 contains approximately 120,000 images and required some 200 million hours of computing time and thousands of processors to be produced in a reasonable timeframe. For students and amateur artists, such technical resources are out of reach. Algorithms like those used in the BCD technology offer them the hope of more easily producing very high-quality films. And the team from Télécom ParisTech is continuing its research to reduce the required computing time even further. Their objective: to develop new methods for distributing light simulation calculations across several low-capacity machines.

[divider style=”normal” top=”20″ bottom=”20″]

Illustration of BCD denoising a scene, before and after implementing the algorithm