
5G Will Also Consume Less Energy

5G is often presented as a faster technology, as it will need to support broadband mobile usage as well as communication between connected objects. But it will also have to use less energy in order to find its place within the current context of environmental transition. This is the goal of the ANR Trimaran and Spatial Modulation projects, led by Orange in association with IMT Atlantique and other academic partners.

 

Although it will not become a reality until around 2020, 5G is already being very actively researched, and scientists and economic stakeholders are buzzing about this fifth-generation mobile telephony technology. One of the researchers’ goals is to reduce the energy consumption of 5G communication. The stakes are high, as the development of this technology aims to be coherent with the general context of energy and environmental transition. In 2015, the Next Generation Mobile Networks (NGMN) alliance estimated that “in the next ten years, 5G will have to support a one thousand-fold increase in data traffic, with lower energy consumption than today’s networks.” This is a huge challenge: carrying a thousand times the traffic on half the energy amounts to multiplying the energy efficiency of mobile networks by a factor of 2,000.

To achieve this, stakeholders are counting on the principle of focusing communication. The idea is simple: instead of transmitting a wave from an antenna in all directions, as is currently the case, it is more economical to send it towards the receiver. Focusing waves is not an especially new field of research. However, it was only recently applied to mobile communications. The ANR Trimaran project, coordinated by Orange and involving several academic and industrial partners[1] including IMT Atlantique, explored the solution between 2011 and 2014. Last November, Trimaran won the “Economic Impact” award at the ANR Digital Technology Meetings.

Also read on I’MTech: Research and Economic Impacts: “Intelligent Together”

 

In order to successfully focus a wave between the antenna and a mobile object, the group’s researchers have concentrated on a time reversal technique: “the idea is to use a mathematical property: a solution to the wave equation remains a solution when the direction of time is reversed,” says Patrice Pajusco, telecommunications researcher at IMT Atlantique. He explains with an illustration: “take a drop of water. If you drop it onto a lake, it will create a ripple that will spread to the edges. If you reproduce the ripple at the edge of the lake, you can create a wave that will converge towards the point in the lake where the drop of water fell. The same phenomenon happens again on the lake’s surface, but with time reversed.”

When applied to 5G, the principle of time reversal uses an initial transmission from the mobile terminal to the antenna. The mobile terminal sends a pilot signal: an electromagnetic wave that spreads through the air, is shaped by the terrain, and arrives at the antenna together with its echoes, forming a specific profile of its journey. The antenna records this profile and can retransmit it, time-reversed, so that it converges on the user’s terminal. The team at IMT Atlantique is especially involved in modeling and characterizing the communication channel that is created. “The physical properties of the propagation channel vary according to the echoes, which come from several directions and are more or less widely spaced. They must be well-defined in order for the design of the communication system to be effective,” Patrice Pajusco underlines.
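
To make the principle more concrete, here is a minimal numerical sketch of time-reversal precoding in Python. The channel, its echo delays and its amplitudes are invented for illustration; real channel models such as those studied in Trimaran are far richer, and a real system would work with complex baseband signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Uplink sounding: the terminal transmits a pilot; the base station
#    measures the channel impulse response h (direct path plus echoes).
h = np.zeros(64)
delays = [0, 7, 19, 33]                   # hypothetical echo delays (samples)
h[delays] = rng.normal(size=len(delays))  # hypothetical echo amplitudes

# 2. Downlink precoding: the base station transmits through the
#    time-reversed response (conjugated, for complex channels).
precoder = np.conj(h[::-1])

# 3. The precoded signal propagates through the same channel h, assuming
#    reciprocity. The effective channel is the autocorrelation of h: all
#    echoes add up coherently at a single instant, at the terminal.
effective = np.convolve(precoder, h)
peak = np.max(np.abs(effective))
print(f"focusing gain (peak/average): {peak / np.abs(effective).mean():.1f}")
```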

 

Focusing also depends on antennas

Improving this technique also involves working on the antennas used. Focusing a wave on a standard antenna is not difficult, but focusing on a specific antenna when there is another one nearby is problematic: the antennas must be spaced out for the technique to work. To relax this constraint, one of the partners in the project is working on new types of micro-structured antennas which make it possible to focus a signal over a shorter distance, therefore limiting the spacing requirement.

The challenge of focusing is so important that since January 2016, most of the partners in the Trimaran project have been working on a new ANR project called Spatial Modulation. “The idea of this new project is to continue to save energy, while transmitting additional information to the antennas,” Patrice Pajusco explains. Since it is possible to focus on one specific antenna among several, the choice of antenna itself carries information. “We will therefore be able to transmit several bits of information, simply by changing the focus of the antenna,” the researcher explains.
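
As a toy illustration of the idea (the antenna count and bit mapping below are ours, not the project’s), the choice of which of four antennas to focus on can carry two extra bits on top of the transmitted symbol:

```python
import math

N_ANTENNAS = 4                            # hypothetical antenna count
EXTRA_BITS = int(math.log2(N_ANTENNAS))   # 2 bits carried by the choice

def encode(bits: str) -> tuple[int, str]:
    """Split a bit string into (antenna to focus on, remaining payload)."""
    return int(bits[:EXTRA_BITS], 2), bits[EXTRA_BITS:]

antenna, payload = encode("100111")
print(f"focus on antenna {antenna}, transmit payload {payload}")
# -> focus on antenna 2, transmit payload 0111
```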

This new project brings on an additional partner, CentraleSupélec, an “expert in the field of spatial modulation,” Patrice Pajusco says. If the results are conclusive, it could eventually provide a technology to compete with MIMO antennas, which rely on many emitters and receivers to transmit a signal. “By using spatial modulation and focusing, we could have a solution that would be much less complex than the conventional MIMO system,” the researcher hopes. Focusing clearly has the capacity to bring great value to 5G. The fact that it can be applied to moving vehicles has already been judged as one of the most promising techniques by the H2020 METIS project, a reference in the European public-private partnership for 5G.

 

[1] The partners are Orange, Thalès, ATOS, Institut Langevin, CNRS, ESPCI Paris, INSA Rennes, IETR, and IMT Atlantique (formerly Télécom Bretagne and Mines Nantes).


Arago, technology platform in optics for the industrial sector

Arago is a technology platform specializing in applied optics, based at IMT Atlantique’s Brest campus. It provides scientific services and a technical environment for manufacturers in the optics and vision sectors. It has unique expertise in the fields of liquid crystals, micro-optics and health.

 

Are you in the industrial sector, or a small business wanting to experiment with a disruptive innovation? Perhaps you are looking for innovative technological solutions, but don’t have the time or the technical means to develop them? The solution might be to try your luck with a technology platform.

Arago can help you turn your new ideas into a reality. Arago was designed to foster innovation and technology transfer in research. It is home to highly-skilled scientists and technological facilities. Nexter, Surys and Valéo are among the companies that have already placed their trust in the platform, a component of the Télécom & Société numérique Carnot institute specializing in the field of optics. Arago has provided them with access to a variety of high-end technology solutions for design, modeling, creating micro-optics, electro-optic functions based on liquid crystals, and composite materials for vision and protection. The fields of application are varied, ranging from protective and corrective glasses to holographic marking. It all involves a great deal of technical expertise, which we discussed step by step with Jean-Louis de Bougrenet, researcher at IMT Atlantique and the creator of the Arago platform.

 

Health and impact measurement of new technologies

Health is a field which benefits directly from Arago’s potential. The platform can directly count on the Health Interest Group 3D Fovéa, which includes IMT Atlantique, the University Hospital of Brest, and INSERM, and which led to the creation of the spinoff Orthoptica (the French leader in digital orthoptic tools). Thanks to Orthoptica, the platform was able to set up Binoculus, an orthoptic platform.

This operates as a digital platform and aims to replace the existing tools used by orthoptists, which are not very ergonomic. The technology consists solely of a computer, a pair of 3D glasses, and a video projector, and it makes orthoptic care more widely available by reducing the duration of tests. The glasses are made with shutter lenses, allowing the doctor to block each eye in synchronization with the material being projected. This orthoptic tool is used to evaluate the different degrees of binocular vision. It aims to help adolescents who have difficulties with fixation or concentration.

Acting for health now is worthwhile, but anticipating the needs of the future is even better. In this respect, the platform plays an important role for ANSES[1] in the evaluation of risks involved in deploying new immersive technologies such as the Oculus and Vive virtual reality headsets. “When the visual system is involved, an impact study with scientific and clinical criteria is essential,” explains Jean-Louis de Bougrenet. These evaluations are based on in-depth testing carried out on clinical samples, in liaison with hospitals such as Necker, HEGP (Georges Pompidou) and the University Hospital of Brest.

 

Applications in diffractive micro-optics and liquid crystal

At the same time, Arago has the expertise of researchers with several decades of experience in liquid crystal engineering (Pierre-Gilles de Gennes laboratory). “These materials present significant electro-optical effects, enabling us to modulate the light in different ways, at very low voltage”, explains Jean-Louis de Bougrenet.

The researchers have used liquid crystals for many industrial purposes (protection goggles, spectral filters, etc.). In fact, liquid crystals will soon be part of our immediate surroundings without us even knowing: they are used in flat screens, in 3D and augmented reality goggles, or as a camouflage technique (Smart Skin). To support these developments, Arago’s manufacturing and testing facilities are unique in France (including over 150 m² of cleanrooms).

Other objects are also omnipresent in our daily lives, even in our smartphones: diffractive micro-optics. One of their specific features is that they come in widely different sizes. Arago has all the tools necessary to design and produce these optics at different scales, both individually and collectively, with easily-industrialized nano-imprint duplication processes. “We use micro-optics in many fields, for example manufacturing security holograms, biometric recognition, quality control and the automobile sector,” explains Jean-Louis de Bougrenet. Researchers recently set up a so-called two-photon photopolymerization technique, which allows the direct fabrication of fully 3D nanostructures.

 

European ambitions

Arago is also involved in many other projects. Since 2016, it has hosted an IRT BCom platform. This platform is dedicated to creating very high-speed optical transmission systems in free space, for wireless connections for virtual reality headsets in environments such as the Cave Automatic Virtual Environment (CAVE).

Already firmly established in Brittany, Arago recently finalized a technology partnership with the INL (International Iberian Nanotechnology Laboratory, a European platform similar to CERN). The partnership involves pooling resources, privileged access to technology, and the creation of joint European projects. The INL is unique in the European field of nanoscience and nanotechnology, and Arago contributes the complementary material components it had been lacking. For Jean-Louis de Bougrenet, “in the near future, Arago will become part of the European technology cluster addressing new industrial actors, by adding to our offering and more easily integrating European programs with a sufficient critical mass. This partnership will enable us to develop our emerging activity in the field of intelligent sensors for the environment and biology”.

 

 [1] ANSES: French Agency for Food, Environmental and Occupational Health & Safety

Some examples of products developed through Arago

 

[one_half][box]

Night-vision driving glasses

These glasses are designed for driving at night, and prevent the driver from being dazzled by oncoming headlights. They were designed in close collaboration with Valéo.

These glasses use two combined effects: absorption and reflection. This is possible due to a blend of liquid crystals and absorbent material, which creates a variable density. This technology is the result of several years’ research. Several patents have been registered, and have been licensed to the industrial partner.

[/box][/one_half]

[one_half_last][box]

Holographic marking to fight counterfeiting

This marking was entirely designed by Arago. Micro-optics are integrated into banknotes, for example, to fight counterfeiting. They come in the form of holographic films or strips. Their manufacture uses a copying system which reduces production costs, making mass production possible. The work was carried out in close collaboration with industrial partner Surys. Patents have been registered and transferred. The project also led to a copying machine being created for the industrial partner. The machine is currently being used by the company.

[/box][/one_half_last]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


The television of the future: secondary screens to enrich the experience?

The television of the future is being invented in the Eurecom laboratories, at Sophia-Antipolis. This is not about creating technologies for even bigger, sharper screens, but rather reinventing the way TV is used. In a French Unique Interministerial Fund (FUI) project named NexGenTV, researchers, broadcasters and developers have joined forces to achieve this goal. The project was launched in 2015 for a duration of three years, and is already showing results. Raphaël Troncy is a Eurecom researcher in data sciences who is involved in the project. He presents the progress that has been made to date and the potential for improvement, primarily based on enriching content with a secondary screen.

 

With the NexGenTV project, you are trying to reinvent the way we use TV. Where did this idea come from?

Raphaël Troncy: This is a fairly widespread movement in the audiovisual sector. TV channels are realizing that people are using their television screens less and less to watch their programs. They watch them through other modes, such as replay or special mobile phone apps, and do other things at the same time. TV channels are pushing for innovative applications. The problem is that nobody really knows what to do, because nobody knows what users want. At Eurecom, we worked on an initial project financed by the European FP7 program, called LinkedTV. We worked with users to find out what they want, and what the channels want. Then with NexGenTV we focused on applications for a second screen, like a tablet, to offer enriched content to viewers, while affording TV channels the ability to maintain editorial control.

 

Although the project won’t be completed until next year, have you already developed promising applications?

RT: Yes, our technology has been used since last summer by the tablet app for the beIN Sports channels. The technology allows users to automatically select the highlights and access additional content for Ligue 1 football matches. Users can access events such as goals or fouls, they can see who was touching the ball at a given moment, or statistics on each player, all in real time. We are working towards offering information such as replays of similar goals by other players in the championship, or goals in previous matches by the player who has just scored.

 

In this example, what is your contribution as a researcher?

RT: The technology we have developed opens up several possibilities. Firstly, it collects and formats the data sent by service providers. For example, the number of kilometers a player covers, or images of the goals. This is challenging, because live conditions mean that this needs to happen within the few seconds between the real event and the moment it is broadcast to the user. Secondly, the technology performs semantic analysis, extracting data from players’ sites, official FIFA or French Football Federation sites, or Wikipedia, to provide a condensed version to the TV viewer.
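
The NexGenTV pipeline itself is not public, but the enrichment step can be pictured with a short sketch: given an entity spotted in the broadcast, fetch a condensed description to push to the second screen. Wikipedia’s public REST summary endpoint stands in here for the official data feeds mentioned above.

```python
import requests

def condensed_summary(entity: str, lang: str = "en") -> str:
    """Fetch a short description of an entity to push to the second screen."""
    url = (f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/"
           + entity.replace(" ", "_"))
    resp = requests.get(url, timeout=2)   # live TV leaves only a few seconds
    resp.raise_for_status()
    return resp.json().get("extract", "")

print(condensed_summary("Zinedine Zidane")[:140])
```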

 

 

Do you also perform image analysis, for example to enable viewers to watch similar goals?

RT: We did this for the first prototypes, but we realized that the data provided were already rich enough. However, we do analyze images for another use: the many political debates taking place at present during the election period. There is no application for this yet; we are developing it. But we practiced on the debates for the two primary elections, and we are continuing with the current and upcoming debates for the presidential and legislative elections. We would like to be able to put an extract of a candidate’s previous speech on the tablet while they are talking about a particular subject, either because what they are saying is complementary or contradictory, or because it is linked to a relevant proposition in their program. We also want to be able to isolate the “best moments” based on parallel activity on Twitter, or on a semantic analysis of the candidates’ speeches, and offer a condensed summary.

 

What is the value of image analysis in this project?

RT: For replays, image analysis allows us to better segment a program, to offer the viewer a frame of reference. But it also provides access to specific information. For example, during the last debate of the right-wing primary election, we measured the on-screen presence time of the candidates, using facial recognition based on deep learning. We wanted to see if there was a difference in the way the candidates were treated, or if there was an equal balance, as is the case with speaking time, which is controlled by the CSA (French institution for media regulation). We realized that the broadcasters’ camera choices were more heavily weighted towards Nicolas Sarkozy than the other candidates. This can be explained: he was strongly challenged by the other candidates, and so the cameras focused on him even when he wasn’t speaking. But it also demonstrates how an image recognition application can provide viewers with keys to interpreting programs.
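
The team’s own pipeline is not detailed in the interview, but the measurement can be sketched with the open-source face_recognition library, assuming one reference photo per candidate and the debate decoded into RGB frames:

```python
import face_recognition

def screen_time(frames, reference_photo: str) -> float:
    """Fraction of frames in which the reference face appears on screen."""
    ref = face_recognition.load_image_file(reference_photo)
    ref_encoding = face_recognition.face_encodings(ref)[0]
    hits = 0
    for frame in frames:                  # numpy arrays, RGB, one per frame
        encodings = face_recognition.face_encodings(frame)
        if any(face_recognition.compare_faces(encodings, ref_encoding)):
            hits += 1
    return hits / len(frames)
```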

 

The goal of your technology is to give even more context, and inform the user?

RT: Not necessarily; we also have an example of use with an educational program broadcast on France Télévisions. In this case, we wanted to offer viewers quizzes as supplementary educational material. We are also working on adapting advertising to users for replay viewing. The idea is to make the most of the potential of secondary screens to improve the user’s experience.

 

[divider style=”normal” top=”20″ bottom=”20″]

NexGenTV: a consortium for inventing the new TV

The NexGenTV project combines researchers from Eurecom and the Irisa joint research unit (co-supervised by IMT). The consortium also includes companies from the audiovisual sector. Wildmoka is in charge of creating applications for secondary screens, along with Envivio (taken over by Ericsson) and Avisto. The partners are working in collaboration with an associated broadcasters club, who provide the audiovisual content required for creating the applications. The club includes France Télévisions, TF1, Canal+, etc.

[divider style=”normal” top=”20″ bottom=”20″]


Cybersecurity: Detect and Conquer

Researchers from Université Paris-Saclay member institutions are developing algorithms and visual tools to help detect and counteract cybersecurity failures.

 

You can’t fight what you can’t see. The protection of computer systems is a growing concern, with an increasing number of smart devices gathering our private data. Computer security has to cover hardware as well as software vulnerabilities, including network access, and it needs to offer efficient countermeasures. But the first step to cybersecurity is to detect and identify intrusions and cyberattacks.

Typical attacks have adverse effects on the availability of a service (Denial of Service), try to steal confidential information, or compromise the service’s behavior by modifying the flow of events produced during an execution (that is, by adding, removing or modifying events). They are difficult to detect in a highly distributed environment (such as the cloud or e-commerce applications), where the order of the observed events is partially unknown.

Researchers from CentraleSupélec designed a new approach to tackle this issue. They used an automaton, modeling the correct behavior of a distributed application, and a list of temporal properties that the computation must comply with in any execution (“is always or never followed by”, “always precede”, etc.). The automaton is then able to generalize the model from a finite (thus incomplete) set of behaviors. It also avoids introducing incorrect behaviors in the model in the learning phase. Combining these two types of methods (automaton and list), the team managed to lower the rate of false positives (down to 2% in certain cases) and the mean time necessary to detect an intrusion (less than one second).
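
A minimal sketch of these two combined checks might look as follows, with events reduced to plain strings; the real model is learned from traces of a distributed application and copes with partially ordered events.

```python
def learn_transitions(traces):
    """'Automaton' reduced to the set of event pairs seen in correct runs."""
    allowed = set()
    for trace in traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def is_anomalous(trace, allowed, properties):
    # Check 1: every consecutive transition must have been seen in training.
    if any(pair not in allowed for pair in zip(trace, trace[1:])):
        return True
    # Check 2: temporal properties, e.g. ("login", "logout") meaning
    # "login is always eventually followed by logout".
    for a, b in properties:
        for i, event in enumerate(trace):
            if event == a and b not in trace[i + 1:]:
                return True
    return False

normal = [["login", "query", "logout"], ["login", "logout"]]
allowed = learn_transitions(normal)
print(is_anomalous(["login", "query", "query"], allowed, [("login", "logout")]))
# -> True: the "query -> query" transition was never observed
```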

Another team from the same UPSaclay member chose a different approach. Researchers designed an intuitive visualization tool that helps to easily and automatically manage security alerts. Cybersecurity mechanisms raise large quantities of alerts, many of which are false positives. VEGAS, for “Visualizing, Exploring and Grouping AlertS”, is a customizable filter system. It offers the front-line security operator (in charge of dispatching the alerts to security analysts) a simple 2D representation of the original dataset of alerts they receive. Alerts that are close in the original dataset remain close in the computed representation, while alerts that are distant stay distant. The operator can then select alerts that visually appear to belong to the same group, i.e. similar alerts, to generate a new rule to be inserted in the dispatching filter. That way, the number of alerts the front-line security operator receives is reduced, and security analysts only get the alerts they need to investigate further.
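
The underlying principle, close alerts stay close and distant alerts stay distant, is that of a distance-preserving projection into 2D. Here is a minimal sketch using PCA; the projection actually used by VEGAS may differ, and the alert features below are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 200 alerts x 10 numeric features (ports, severity, source entropy, ...)
alerts = np.vstack([rng.normal(0, 1, (100, 10)),    # one family of alerts
                    rng.normal(5, 1, (100, 10))])   # a clearly distinct family

xy = PCA(n_components=2).fit_transform(alerts)      # the operator's 2D view
print(xy.shape)   # (200, 2): each alert becomes a point that can be grouped
```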

Those analysts could then use another visualization tool developed by a team from CNRS and Télécom SudParis to calculate the impact of cyber attacks and security countermeasures. Here, systems are given coordinates in multiple spatial, temporal and context dimensions. For instance, in order to access a web-server (resource) of a given organization, an external user (user account) connects remotely (spatial condition) to the system by providing their login and password (channel) at a given date (temporal condition).

In this geometrical model, an attack that compromises some resources using a given channel will be represented as a surface (square or rectangle). If it also compromises some users, it will be a parallelepiped. Conversely, if we only know which resources are compromised, the attack will only affect one axis of the representation and will be a line.

 

“Cybersecurity mechanisms raise large quantities of alerts.”

Researchers then geometrically determine the portion of the service that is under attack and the portion of the attack controlled by a given security measure. They can automatically calculate the residual risk (the percentage of the attack that is left untreated by any countermeasure) and the potential collateral damage (the percentage of the service that is not under attack but is affected by a given countermeasure). Such figures allow security administrators to compare the impact of multiple attacks and/or countermeasures in complex attack scenarios. Administrators are able to measure the size of cyber events, identify vulnerable elements and quantify the consequences of attacks and measures.
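
With axis-aligned boxes standing in for the model’s polytopes, both percentages reduce to elementary interval arithmetic. All coordinates below are invented for illustration.

```python
import math

def volume(box):
    """box = [(lo, hi), ...], one interval per dimension."""
    return math.prod(hi - lo for lo, hi in box)

def intersection(a, b):
    box = [(max(l1, l2), min(h1, h2)) for (l1, h1), (l2, h2) in zip(a, b)]
    return box if all(lo < hi for lo, hi in box) else None

service        = [(0, 10), (0, 10)]  # resources x user accounts (invented)
attack         = [(4, 8),  (2, 6)]   # compromised resources and accounts
countermeasure = [(5, 10), (0, 10)]  # what the measure actually covers

treated  = intersection(attack, countermeasure)
residual = 1 - (volume(treated) / volume(attack) if treated else 0)
print(f"residual risk: {residual:.0%}")        # attack left untreated: 25%

covered    = intersection(service, countermeasure)
collateral = (volume(covered) - (volume(treated) if treated else 0)) / volume(service)
print(f"collateral damage: {collateral:.0%}")  # healthy service affected: 38%
```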

But what if the attack takes place directly in the hardware? Indeed, when outsourcing their circuits, companies cannot be sure that no malicious circuit, such as a hardware trojan horse, has been introduced. Researchers from Télécom ParisTech proposed a metric to measure the impact of the size and location of a trojan: using this metric, there is a probability greater than 99% (with a false negative rate of 0.017%) of detecting a hardware trojan bigger than 1% of the original circuit.

More recently, the same team designed a sensor circuit to detect “electromagnetic injection”, an intentional fault injection utilized to steal secret information hidden inside integrated circuits. This sensor circuit has a high fault detection coverage and a small hardware overhead. So maybe you can fight what you can’t see, or at least you can try to. You just have to be prepared!

 

References
Gonzalez-Granadillo et al. A polytope-based approach to measure the impact of events against critical infrastructures. Journal of Computer and System Sciences, Volume 83, Issue 1, February 2017, Pages 3-21
Crémilleux et al. VEGAS: Visualizing, exploring and grouping alerts. NOMS 2016 – 2016 IEEE/IFIP Network Operations and Management Symposium, Istanbul, 2016, pp. 1097-1100
Totel et al. Inferring a Distributed Application Behavior Model for Anomaly Based Intrusion Detection. 2016 12th European Dependable Computing Conference (EDCC), Gothenburg, 2016, pp. 53-64
Miura et al. PLL to the rescue: A novel EM fault countermeasure. 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, 2016

 

[divider style=”normal” top=”5″ bottom=”5″]

The original version of this article has been published in l’Édition of l’Université Paris Saclay.

 


Supply chain management: Tools for responding to unforeseen events

The inauguration of IOMEGA took place on March 14, 2017. This demonstration platform is designed to accelerate the dissemination of contributions by Mines Albi researchers in the industrial world, particularly concerning their expertise in Supply Chain Management. Matthieu Lauras, an industrial engineering researcher, is already working on tools to help businesses and humanitarian agencies manage costs and respond to unforeseen events.

 

First there were Fordism and Toyotism; then came Supply Chain Management (SCM). During the 1990s, businesses were marked by the globalization of trade and offshoring. They began working in networks and relying on constantly changing information and communication technologies. It was clear that the industrial organization of the past would no longer work. Researchers named this revolution Supply Chain Management.

Twenty-five years later SCM has come a long way and has become a discipline in its own right. It aims to manage all of the various flows (materials, information, cash) which are vital to a business or a network of businesses.

Supply Chain Management today

Supply chain management considers the entire network, from suppliers to final users of a product (or service). Matthieu Lauras, an industrial engineering researcher at Mines Albi, gives an example: “For the yogurt supply chain, there are the suppliers of raw materials (milk, sugar, flour…), then the purchase of packaging to make the cups and boxes, etc.” Supply chain management coordinates all these various flows in order to manufacture products on schedule and deliver them to the right place, in keeping with the planned budget.

SCM concerns all sectors of activity from the manufacturing industry to services. It has become essential to a business’s performance and management. But there is room for improvement. Up until now, the tools created have been devoted to cost control. “The competitiveness problem that businesses are currently facing is no longer linked to this element. What now interests them is their ability to detect disruptions and react to them. That’s why our researchers are focusing on supply chain agility and resilience,” explains Matthieu Lauras. At Mines Albi, researchers are working on improving SCM tools using a blend of IT and logistics skills.

Applied research to better handle unforeseen events

A number of elements can disrupt the proper functioning of supply chains. On one hand, markets are constantly changing, making it difficult to estimate production volumes. On the other hand, globalization has made transport subject to greater variations. “The strength of a business lies in its ability to handle disruptions,” notes Matthieu Lauras. This is why researchers are developing new tools which are better suited to these networks. “We are working on detecting differences between what was planned and what is really happening. We’re also developing decision-making support resources in order to enhance decision-makers’ ability to adapt. This helps them take corrective action in order to react quickly and effectively to unforeseen events,” explains the researcher.

As a first step, researchers are concentrating on the resistance and resilience of the network. They have set up research designs based on simulations of disruptions in order to evaluate the chain’s response to these events. Modeling makes it possible to test different scenarios and evaluate the impact of a disruption according to its magnitude and location in the supply chain. “We are working on a concrete case as part of the Agile Supply Chain Chair with Pierre Fabre. For example, this involves evaluating whether a purchaser’s network of suppliers would be able to cope with significant variations in demand. It is also important to determine whether the purchaser could maintain his activity in acceptable conditions in the event of a sudden default of one of these partners,” explains Matthieu Lauras.
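
A toy Monte Carlo version of such a resilience test might look like this; the supplier capacities, default probability and demand are invented, and real studies of this kind model far more of the network.

```python
import random

random.seed(42)
suppliers = {"A": 400, "B": 300, "C": 300}   # units/week each can deliver
DEMAND, P_DEFAULT, RUNS = 900, 0.05, 10_000  # weekly demand, default rate

fill_rates = []
for _ in range(RUNS):
    # Each supplier independently defaults with probability P_DEFAULT.
    capacity = sum(c for c in suppliers.values() if random.random() > P_DEFAULT)
    fill_rates.append(min(capacity, DEMAND) / DEMAND)

print(f"average fill rate under disruption: {sum(fill_rates) / RUNS:.1%}")
```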

New technology for real-time monitoring of supply chains

Another area of research is real-time management. “We use connected devices because they allow us to obtain information at any time about the entire network. But this information arrives in a jumble…that’s why we are working on tools based on artificial intelligence to help ‘digest’ it and pass on only what is necessary to the decision-maker,” says the researcher.

In addition, these tools are tested through collaborations with businesses and final users. “Using past data, we observe the level of performance of traditional methods in a disrupted situation. Performance is measured in terms of fill percentage, cycle time (time it takes between a certain step and delivery for example), etc. Then we simulate the performance we would obtain using our new tools. This allows us to measure the differences and demonstrate the positive impact,” explains Matthieu Lauras.

Industry partners then provide the opportunity to conduct field experiments. If the results are confirmed, partners like Iterop carry out the necessary development of commercial devices which then serve a wider range of users. Founded in 2013 by two former Mines Albi PhD students, the start-up Iterop develops and markets software solutions for simplifying the collaboration between a company’s personnel and its information system.

A concrete example: The Red Cross

Mines Albi researchers are working on determining strategic locations around the world for the Red Cross to pre-position supplies, thus enabling the organization to respond to natural disasters more quickly. Unlike businesses, humanitarian agencies do not strive to make a profit but rather to control costs. This gives them a greater scope of action and allows them to take action in a greater number of operational areas for the same cost.

Matthieu Lauras explains: “Our research has helped reorganize the network of warehouses used by this humanitarian agency. When a crisis occurs, it must be able to make the best choices for the necessary suppliers and mode of transport. However, it does not currently have a way to measure the pros and cons of these different modes. For example, it focuses on its list of international suppliers but does not consider local suppliers. So we provide decision-making support tools for planning and taking action in the short term, in order to make decisions in an emergency situation.”

But is it possible to transpose techniques from one sector to another? Naturally, researchers have identified this possibility, which is referred to as cross-learning. Supply chains in the humanitarian sector already function with agility, while businesses control costs. “We take the best practices from one sector and use them in another. The difficulty lies in successfully adapting them to very different environments,” explains Matthieu Lauras. In both cases, this applied research has proven to be successful and will only continue to expand in scope. The arrival of the IOMEGA platform should help researchers perform practical tests and reduce the time required for implementation.

 

[box type=”shadow” align=”” class=”” width=””]

IOMEGA: Mines Albi’s industrial engineering platform

This platform, which was inaugurated on March 14, 2017, makes it possible for Mines Albi to demonstrate tools for product design and configuration as well as for designing information systems for crisis management, risk management for projects and supply chain management.

Most importantly it offers decision-making support tools for complex and highly collaborative environments. For this, the platform benefits from experiment kits for connected devices and computer hardware with an autonomous network. This technology makes it possible to set up experiments under the right conditions. An audiovisual system (video display wall, touchscreen…) is also used for the demonstrations. This helps potential users immerse themselves in configurations that mimic real-life situations.

IOMEGA was designed to provide two spaces for scenario configuration on which two teams may work simultaneously. One uses conventional tools while the other tests those from the laboratory.

A number of projects have already been launched involving this platform, including the Agile Supply Chain Chair in partnership with Pierre Fabre, the AGIRE joint laboratory dedicated to the resilience of businesses in association with AGILEA (a supply chain management consulting firm). Another project is a PhD dissertation on the connected management of flows of urgent products with the French blood bank (EFS). In the long term, IOMEGA should lead to new partnerships for Mines Albi. Most importantly, it strives to accelerate the dissemination of researchers’ contributions to the world of industry and users.

© Mines Albi

[/box]


Researchers, the cyber-ramparts of critical infrastructures

Cyber protection for nuclear power stations or banks cannot be considered in the same way as for other structures. Frédéric Cuppens is a researcher at IMT Atlantique and leader of the Chair on Cybersecurity of Critical Infrastructures. He explains his work in protecting operators whose correct operation is vital for our country. His Chair was officially inaugurated on 11 January 2017, strengthening state-of-the-art research on cyberdefense.

 

The IMT chair you lead addresses the cybersecurity of critical infrastructures. What type of infrastructures are considered to be critical?

Frédéric Cuppens: Infrastructures which allow the country to operate correctly. If they are attacked, failure could place the population at risk or seriously damage the delivery of essential services to citizens. The activities of these operators cover a variety of domains, and this diversity is reflected in our Chair’s industrial partners, which include stakeholders in energy generation and distribution (EDF), telecommunications (Orange and Nokia), and defense (Airbus Defence and Space). There are also sectors which are perhaps less obvious at first but which are just as important, such as banking and logistics, for which we are working with Société Générale, BNP Paribas and La Poste[1].

Also read on I’MTech: The Cybersecurity of Critical Infrastructures Chair welcomes new partner Société Générale

 

The Chair on Cybersecurity of Critical Infrastructures is relatively recent, but these operators did not wait until then to protect their IT systems. Why are they turning to researchers now?

FC: The difference now is that more and more automatons and sensors in these infrastructures are connected to the internet. This increases the vulnerability of IT systems and the severity of potential consequences. In the past, an attack on these systems could crash internal services or slow production down slightly, but now there is a danger of major failure which could put human lives at risk.

 

How could that happen in concrete terms?

FC: Because automatons are connected to the internet, an attacker could quite conceivably take control of a robot holding an item and tell it to drop it on someone. There is an even greater risk if the automaton handles explosive chemical substances. Another example is an attack on a control system: the intruder could see everything taking place and send false information. In a combination of the two examples, the danger is very great: an attacker could take control of whatever he wants and make it impossible for staff members to react by preventing control.

 

How do you explain the vulnerability of these systems?

FC: Systems in traditional infrastructures, like the ones in question, were computerized a long time ago. At the time, they were isolated from an IT point of view. As they weren’t designed to be connected to the internet, their security must now be updated. Today, cameras or automatons can still contain vulnerabilities, because their primary function is to film and handle objects, not necessarily to resist every possible kind of attack. This is why our role is first and foremost to detect and understand the vulnerabilities of these tools depending on their cases of use.

 

Are cases of use important for the security of a single system?

FC: Of course, and measuring the impact of an attack according to an IT system’s environment is at the core of our second research focus. We develop adapted measurements to identify the direct or potential consequences of an attack, and these measurements will obviously have different values depending on whether an attacked automaton is on a military boat or in a nuclear power station.

 

In this case, can your work with each partner be reproduced for protecting other similar infrastructures, or is it specific to each case?

FC: There are only a limited number of automaton manufacturers for critical applications: there must be 4 or 5 major suppliers in the world. The cases of use do of course affect the impact of an intrusion, but the vulnerabilities remain the same. Part of what we do can therefore be reproduced. On the other hand, we have to be specific with regard to the measurement of impact. Research funding institutions take the same line: the projects of the French Investments for the Future program and of the European H2020 program strongly encourage us to work on specific cases of use. That said, we still sometimes address topics that are not linked to a particular case of use, but which are more general.

 

The new topics that the Chair plans to address include a project called Cybercop 3D for visualizing attacks in 3D. It seems a rather unusual concept at first sight.

FC: The idea is to improve control tools, which are currently similar to a spreadsheet with different colored lines to facilitate visualizing the data on the condition of the IT system. We could use 3D technology to allow computer engineers to view a model of places where intrusions are taking place in real time, and improve the visibility of correlations between events. This would also provide a better understanding of attack scenarios, which are currently presented as 2D tree views and quickly become unreadable. 3D technology could improve their readability.

 

The issue in hand is therefore to improve the human experience in measures taken against the attacks. What is the importance of this human factor?

FC: It is vital. As it happens, we are planning to launch a research topic on this subject by appointing a researcher specializing in qualitative psychology. This will be a cross-cutting topic, but will above all complement our third focus, which develops decision-making tools to provide the best advice for people in charge of rolling out countermeasures in the event of an attack. The aim is to see whether, from a psychological point of view, the solution proposed to humans by the decision-making tool will be interpreted correctly. This is important because, in this environment, staff are used to managing accidental failure and do not necessarily respond by thinking it is a cyber attack. It is therefore necessary to make sure that when the decision-making tool proposes something, it is understood correctly. This is all the more important given that the operators of critical systems do not follow an automation rationale: it is still humans who control what happens.

 

[1] In addition to the operators of critical infrastructures mentioned, the Chair’s partners also include Amossys, a company specializing in cybersecurity expertise. In addition, there are institutional partners with Région Bretagne and FEDER, Fondation Télécom and IMT’s schools: IMT Atlantique, Télécom ParisTech and Télécom SudParis.

 

 

 


Personal data: How the GDPR is changing the game in Europe

The new European regulation on personal data will become officially applicable in May 2018. The regulation, which complements and strengthens a European directive from 1995, guarantees unprecedented rights for citizens, including the right to be forgotten, the right to data portability, and the right to be informed of security failures in the event of a breach involving personal data… But for these measures to be effective, companies in the data sector will have to play their part. They have little time to comply with this new legislation, which, for most companies, will require major organizational changes. Failure to make these changes will expose them to the risk of heavy sanctions.

 

With very little media coverage, the European Union adopted the new General Data Protection Regulation (GDPR) on April 27, 2016. Yet this massive piece of legislation, featuring 99 articles, includes plenty of issues that should arouse the interest of European citizens. Because, starting on May 25, 2018, when the regulation becomes officially applicable in the Member States, users of digital services will acquire new rights: the right to be forgotten, in the form of a right to be dereferenced, an increased consideration of their consent to use or not use their personal data, increased transparency on the use of this data… And the two-year period, from the moment the regulation was adopted to the time of its application, is intended to enable companies to adapt to these new constraints.

However, despite this deferment period, Claire Levallois-Barth, coordinator of the IMT chair Values and policies of personal information (VPIP) assures us that “two years is a very short period”. The legal researcher bases this observation on the work she has carried out among the companies she interviewed. Like many stakeholders in the world of digital technology, they find themselves facing new concepts introduced by the GDPR. Starting in 2018, for example, they must ensure their customers’ right to data portability. Practically speaking, each user of a digital service will have the option of taking his or her personal data to a competitor, and vice versa.


Claire Levallois-Barth, coordinator of the IMT chair Values and policies of personal information (VPIP)

Two years does not seem very long for establishing structures that will enable customers to exercise this right to data portability. Because, although the regulation intends to ensure this possibility, it does not set concrete procedures for accomplishing this: “therefore, it is first necessary to understand what is meant, in practical terms, by a company ensuring its customers’ right to data portability, and then define the changes that must be made, not only in technical terms, but also in organizational terms, including the revision of current procedures and even the creation of new procedures,” explains Claire Levallois-Barth.

The “privacy by design” concept, which is at the very heart of the GDPR and symbolizes this new way of thinking about personal data protection in Europe, is just as constraining for organizations. It requires all of the principles that govern the use of personal data (purpose, proportionality, duration of data storage, transparency…) to be integrated in advance, beginning at the design phase of a product or service. Furthermore, the regulation is now based on the principle of accountability, which implies that the company itself must be able to prove that it respects this legislation by keeping updated proof of its compliance. The design phases for products and services, as well as the procedures for production and use, must therefore be revised in order to establish internal governance procedures for personal data. According to Claire Levallois-Barth, “for the most conscientious companies, the first components of this new governance were presented to the executive committee before the summer of 2016.”

 

Being informed before being ready

While some companies are in a race against time, others are facing problems that are harder to overcome. During the VPIP Chair Day held last November 25th, dedicated to the Internet of things, Yann Padova, the Commissioner specializing in personal data protection at the French Energy Regulatory Commission (CRE), warned that “certain companies do not yet know how to implement the new GDPR regulations.” Not all companies have access to the skills required for targeting the organizational levers that must be established.

For example, the GDPR mentions the requirement, in certain cases, for a company that collects or processes users’ data, to name a Data Protection Officer (DPO). This expert will have the role of advising the data controller—in other words, the company—to ensure that it respects the new European regulation. But depending on the organization of major groups, some SMEs will only play a subcontracting role in data processing: must they also be prepared to name a DPO? The companies are therefore faced with the necessity of quickly responding to many questions, and clear-cut answers do not always exist. And another reality is even more problematic: some companies are not at all informed of the contents of the GDPR.


Yann Padova, CRE Commissioner

Yann Padova points out that before they can be ready, companies must be aware of the challenges. Yet he recognizes that he “does not see many government actions in France that explain the coming regulations.” Joining him to discuss this subject on November 25, lawyer Denise Lebeau-Marianna—in charge of personal data protection matters at the law firm of Baker & McKenzie—confirmed this lack of information, and not only in France. She cited a study on companies’ readiness for the GDPR that was carried out by Dimensional Research and published in September 2016. Out of 821 IT engineers and company directors in the data sector, 31% had heard about the GDPR, but were not familiar with its contents, and 18% had never heard of it.

 

Without sufficient preparation, companies will face risks… and sanctions

For Claire Levallois-Barth, it seems obvious that with all of these limits, not all companies will comply with all aspects of the GDPR by 2018. So, what will happen then? “The GDPR encourages companies to implement protection measures that correspond to the risk level their personal data processing activities present. It is therefore up to companies to quantify and assess this risk. They then must eliminate, or at least reduce the risks in some areas, bearing in mind that the number of data processing operations is in the tens or even hundreds for some companies,” she explains. What will these areas be? That depends on each company, what it offers its users and its ability to adapt within two years.

And if these companies are not able to comply with the regulations in time, they will be subject to potential sanctions. One of the key points of the GDPR is an increase in fines for digital technology stakeholders that do not comply with their obligations, especially regarding user rights. In France, the CNIL could previously impose a maximum penalty of €150,000, before the Law for a Digital Republic increased this amount to €3 million. But the GDPR, a European regulation with direct application, will supersede this part of French law in May 2018, imposing penalties of up to €20 million or 4% of a company’s total annual worldwide turnover.

The new European data protection board, which will replace the current G29, will be in charge of overseeing the application of the regulation. This organization, which brings together all of the European Union’s national data protection authorities (the equivalents of the French CNIL), has just published its first three opinions on the regulation issues that require clarification, including portability and the DPO. This should remove some of the areas of uncertainty surrounding the GDPR, although the biggest remaining question is that of the GDPR’s real, long-term effectiveness.

Because, although in theory the regulation proposed by the EU is aimed at better protecting users’ personal data in our digital environment, and at simplifying administrative procedures, many points still seem unclear. “Until the regulation has come into effect and the European Commission has published the implementing acts presenting the regulation, it will be very difficult to tell whether the protection for citizens will truly be reinforced,” Claire Levallois-Barth concludes.

 

 


Scalinx: Electronics, from one world to another

The product of research carried out by its founder, Hussein Fakhoury, at the Télécom ParisTech laboratories (part of the Télécom & Société numérique Carnot institute), Scalinx is destined to shine as a French gem in the field of electronics. By developing a new generation of analog-to-digital converters, this startup is attracting the attention of stakeholders in strategic fields such as the defense and space sectors. These components are found in all electronic systems that interface analog and digital functions, and the performance of such systems depends on the quality of the converters they use.

 

“We live in an analog world, whereas machines exist in a digital world,” Hussein Fakhoury explains. According to this entrepreneur, founder of the startup Scalinx, all electronic systems must therefore feature a component that can transform analog magnitudes into digital values. “This converter plays a vital role in enabling computers to process information from the real world,” he insists. Why is this? It makes it possible to transform a value that is continually changing over time, like an electrical voltage, into digital data that can be processed by computer systems. And designing this interface is precisely what Hussein Fakhoury’s startup specializes in.
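
What the converter does can be pictured with a back-of-the-envelope sketch: sample a continuously varying waveform, then round each sample to one of 2^N digital codes. Real converter architectures, including those developed by Scalinx, are of course far more involved.

```python
import numpy as np

N_BITS, FS, F_SIGNAL = 12, 1_000_000, 10_000   # resolution, sample rate, Hz
t = np.arange(0, 1e-3, 1 / FS)                 # 1 ms of sampling instants
analog = np.sin(2 * np.pi * F_SIGNAL * t)      # the "real world" voltage

levels = 2 ** N_BITS                           # 4096 codes at 12 bits
codes = np.round((analog + 1) / 2 * (levels - 1)).astype(int)
print(f"first digital codes: {codes[:4]}")

# Ideal quantization SNR grows by ~6.02 dB per bit of resolution.
print(f"ideal SNR at {N_BITS} bits: {6.02 * N_BITS + 1.76:.1f} dB")
```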

Scalinx develops next generation analog-to-digital converters. Based on a different architectural approach than that used by its competitors, the components it has developed offer many advantages for applications that require a fast digitization system. “By using a new electronic design for the structure, we provide a much more compact solution that consumes less energy,” the startup founder explains. However, he points out that the Scalinx interfaces “are not intended to replace the historical architectures in every circumstance, since these historical structures are essential for certain applications.”

Hussein Fakhoury, the founder of Scalinx

These new converters are intended for specific markets, in which performance and the efficient use of space are of utmost importance. This is the case in the space electronics, defense, and medical imaging sectors. In this last sector, a prime example is ultrasound. While today ultrasound technology lets us see the fetus in a woman’s womb in two dimensions, medical imaging is increasingly moving towards 3D visualization. However, the transition from 2D to 3D requires probes that use many more converters. With the traditional architectures, the heat dissipation would become too great: it would not only damage the probe, but could also cause discomfort for the patient.

And the obstacles are not only of a technical nature; they are also strategic. The quality of an electronic system depends on this analog/digital interface. Quality is therefore of utmost importance for high-end systems. Currently, however, “the global leaders for high-performance components in this field are American,” Hussein Fakhoury observes. Yet the trade regulations, as well as issues of sovereignty and confidentiality of use can represent a limit for European stakeholders in critical areas like the defense sector.

 

A spin-off from Télécom ParisTech set to conquer Europe

Scalinx therefore wants to become a reference in France and Europe for converters intended for applications that cannot sacrifice energy consumption for the sake of performance. For now, the field appears to be open. “Few companies want to take on this strategic market,” the founder explains. The startup’s ambition seems to be taking shape: it benefited from two consecutive years of support from Bpifrance as a winner of the national i-Lab business start-up contest in 2015 and 2016, and it also received an honor loan from the Fondation Télécom in 2016.

Scalinx’s level of cutting-edge technology in the key area of analog-digital interfaces can be attributed to the fact that its development took place in an environment conducive to state-of-the-art innovation. Hussein Fakhoury is a former Télécom ParisTech researcher (part of the Télécom & Société numérique Carnot institute), and his company is a spin-off that has been carefully nurtured to maturity. “Already in 2004, when I was working for Philips, I thought the subject of converters was promising, and I began my research work in 2008 to improve my technical knowledge of the subject,” he explains.

Then, between 2008 and the creation of Scalinx in 2015, several partnerships were established with industrial stakeholders, which resulted in the next generation of components that the startup is now developing. NXP (the former Philips branch specialized in semiconductors), France Télécom (now Orange) and Thalès collaborated with the Télécom ParisTech laboratory to develop the technology that is today being used by Scalinx.

With this wealth of expertise, the company is now seeking to develop its business and acquire new customers. Its business model is based on a “design house” model, as Hussein Fakhoury explains: “The customers come to see us with detailed specifications or with a concept, and we produce a turnkey integrated circuit that matches the technical specifications we established together.” This is a concept the founder of Scalinx hopes to further capitalize on as he pursues his ambition of European conquest, an objective he plans to meet over the course of the next five years.

 

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


What is space telecommunication? A look at the ISS case

Laurent Franck is a space telecommunications researcher at IMT Atlantique. Space telecommunications systems enable us to exchange information with far-away objects (satellites, probes…). They also enable us to communicate with the International Space Station (ISS), a special and unusual case compared to the better-known example of satellite television. The researcher explains how these exchanges between Earth and outer space take place.

 

Since his departure for the ISS in November 2016, Thomas Pesquet has continued to delight the world with his photos of our Earth as seen from the sky. It’s a beautiful way to demystify life in space and make this profession, one that fascinates both young and old, more accessible. We have thus been able to see that the members of Expedition 51 aboard the ISS are far from lost in space. On the contrary, Thomas Pesquet was able to cheer on the French national rugby team on a television screen and communicate live with children from different French schools (most recently on February 23, in the Gard department). And you too can follow this ISS adventure live whenever you want. But how is this possible? To shed some light on this question, we met with Laurent Franck, a researcher in space telecommunications at IMT Atlantique.

 

What is the ISS and what is its purpose?

Laurent Franck: The ISS is a manned international space station. It accommodates international crews from the United States, Russia, Japan, Europe and Canada. It is a scientific base that enables scientific and technological experiments to be carried out in the space environment. The ISS is situated approximately 400 kilometers above the Earth’s surface. But it is not stationary in the sky: at this altitude, the laws of orbital mechanics make an object circle the Earth faster than the Earth itself rotates. The station therefore follows a circular orbit around our planet at a speed of 28,000 kilometers per hour, completing one revolution every 93 minutes.
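
These figures follow directly from orbital mechanics. As a minimal sketch, using the standard circular-orbit formulas and Earth’s known gravitational parameter (the only input taken from the interview is the 400 km altitude):

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3     # mean Earth radius, m

altitude = 400e3                          # ISS altitude quoted above, m
r = R_EARTH + altitude                    # orbital radius, m
v = math.sqrt(MU_EARTH / r)               # circular orbital speed, m/s
period = 2 * math.pi * r / v              # orbital period, s

print(f"Speed: {v * 3.6:,.0f} km/h")      # ~27,600 km/h
print(f"Period: {period / 60:.0f} min")   # ~92 min
```

The result, roughly 27,600 km/h and 92 minutes, matches the rounded figures the researcher gives.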

 

How can we communicate with the ISS?

LF: Not by wire, that’s for sure! We can communicate directly, meaning between a specific point on Earth and the space station. To do this, the station must be visible above us. We can get around this constraint by going through an intermediary: one or several satellites situated at a higher altitude can then be used as relays. The radio wave goes from the Earth to the relay satellite, and then to the space station, or vice versa. It is all quite an exercise in geometry. There are approximately ten American relay satellites in orbit, called TDRS (Tracking and Data Relay Satellites). Europe has a similar system called EDRS (European Data Relay System).

 

Why are these satellites located at a higher altitude than that of the space station?

LF: Let’s take a simple analogy. I take a flashlight and shine it on the ground, and I see a circle of light. If I raise the flashlight higher off the ground, this circle gets bigger. This spot of light represents the communication coverage between the ground and the object in the air. The ISS is close to the Earth’s surface, so it only covers a small part of the Earth, and this coverage is moving. Conversely, if I take a geostationary satellite at an altitude of 36,000 kilometers, the coverage is greater and corresponds to a fixed area on the Earth. Not only are few satellites required to cover the Earth’s surface, but the ISS can also communicate continuously, via the geostationary satellite, with a ground station located within this area of coverage. Thanks to this system, only three or four ground stations are required to stay in permanent contact with the ISS.
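
The flashlight analogy can be made quantitative with a spherical-cap area calculation. A minimal sketch (it assumes visibility all the way down to the horizon; real links require a minimum elevation angle, which shrinks these figures somewhat):

```python
import math

R_EARTH = 6_371.0  # mean Earth radius, km

def coverage_fraction(altitude_km: float) -> float:
    """Fraction of the Earth's surface visible from a satellite,
    down to the horizon (spherical-cap geometry)."""
    cos_lam = R_EARTH / (R_EARTH + altitude_km)
    return (1 - cos_lam) / 2

print(f"ISS (400 km): {coverage_fraction(400):.1%}")        # ~3.0%
print(f"GEO (35,786 km): {coverage_fraction(35_786):.1%}")  # ~42.4%
```

A geostationary satellite sees over 40% of the globe at once, while the ISS sees about 3%, which is why a handful of relays and ground stations suffice.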

 

Is live communication with the ISS truly live?

LF: There is a slight time lag, for two reasons. First, there is the time the signal takes to physically travel from point A to point B, which is set by the speed of light. It takes about 125 milliseconds to reach a geostationary satellite (whether for television or for satellite relays). We then must add the distance between the satellite and the ISS. This results in a travel time that is incompressible, since it is physical, of a little over a quarter of a second one way, or half a second there and back. This first time lag is easily observable when we watch the news on television: the studio asks a question and the reporter in the field seems to wait before answering, due to the time needed to receive the question via satellite and send back the reply!
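
The quarter-second figure is simple distance arithmetic. A minimal sketch, assuming a slant range of about 37,500 km from the ground station to the geostationary relay and a similar relay-to-ISS distance (the exact ranges vary with the geometry at each moment):

```python
C = 299_792  # speed of light, km/s

ground_to_geo = 37_500  # assumed slant range to the relay, km
geo_to_iss = 36_000     # assumed relay-to-ISS distance, km

one_way = (ground_to_geo + geo_to_iss) / C
print(f"One way: {one_way:.2f} s")         # ~0.25 s
print(f"Round trip: {2 * one_way:.2f} s")  # ~0.49 s
```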

Secondly, there is a processing time, since the information travels through telecommunications equipment. This equipment cannot process the information at the speed of light. Sometimes the information is stored temporarily to accommodate the processor speed. It’s like when I have to wait in line at a counter. There’s the time the employee at the counter takes to do their job, plus the wait time due to all the people in line in front of me. This time can quickly add up.
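
The counter analogy corresponds to the classic single-server queue. As an illustration only, here is the textbook M/M/1 sojourn-time formula with made-up rates (these are not actual ISS link figures):

```python
def mm1_sojourn_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system (waiting + service) for an M/M/1 queue,
    in seconds, for rates given in packets per second."""
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable: arrivals outpace service.")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical router: serving a packet takes 0.1 ms (10,000 packets/s),
# but with 9,000 packets/s arriving, the average delay grows to 1 ms,
# ten times the bare processing time: the line in front of the counter.
print(mm1_sojourn_time(arrival_rate=9_000, service_rate=10_000))  # 0.001 s
```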

We can exchange any kind of information with the ISS: voice and image, of course, as well as telemetry data, the information a spacecraft sends to Earth to report on its state of health. This includes the station’s position, the data from the experiments carried out on board, etc.

 

What are the main difficulties space telecommunications systems face?

LF: The major difficulty is that we must communicate with objects that are very far away and have limited electrical transmission power. We record these constraints in a link budget, which involves several phenomena. The first is that the farther away we communicate, the more energy is lost: with distance, the energy disperses like a spray. The second phenomenon is that the quality of communication depends on the amount of energy received at the destination. We ask: out of one million bits transmitted, how many are false when they arrive? Finally, the last point is the data rate the communication can support, which also depends on the amount of energy invested in the communication. We often adjust the data rate to obtain a certain level of quality. It all depends on the amount of energy available for transmission. This is limited aboard the ISS, since it is powered by solar panels and sometimes travels through the Earth’s shadow. The relay satellites face the same constraints.
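
The first phenomenon, energy dispersing with distance, is captured by the standard free-space path loss formula. A minimal sketch with assumed S-band values (the antenna gains and frequency are illustrative, not actual TDRS or EDRS figures):

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (standard formula, d in km, f in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

eirp_dbw = 50.0     # assumed transmit power plus antenna gain, dBW
rx_gain_db = 30.0   # assumed receive antenna gain, dB

loss = fspl_db(36_000, 2.0)  # ~189.6 dB over the geostationary distance
received_dbw = eirp_dbw + rx_gain_db - loss
print(f"Path loss: {loss:.1f} dB, received power: {received_dbw:.1f} dBW")
```

Each doubling of the distance adds 6 dB of loss, and the received power in turn bounds the achievable bit error rate and data rate, which is exactly the trade-off described above.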

 

Is there a risk of interference when the information is travelling through space?

LF: Yes and no, because radio frequency telecommunications are highly regulated. The right to transmit is tied to a maximum frequency and power. It is also regulated in space: we cannot “spill over” onto another nearby antenna. For space communications, there are tables that define the maximum amount of energy that can be sent outside the main direction of communication. Below this maximum level, the energy that reaches a nearby antenna is of course interference, but it will not prevent that antenna from functioning properly.
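
The “tables” mentioned here are regulatory masks on off-axis emissions. As a hedged illustration, one commonly cited ITU-style reference envelope limits off-axis antenna gain to 29 − 25 log10(θ) dBi; the exact mask depends on the band and the regulation in force:

```python
import math

def offaxis_gain_envelope_dbi(theta_deg: float) -> float:
    """An ITU-style off-axis gain envelope, 29 - 25*log10(theta) dBi.
    Illustrative only: actual regulatory masks vary by band and service."""
    return 29.0 - 25.0 * math.log10(theta_deg)

# Energy radiated 5 degrees away from the main beam must stay below:
print(f"{offaxis_gain_envelope_dbi(5.0):.1f} dBi")  # ~11.5 dBi
```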

 

What are the other applications of communications satellites?

LF: They are used for Internet access, telephony, video telephony, the Internet of Things… But what is interesting is what they are not used for: GPS navigation and weather observation, for example. In fact, space missions are traditionally divided into four components: the telecommunications we are discussing here, navigation and positioning, observation, and deep-space exploration like the Voyager probes. Finally, what is fascinating is that a field as specialized as space branches into an almost infinite number of even more specialized subfields.

 


OpenAirInterface: An open platform for establishing the 5G system of the future

In this article, we continue our exploration of the Télécom & Société numérique Carnot institute technological platforms. OpenAirInterface is the platform created by EURECOM to support mobile telecommunication systems such as 4G and 5G. Its goal: to develop network access solutions, for both the radio access network and the core network. Its service is based on a software suite developed as open source.

 

The OpenAirInterface platform offers a 4G system built on a set of software programs. These programs can each be tested and modified individually by the user companies, independently of the other programs. The goal is to establish the new features of what will become the 5G network. To find out more, we talked with Christian Bonnet, a communications systems researcher at EURECOM.

 

What is OpenAirInterface?

Christian Bonnet: This name covers two aspects. The first is the software implementation of a 4G-5G system. This involves software components that run in a mobile terminal, those that implement the radio transmissions, and those that make up the core network.

The second part of OpenAirInterface is an “endowment fund” created by EURECOM at the end of 2014, whose aim is to lead an open and global software alliance, the OpenAirInterface Software Alliance (OSA).

 

How does this software suite work?

CB: The aim is to implement the software components required for a complete 4G system. This involves the modem of a mobile terminal, the software for radio relay stations, as well as the software for the specific routers used in a core network. We therefore deal with all the processes involved in the radio layer of the communication protocols (modulation, coding, etc.). The software runs on the Intel x86 processors found in PCs and computer clusters, which makes it compatible with cloud deployments. To install it, you need a radio card connected to a PC, which serves as the terminal, and a second PC, which serves as a relay station.
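
To make the radio layer (modulation, coding) concrete, here is a conceptual sketch, not OpenAirInterface code: a toy repetition code followed by QPSK symbol mapping. Real LTE uses turbo coding and a full OFDM chain, so this only illustrates the kind of processing involved:

```python
import numpy as np

def repetition_encode(bits: np.ndarray, factor: int = 3) -> np.ndarray:
    """Toy channel code: repeat each bit so the receiver can majority-vote."""
    return np.repeat(bits, factor)

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map bit pairs to unit-energy QPSK symbols (Gray mapping)."""
    if len(bits) % 2:
        bits = np.append(bits, 0)       # pad to an even number of bits
    i = 1 - 2 * bits[0::2]              # in-phase: bit 0 -> +1, bit 1 -> -1
    q = 1 - 2 * bits[1::2]              # quadrature component
    return (i + 1j * q) / np.sqrt(2)

bits = np.random.randint(0, 2, 16)
symbols = qpsk_modulate(repetition_encode(bits))
print(symbols[:4])  # first four complex baseband samples
```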

Next, depending on what we need to do, we can use only a part of the software implementation. For example, we can take commercial mobile terminals and attach them to a network composed of an OpenAirInterface relay and a commercial network core. Any combination is possible. We have therefore established a complete 4G network chain which, through all of these software programs, can evolve towards 5G.

 


 

Who contributes to OpenAirInterface?

CB: Since the Alliance was established, we have had several types of contributors. The primary contributor to date has been EURECOM, whose teams developed the initial versions of all the software programs. These teams include research professors, post-doctoral researchers and PhD students, for whom the platform provides an experimental environment for their research. In addition, through the software Alliance, we have acquired new kinds of contributors: industrial stakeholders and research laboratories located throughout the world. We have expanded our base, and this openness enables us to receive contributions from both the academic and industrial worlds. (Editor’s note: Orange, TCL and Ercom are strategic OpenAirInterface partners, but the Alliance also includes many associate members, such as Université Pierre et Marie Curie (UPMC), IRT Bcom, INRIA and, of course, IMT. The full list is available here.)

 

What does the Carnot Label represent for your activities?

CB: The Carnot Label was significant in our relationship with the Beijing University of Posts and Telecommunications (BUPT) in China, a university specializing in telecommunications. BUPT asked us for a quality label that would demonstrate the recognition of our expertise. The Carnot Label was presented and recognized by the foreign university. This label demonstrates the commitment of OpenAirInterface developments to the industrial world, while also representing a seal of quality recognized far beyond the borders of France and Europe.

 

Why do companies and industrial stakeholders contact OpenAirInterface?

CB: To develop innovation projects, industrial stakeholders need advances in scientific research. They come to see us because they are aware of our academic excellence, and they also know that we speak the same language. It’s in our DNA! Since its very beginning, EURECOM has embodied the confluence of industry and research; we speak both languages. We have developed our own platforms and have been confronted with the same issues industrial stakeholders face on a daily basis. We are therefore positioned as a natural intermediary between these two worlds, and we listen attentively to the innovation projects they present.

 

Why did you choose to develop your software suite as open source?

CB: It is a well-known model that is beginning to spread. It facilitates access to knowledge and to contributions. The software is covered by open source licenses that protect contributors and enable wider dissemination. This acts as a driving force and an accelerator for development and testing, since each software component must be tested. The more widely the software is deployed around the world, the easier it is for everyone to use: this multiplies the number of software tests, and therefore the amount of user feedback for improving the existing versions. The entire community benefits. This is a very important point, because even in industry, many components are starting to be developed using this model.

 

In addition to this approach, what makes OpenAirInterface unique?

CB: OpenAirInterface has brought innovation to open source software licensing. Many types of open source licenses exist. It is a vast realm, and the industrial world is bound to large patent portfolios. The context is as follows: on the one hand, there are our partners with industrial structures that rely on revenue from patents and, on the other hand, there is a community that wants free access to software for development purposes. How can this apparent contradiction be resolved?

We have introduced a specific license that protects the software for non-commercial operations (everything related to research, innovation and testing), as with classic open source software. For commercial operations, we have established a patent declaration system. This means that if industrial stakeholders implement their own patented components, they need only declare this, and for commercial operations people will then contact the rights holders to negotiate. These conditions are known as FRAND (fair, reasonable and non-discriminatory) terms, and reflect the practices industrial players in the field follow with standardization organizations such as 3GPP. In any case, this procedure has been well accepted, which explains why Orange and Nokia (formerly Alcatel-Lucent Bell Labs), convinced of the benefits of this type of software license, feature among the Alliance’s strategic partners.

 

What is the next development phase for OpenAirInterface?

CB: Several areas of development exist. The projects proposed as part of the European H2020 program, for which we are awaiting the results, will allow us to achieve scientific advances that will benefit the software suite. The Alliance has also defined major areas for development through joint projects, each led by both an industrial partner and an academic partner. This type of structure enables us to bring together volunteers from around the world to work on one of the steps towards achieving 5G.

 

 

[divider style=”normal” top=”20″ bottom=”20″]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006. 

Having first received the Carnot label in 2006, the Télécom & Société Numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, EURECOM, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]