
Arago, technology platform in optics for the industrial sector

Arago is a technology platform specializing in applied optics, based at IMT Atlantique’s Brest campus. It provides scientific services and a technical environment for manufacturers in the optics and vision sectors. It has unique expertise in the fields of liquid crystals, micro-optics and health.

 

Are you in the industrial sector, or a small business wanting to experiment with a disruptive innovation? Perhaps you are looking for innovative technological solutions, but don’t have the time or the technical means to develop them? The solution might be to try your luck with a technology platform.

Arago can help you turn your new ideas into a reality. Designed to foster innovation and technology transfer in research, it is home to highly skilled scientists and technological facilities. Nexter, Surys and Valéo are among the companies that have already placed their trust in the platform, a component of the Télécom & Société numérique Carnot institute specialized in the field of optics. Arago has provided them with access to a variety of high-end technology solutions for design, modeling, the creation of micro-optics, electro-optic functions based on liquid crystals, and composite materials for vision and protection. The fields of application are varied, ranging from protective and corrective glasses to holographic marking. It all involves a great deal of technical expertise, which we discussed step by step with Jean-Louis de Bougrenet, researcher at IMT Atlantique and creator of the Arago platform.

 

Health and impact measurement of new technologies

Health is a field which benefits directly from Arago’s potential. The platform can count on the 3D Fovéa Health Interest Group, which includes IMT Atlantique, the University Hospital of Brest and INSERM, and which led to the creation of the spinoff Orthoptica (the French leader in digital orthoptic tools). Thanks to Orthoptica, the platform was able to set up Binoculus, an orthoptic platform.

Binoculus operates as a digital platform and aims to replace the existing tools used by orthoptists, which are not very ergonomic. The technology consists solely of a computer, a pair of 3D glasses and a video projector. It makes the technology more widely available by reducing the duration of tests. The glasses are made with shutter lenses, allowing the doctor to block each eye in synchronization with the material being projected. This orthoptic tool is used to evaluate the different degrees of binocular vision. It aims to help adolescents who have difficulties with fixation or concentration.

Acting for health now is worthwhile, but anticipating the needs of the future is even better. In this respect, the platform plays an important role for ANSES[1] in evaluating the risks involved in deploying new immersive technologies, such as the Oculus Rift and HTC Vive virtual reality headsets. “When the visual system is involved, an impact study with scientific and clinical criteria is essential”, explains Jean-Louis de Bougrenet. These evaluations are based on in-depth testing carried out on clinical samples, in liaison with hospitals such as Necker, HEGP (Georges Pompidou) and the University Hospital of Brest.

 

Applications in diffractive micro-optics and liquid crystals

At the same time, Arago has the expertise of researchers with several decades of experience in liquid crystal engineering (Pierre-Gilles de Gennes laboratory). “These materials present significant electro-optical effects, enabling us to modulate the light in different ways, at very low voltage”, explains Jean-Louis de Bougrenet.

The researchers have used liquid crystals for many industrial purposes (protection goggles, spectral filters, etc.). In fact, liquid crystals are already part of our immediate surroundings without us even knowing: they are used in flat screens, 3D and augmented reality goggles, and as a camouflage technique (Smart Skin). To support these applications, Arago has manufacturing and testing facilities that are unique in France, including over 150 m² of cleanrooms.

Other components are also omnipresent in our daily lives, including in our smartphones: diffractive micro-optics. One of their specific features is that they exist in different sizes. Arago has all the tools necessary to design and produce these optics at different scales, both individually and collectively, with easily industrialized nano-imprint duplication processes. “We use micro-optics in many fields, for example manufacturing security holograms, biometric recognition, quality control and the automobile sector”, explains Jean-Louis de Bougrenet. Researchers recently implemented a two-photon photopolymerization technique, which allows the direct fabrication of fully three-dimensional nanostructures.

 

European ambitions

Arago is also involved in many other projects. Since 2016, it has hosted an IRT BCom platform. This platform is dedicated to creating very high-speed optical transmission systems in free space, for wireless connections for virtual reality headsets in environments such as the Cave Automatic Virtual Environment (CAVE).

Arago is firmly established in Brittany and has recently finalized a technology partnership with the INL (International Iberian Nanotechnology Laboratory), a European platform with a status comparable to CERN’s. The partnership involves pooling resources, privileged access to technology, and the creation of joint European projects. The INL is unique in the European field of nanoscience and nanotechnology, and Arago contributes complementary material components that had been lacking. For Jean-Louis de Bougrenet, “in the near future, Arago will become part of the European technology cluster addressing new industrial actors, by expanding our offering and integrating European programs more easily with a sufficient critical mass. This partnership will enable us to develop our emerging activity in the field of intelligent sensors for the environment and biology”.

 

 [1] ANSES: French Agency for Food, Environmental and Occupational Health & Safety

Some examples of products developed through Arago

 

[one_half][box]

Night-vision driving glasses

These glasses are designed for night driving and prevent the driver from being dazzled by oncoming headlights. They were designed in close collaboration with Valéo.

These glasses combine two effects: absorption and reflection. This is made possible by a blend of liquid crystals and absorbent material, which creates a variable density. The technology is the result of several years of research. Several patents have been filed and licensed to the industrial partner.

[/box][/one_half]

[one_half_last][box]

Holographic marking to fight counterfeiting

This marking was entirely designed by Arago. Micro-optics are integrated into banknotes, for example, to fight counterfeiting. They come in the form of holographic films or strips. Their manufacture uses a copying system which reduces production costs, making mass production possible. The work was carried out in close collaboration with the industrial partner Surys. Patents have been filed and transferred. The project also led to a copying machine being built for the industrial partner, which is currently using it.

[/box][/one_half_last]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


The television of the future: secondary screens to enrich the experience?

The television of the future is being invented in the Eurecom laboratories, at Sophia-Antipolis. This is not about creating technologies for even bigger, sharper screens, but rather reinventing the way TV is used. In a French Unique Interministerial Fund (FUI) project named NexGenTV, researchers, broadcasters and developers have joined forces to achieve this goal. The project was launched in 2015 for a duration of three years, and is already showing results. Raphaël Troncy is a Eurecom researcher in data sciences who is involved in the project. He presents the progress that has been made to date and the potential for improvement, primarily based on enriching content with a secondary screen.

 

With the NexGenTV project, you are trying to reinvent the way we use TV. Where did this idea come from?

Raphaël Troncy: This is a fairly widespread movement in the audiovisual sector. TV channels are realizing that people are using their television screens less and less to watch their programs. They watch them through other modes, such as replay or special mobile phone apps, and do other things at the same time. TV channels are pushing for innovative applications. The problem is that nobody really knows what to do, because nobody knows what users want. At Eurecom, we worked on an initial project financed by the European FP7 program, called LinkedTV. We worked with users to find out what they want, and what the channels want. Then, with NexGenTV, we focused on applications for a second screen, like a tablet, offering enriched content to viewers while allowing TV channels to keep control of the editorial content.

 

Although the project won’t be completed until next year, have you already developed promising applications?

RT: Yes, our technology has been used since last summer by the tablet app for the beIN Sports channels. The technology allows users to automatically select the highlights and access additional content for Ligue 1 football matches. Users can access events such as goals or fouls, they can see who was touching the ball at a given moment, or statistics on each player, all in real time. We are working towards offering information such as replays of similar goals by other players in the championship, or goals in previous matches by the player who has just scored.

 

In this example, what is your contribution as a researcher?

RT: The technology we have developed opens up several possibilities. Firstly, it collects and formats the data sent by service providers. For example, the number of kilometers a player covers, or images of the goals. This is challenging, because live conditions mean that this needs to happen within the few seconds between the real event and the moment it is broadcast to the user. Secondly, the technology performs semantic analysis, extracting data from players’ sites, official FIFA or French Football Federation sites, or Wikipedia, to provide a condensed version to the TV viewer.

 

 

Do you also perform image analysis, for example to enable viewers to watch similar goals?

RT: We did this for the first prototypes, but we realized that the data provided were already rich enough. However, we do analyze images for another use: the many political debates taking place during the current election period. There is not yet an application for this; we are developing it. But we practiced on the debates for the two primary elections, and we are continuing with the current and upcoming debates for the presidential and legislative elections. We would like to be able to put an extract of a candidate’s previous speech on the tablet while they are talking about a particular subject, either because what they are saying is complementary, contradictory, or linked to a proposition that is relevant to their program. We also want to be able to isolate the “best moments” based on parallel activity on Twitter, or on a semantic analysis of the candidates’ speeches, and offer a condensed summary.

 

What is the value of image analysis in this project?

RT: For replays, image analysis allows us to better segment a program and offer the viewer a frame of reference. But it also provides access to specific information. For example, during the last debate of the right-wing primary election, we measured the candidates’ on-screen presence time using facial recognition based on deep learning. We wanted to see whether there was a difference in the way the candidates were treated, or whether there was an equal balance, as is the case with speaking time, which is controlled by the CSA (French institution for media regulation). We realized that the broadcasters’ choices were more heavily weighted towards Nicolas Sarkozy than towards the other candidates. This can be explained by the fact that he was strongly challenged by the other candidates, so the cameras focused on him even when he wasn’t speaking. But it also demonstrates how an image recognition application can give viewers keys to interpreting programs.
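To make the idea concrete, here is a minimal sketch (my own illustration, not the NexGenTV pipeline) of how per-frame face recognition results could be accumulated into on-screen presence time. The detect_faces function is a placeholder for whatever deep-learning face recognizer is assumed to return the names of known faces found in a frame.

```python
# Illustrative sketch: accumulate each person's on-screen time from per-frame
# face recognition results. detect_faces() is a stand-in for a real recognizer.
from collections import Counter

def on_screen_time(frames, detect_faces, fps=25.0):
    """frames: iterable of video frames; returns seconds of presence per person."""
    counts = Counter()
    for frame in frames:
        for person in set(detect_faces(frame)):   # count each person once per frame
            counts[person] += 1
    return {person: n / fps for person, n in counts.items()}

# Toy usage with a stubbed recognizer (three frames at 25 fps):
fake_frames = [1, 2, 3]
fake_detector = lambda f: ["Candidate A"] if f < 3 else ["Candidate A", "Candidate B"]
print(on_screen_time(fake_frames, fake_detector))   # {'Candidate A': 0.12, 'Candidate B': 0.04}
```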

 

The goal of your technology is to give even more context, and inform the user?

RT: Not necessarily. We also have an example of use with an educational program broadcast on France Télévisions: in this case, we wanted to offer viewers quizzes as additional educational material. We are also working on adapting advertising to users for replay viewing. The idea is to make the most of the potential of secondary screens to improve the user’s experience.

 

[divider style=”normal” top=”20″ bottom=”20″]

NexGenTV: a consortium for inventing the new TV

The NexGenTV project brings together researchers from Eurecom and the Irisa joint research unit (co-supervised by IMT). The consortium also includes companies from the audiovisual sector: Wildmoka is in charge of creating applications for secondary screens, along with Envivio (taken over by Ericsson) and Avisto. The partners work with an associated club of broadcasters, which provides the audiovisual content required for creating the applications. The club includes France Télévisions, TF1, Canal+, etc.

[divider style=”normal” top=”20″ bottom=”20″]


Cybersecurity: Detect and Conquer

Researchers from Université Paris-Saclay member institutions are developing algorithms and visual tools to help detect and counteract cybersecurity failures.

 

You can’t fight what you can’t see. The protection of computer systems is a growing concern, with an increasing number of smart devices gathering our private data. Computer security has to cover hardware as well as software vulnerabilities, including network access, and it needs to offer efficient countermeasures. But the first step to cybersecurity is to detect and identify intrusions and cyberattacks.

Typical attacks degrade the availability of a service (denial of service), attempt to steal confidential information, or compromise the service’s behavior by modifying the flow of events produced during an execution (adding, removing or modifying events). They are difficult to detect in a highly distributed environment (like the cloud or e-commerce applications), where the order of the observed events is partially unknown.

Researchers from CentraleSupélec designed a new approach to tackle this issue. It combines an automaton modeling the correct behavior of a distributed application with a list of temporal properties that the computation must comply with in any execution (“is always or never followed by”, “always precedes”, etc.). The automaton generalizes the model from a finite (and therefore incomplete) set of behaviors, while avoiding the introduction of incorrect behaviors into the model during the learning phase. By combining these two types of methods (automaton and temporal properties), the team managed to lower the rate of false positives (down to 2% in certain cases) and the mean time needed to detect an intrusion (less than one second).
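As a rough illustration of the idea (a simplified sketch of my own, not the CentraleSupélec implementation described in Totel et al.), an execution trace can be flagged as anomalous either when the learned automaton rejects it or when it violates a temporal property such as “a request is always followed by a reply”:

```python
# Hypothetical sketch: flag an execution trace as anomalous if it is rejected by
# an automaton of correct behavior or if it violates a simple temporal property.

def accepted_by_automaton(trace, transitions, start, accepting):
    """Replay the trace on a deterministic automaton; unknown transitions are anomalies."""
    state = start
    for event in trace:
        if (state, event) not in transitions:
            return False          # behavior not covered by the learned model
        state = transitions[(state, event)]
    return state in accepting

def always_followed_by(trace, a, b):
    """Temporal property: every occurrence of event a is eventually followed by event b."""
    pending = False
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

# Toy model: a request/reply protocol.
transitions = {("idle", "request"): "busy", ("busy", "reply"): "idle"}
trace = ["request", "reply", "request"]          # last request never answered

is_anomaly = (not accepted_by_automaton(trace, transitions, "idle", {"idle"})
              or not always_followed_by(trace, "request", "reply"))
print(is_anomaly)  # True: the trace violates the temporal property
```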

Another team from the same UPSaclay member chose a different approach. Researchers designed an intuitive visualization tool that helps to easily and automatically manage security alerts. Cybersecurity mechanisms raise large quantities of alerts, many of them false positives. VEGAS, for “Visualizing, Exploring and Grouping AlertS”, is a customizable filtering system. It offers the front-line security operator (in charge of dispatching alerts to security analysts) a simple 2D representation of the original dataset of alerts: alerts that are close in the original dataset remain close in the computed representation, while distant alerts stay distant. The operator can then select alerts that visually appear to belong to the same group, i.e. similar alerts, and generate a new rule to be inserted into the dispatching filter. In this way, the number of alerts the front-line security operator receives is reduced, and security analysts only get the alerts they need to investigate further.
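The projection idea can be illustrated with a standard dimensionality-reduction technique. The snippet below uses multidimensional scaling as an illustrative stand-in (the actual VEGAS method is described in Crémilleux et al.), and the alert features are invented for the example:

```python
# Illustrative stand-in for the idea behind VEGAS (not the published algorithm):
# project feature vectors describing alerts onto 2D so that similar alerts land
# close together, letting an operator select a cluster to derive a filtering rule.
import numpy as np
from sklearn.manifold import MDS

# Hypothetical alert features: [source port, destination port, packet count, severity]
alerts = np.array([
    [443, 51000, 120, 2],
    [443, 51010, 118, 2],
    [22,  40000,   5, 5],
    [22,  40002,   6, 5],
])

# Multidimensional scaling preserves pairwise distances as well as possible.
coords_2d = MDS(n_components=2, random_state=0).fit_transform(alerts)
print(coords_2d)  # similar alerts end up near each other in the 2D plot
```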

Those analysts could then use another visualization tool developed by a team from CNRS and Télécom SudParis to calculate the impact of cyber attacks and security countermeasures. Here, systems are given coordinates in multiple spatial, temporal and context dimensions. For instance, in order to access a web-server (resource) of a given organization, an external user (user account) connects remotely (spatial condition) to the system by providing their login and password (channel) at a given date (temporal condition).

In this geometrical model, an attack that compromises some resources using a given channel is represented as a surface (a square or rectangle). If it also compromises some user accounts, it becomes a parallelepiped. Conversely, if we only know which resources are compromised, the attack affects only one axis of the representation and is represented as a line.

 

“Cybersecurity mechanisms raise large quantities of alerts.”

Researchers then geometrically determine the portion of the service that is under attack and the portion of the attack controlled by a given security measure. They can automatically calculate the residual risk (the percentage of the attack that is left untreated by any countermeasure) and the potential collateral damage (the percentage of the service that is not under attack but is affected by a given countermeasure). Such figures allow security administrators to compare the impact of multiple attacks and/or countermeasures in complex attack scenarios. Administrators are able to measure the size of cyber events, identify vulnerable elements and quantify the consequences of attacks and measures.
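As a simplified numerical illustration (the axis-aligned boxes below are an assumption of this sketch, not the exact polytope model of Gonzalez-Granadillo et al.), the residual risk and collateral damage can be computed from the volumes of the attack, the countermeasure and their intersection:

```python
# Toy version of the geometric idea: residual risk is the share of the attack
# volume not covered by any countermeasure; collateral damage is the share of
# the service volume affected by a countermeasure but not by the attack.

def volume(box):
    """Volume of an axis-aligned box given as a list of (low, high) intervals."""
    v = 1.0
    for low, high in box:
        v *= max(0.0, high - low)
    return v

def intersection(box_a, box_b):
    """Axis-wise intersection of two boxes defined on the same dimensions."""
    return [(max(a[0], b[0]), min(a[1], b[1])) for a, b in zip(box_a, box_b)]

# Two normalized dimensions: (resources, user accounts).
attack         = [(0.0, 0.6), (0.0, 0.5)]   # compromises 60% of resources, 50% of accounts
countermeasure = [(0.2, 0.8), (0.0, 1.0)]   # e.g. isolating a range of resources
service        = [(0.0, 1.0), (0.0, 1.0)]   # the whole service

covered = volume(intersection(attack, countermeasure))
residual_risk = 1.0 - covered / volume(attack)                      # attack left untreated
collateral = (volume(countermeasure) - covered) / volume(service)   # healthy service affected
print(f"residual risk: {residual_risk:.0%}, collateral damage: {collateral:.0%}")
```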

But what if the attack takes place directly in the hardware? When outsourcing the fabrication of their circuits, companies cannot be sure that no malicious circuit, such as a hardware trojan, has been introduced. Researchers from Télécom ParisTech proposed a metric to measure the impact of the size and location of a trojan: using this metric, the probability of detecting a hardware trojan bigger than 1% of the original circuit is greater than 99% (with a false negative rate of 0.017%).

More recently, the same team designed a sensor circuit to detect “electromagnetic injection”, an intentional fault injection utilized to steal secret information hidden inside integrated circuits. This sensor circuit has a high fault detection coverage and a small hardware overhead. So maybe you can fight what you can’t see, or at least you can try to. You just have to be prepared!

 

References
Gonzalez-Granadillo et al. A polytope-based approach to measure the impact of events against critical infrastructures. Journal of Computer and System Sciences, Volume 83, Issue 1, February 2017, Pages 3-21
Crémilleux et al. VEGAS: Visualizing, exploring and grouping alerts. NOMS 2016 – 2016 IEEE/IFIP Network Operations and Management Symposium, Istanbul, 2016, pp. 1097-1100
Totel et al. Inferring a Distributed Application Behavior Model for Anomaly Based Intrusion Detection. 2016 12th European Dependable Computing Conference (EDCC), Gothenburg, 2016, pp. 53-64
Miura et al. PLL to the rescue: A novel EM fault countermeasure. 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, 2016

 

[divider style=”normal” top=”5″ bottom=”5″]

The original version of this article was published in l’Édition de l’Université Paris-Saclay.

 


Supply chain management: Tools for responding to unforeseen events

The inauguration of IOMEGA took place on March 14, 2017. This demonstration platform is designed to accelerate the dissemination of Mines Albi researchers’ contributions in the industrial world, particularly their expertise in supply chain management. Matthieu Lauras, an industrial engineering researcher, is already working on tools to help businesses and humanitarian agencies manage costs and respond to unforeseen events.

 

First there were Fordism and Toyotism, and then came Supply Chain Management (SCM). So much for rhyming. During the 1990s, businesses were marked by the globalization of trade and offshoring. They began working in networks and relying on constantly changing information and communication technologies. It was clear that the industrial organization of the past would no longer work. Researchers named this revolution Supply Chain Management.

Twenty-five years later SCM has come a long way and has become a discipline in its own right. It aims to manage all of the various flows (materials, information, cash) which are vital to a business or a network of businesses.

Supply Chain Management today

Supply chain management considers the entire network: from suppliers to final users of a product (or service). Matthieu Lauras, an Industrial Engineering researcher at Mines Albi, gives an example. “For the yogurt supply chain, there are the suppliers of raw materials (milk, sugar, flour…) then purchasing of containers to make the cups and boxes, etc.” Supply chain management coordinates all these various flows in order to manufacture products on schedule and deliver them to the right place, in keeping with the planned budget.

SCM concerns all sectors of activity from the manufacturing industry to services. It has become essential to a business’s performance and management. But there is room for improvement. Up until now, the tools created have been devoted to cost control. “The competitiveness problem that businesses are currently facing is no longer linked to this element. What now interests them is their ability to detect disruptions and react to them. That’s why our researchers are focusing on supply chain agility and resilience,” explains Matthieu Lauras. At Mines Albi, researchers are working on improving SCM tools using a blend of IT and logistics skills.

Applied research to better handle unforeseen events

A number of elements can disrupt the proper functioning of supply chains. On one hand, markets are constantly changing, making it difficult to estimate production volumes. On the other hand, globalization has made transport subject to greater variations. “The strength of a business lies in its ability to handle disruptions,” notes Matthieu Lauras. This is why researchers are developing new tools which are better suited to these networks. “We are working on detecting differences between what was planned and what is really happening. We’re also developing decision-making support resources in order to enhance decision-makers’ ability to adapt. This helps them take corrective action in order to react quickly and effectively to unforeseen events,” explains the researcher.

As a first step, researchers are concentrating on the resistance and resilience of the network. They have set up research designs based on simulations of disruptions in order to evaluate the chain’s response to these events. Modeling makes it possible to test different scenarios and evaluate the impact of a disruption according to its magnitude and location in the supply chain. “We are working on a concrete case as part of the Agile Supply Chain Chair with Pierre Fabre. For example, this involves evaluating whether a purchaser’s network of suppliers could cope with significant variations in demand. It is also important to determine whether the purchaser could maintain its activity in acceptable conditions in the event of a sudden default by one of these partners,” explains Matthieu Lauras.

New technology for real-time monitoring of supply chains

Another area of research is real-time management. “We use connected devices because they allow us to obtain information at any time about the entire network. But this information arrives in a jumble… that’s why we are working on tools based on artificial intelligence to help ‘digest’ it and pass on only what is necessary to the decision-maker,” says the researcher.

In addition, these tools are tested through collaborations with businesses and final users. “Using past data, we observe the level of performance of traditional methods in a disrupted situation. Performance is measured in terms of fill percentage, cycle time (time it takes between a certain step and delivery for example), etc. Then we simulate the performance we would obtain using our new tools. This allows us to measure the differences and demonstrate the positive impact,” explains Matthieu Lauras.
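The kind of comparison described above can be sketched with a toy discrete-time simulation. The policies, numbers and disruption below are invented for illustration and are not the Mines Albi models:

```python
# Toy illustration (my own simplification): compare the fill rate of a fixed
# reorder policy and a reactive policy when a supplier disruption halves
# deliveries for a few weeks.
import random

def simulate(reactive, periods=52, seed=1):
    random.seed(seed)
    stock, filled, demand_total = 40, 0, 0
    for week in range(periods):
        demand = random.randint(15, 25)
        demand_total += demand
        served = min(stock, demand)
        filled += served
        stock -= served
        order = 20
        if reactive and stock < 20:       # reactive policy raises orders when stock drops
            order = 35
        delivered = order // 2 if 10 <= week < 14 else order   # supplier disruption
        stock += delivered
    return filled / demand_total          # fill rate (share of demand actually served)

print(f"fixed policy fill rate:    {simulate(False):.0%}")
print(f"reactive policy fill rate: {simulate(True):.0%}")
```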

Industry partners then provide the opportunity to conduct field experiments. If the results are confirmed, partners like Iterop carry out the development needed to turn the tools into commercial products, which then serve a wider range of users. Founded in 2013 by two former Mines Albi PhD students, the start-up Interopsys develops and markets software solutions that simplify collaboration between a company’s personnel and its information system.

A concrete example: The Red Cross

Mines Albi researchers are working on determining strategic locations around the world for the Red Cross to pre-position supplies, thus enabling the organization to respond to natural disasters more quickly. Unlike businesses, humanitarian agencies do not strive to make a profit but rather to control costs. This gives them a greater scope of action and allows them to take action in a greater number of operational areas for the same cost.

Matthieu Lauras explains: “Our research has helped reorganize the network of warehouses used by this humanitarian agency. When a crisis occurs, it must be able to make the best choices of suppliers and modes of transport. However, it does not currently have a way to measure the pros and cons of these different options. For example, it focuses on its list of international suppliers but does not consider local suppliers. So we provide decision-making support tools for planning and taking action in the short term, in order to make decisions in an emergency situation.”

But is it possible to transpose techniques from one sector to another? Naturally, researchers have identified this possibility, which is referred to as cross-learning. Supply chains in the humanitarian sector already function with agility, while businesses control costs. “We take the best practices from one sector and use them in another. The difficulty lies in successfully adapting them to very different environments,” explains Matthieu Lauras. In both cases, this applied research has proven to be successful and will only continue to expand in scope. The arrival of the IOMEGA platform should help researchers perform practical tests and reduce the time required for implementation.

 

[box type=”shadow” align=”” class=”” width=””]

IOMEGA: Mines Albi’s industrial engineering platform

This platform, which was inaugurated on March 14, 2017, makes it possible for Mines Albi to demonstrate tools for product design and configuration as well as for designing information systems for crisis management, risk management for projects and supply chain management.

Most importantly it offers decision-making support tools for complex and highly collaborative environments. For this, the platform benefits from experiment kits for connected devices and computer hardware with an autonomous network. This technology makes it possible to set up experiments under the right conditions. An audiovisual system (video display wall, touchscreen…) is also used for the demonstrations. This helps potential users immerse themselves in configurations that mimic real-life situations.

IOMEGA was designed to provide two spaces for scenario configuration on which two teams may work simultaneously. One uses conventional tools while the other tests those from the laboratory.

A number of projects have already been launched involving this platform, including the Agile Supply Chain Chair in partnership with Pierre Fabre and the AGIRE joint laboratory, dedicated to the resilience of businesses, in association with AGILEA (a supply chain management consulting firm). Another project is a PhD dissertation on the connected management of flows of urgent products with the French blood bank (EFS). In the long term, IOMEGA should lead to new partnerships for Mines Albi. Most importantly, it strives to accelerate the dissemination of researchers’ contributions to the world of industry and users.

© Mines Albi

[/box]


Researchers, the cyber-ramparts of critical infrastructures

Cyber protection for nuclear power stations or banks cannot be considered in the same way as for other structures. Frédéric Cuppens is a researcher at IMT Atlantique and leader of the Chair on Cybersecurity of Critical Infrastructures. He explains his work in protecting operators whose correct operation is vital for our country. His Chair was officially inaugurated on 11 January 2017, strengthening state-of-the-art research on cyberdefense.

 

The IMT chair you lead addresses the cybersecurity of critical infrastructures. What type of infrastructures are considered to be critical?

Frédéric Cuppens: Infrastructures that allow the country to operate correctly. If they are attacked, the resulting failures could place the population at risk or seriously disrupt services that are essential to citizens. The operators of these infrastructures work in a wide variety of domains, and this diversity is reflected in our Chair’s industrial partners, which include stakeholders in energy generation and distribution (EDF), telecommunications (Orange and Nokia) and defense (Airbus Defence and Space). There are also sectors which are perhaps less obvious at first but just as important, such as banking and logistics, and for these we are working with Société Générale, BNP Paribas and La Poste[1].

Also read on I’MTech: The Cybersecurity of Critical Infrastructures Chair welcomes new partner Société Générale

 

The Chair on Cybersecurity of Critical Infrastructures is relatively recent, but these operators did not wait until then to protect their IT systems. Why are they turning to researchers now?

FC: The difference now is that more and more automatons and sensors in these infrastructures are connected to the internet. This increases the vulnerability of IT systems and the severity of potential consequences. In the past, an attack on these systems could crash internal services or slow production down slightly, but now there is a danger of major failure which could put human lives at risk.

 

How could that happen in concrete terms?

FC: Because automatons are connected to the internet, an attacker could quite conceivably take control of a robot holding an item and tell it to drop it on someone. The risk is even greater if the automaton handles explosive chemical substances. Another example is an attack on a control system: the intruder could see everything taking place and send false information. A combination of the two is the most dangerous: an attacker could take control of whatever they want and, by blocking the controls, make it impossible for staff members to react.

 

How do you explain the vulnerability of these systems?

FC: Systems in traditional infrastructures, like the ones in question, were computerized a long time ago. At the time, they were isolated from an IT point of view. As they weren’t designed to be connected to the internet, their security must now be brought up to date. Today, cameras or automatons can still have vulnerabilities because their primary function is to film and handle objects, not necessarily to resist every possible kind of attack. This is why our role is first and foremost to detect and understand the vulnerabilities of these tools depending on their use cases.

 

Are use cases important for the security of a single system?

FC: Of course, and measuring the impact of an attack according to an IT system’s environment is at the core of our second research focus. We develop adapted measurements to identify the direct or potential consequences of an attack, and these measurements will obviously have different values depending on whether an attacked automaton is on a military boat or in a nuclear power station.

 

In this case, can your work with each partner be reproduced for protecting other similar infrastructures, or is it specific to each case?

FC: There are only a limited number of automaton manufacturers for critical applications: there must be 4 or 5 major suppliers in the world. The use cases do of course affect the impact of an intrusion, but the vulnerabilities remain the same, so part of what we do can be reproduced. On the other hand, we have to be specific with regard to the measurement of impact. Research funding bodies take the same line: projects under the French Investments for the Future program and the European H2020 program strongly encourage us to work on specific use cases. That said, we still sometimes address topics that are not linked to a particular use case, but which are more general.

 

The new topics that the Chair plans to address include a project called Cybercop 3D for visualizing attacks in 3D. It seems a rather unusual concept at first sight.

FC: The idea is to improve monitoring tools, which currently resemble a spreadsheet with different colored lines to help visualize data on the state of the IT system. We could use 3D technology to allow computer engineers to view, in real time, a model of the places where intrusions are taking place, and to improve the visibility of correlations between events. This would also provide a better understanding of attack scenarios, which are currently presented as 2D tree views and quickly become unreadable. 3D technology could improve their readability.

 

The issue at hand is therefore to improve the human experience of responding to attacks. How important is this human factor?

FC: It is vital. We are in fact planning to launch a research topic on this subject by appointing a researcher specializing in qualitative psychology. This will be a cross-cutting topic, but it will above all complement our third focus, which develops decision-making tools to provide the best advice to the people in charge of rolling out countermeasures in the event of an attack. The aim is to see whether, from a psychological point of view, the solution the decision-making tool proposes to humans will be interpreted correctly. This is important because, in this environment, staff are used to managing accidental failures and do not necessarily interpret an incident as a cyberattack. It is therefore necessary to make sure that when the decision-making tool proposes something, it is understood correctly. This is all the more important given that the operators of critical systems do not follow a full-automation rationale: it is still humans who control what happens.

 

[1] In addition to the operators of critical infrastructures mentioned, the Chair’s partners also include Amossys, a company specializing in cybersecurity expertise, as well as institutional partners: the Région Bretagne, the FEDER, the Fondation Télécom and IMT’s schools IMT Atlantique, Télécom ParisTech and Télécom SudParis.

 

 

 


Personal data: How the GDPR is changing the game in Europe

The new European regulation on personal data will become officially applicable in May 2018. The regulation, which complements and strengthens a European directive from 1995, guarantees unprecedented rights for citizens, including the right to be forgotten, the right to data portability, and the right to be informed of security failures in the event of a breach involving personal data… But for these measures to be effective, companies in the data sector will have to follow suit. They have little time to comply with this new legislation, which, for most of them, will require major organizational changes; failure to make these changes will expose them to the risk of heavy sanctions.

 

With very little media coverage, the European Union adopted the new General Data Protection Regulation (GDPR) on April 27, 2016. Yet this massive piece of legislation, featuring 99 articles, includes plenty of issues that should arouse the interest of European citizens. Starting on May 25, 2018, when the regulation becomes officially applicable in the Member States, users of digital services will acquire new rights: the right to be forgotten, in the form of a right to be dereferenced, greater consideration of their consent to the use of their personal data, increased transparency on the use of this data… And the two-year period, from the moment the regulation was adopted to the time of its application, is intended to enable companies to adapt to these new constraints.

However, despite this deferment period, Claire Levallois-Barth, coordinator of the IMT chair Values and policies of personal information (VPIP) assures us that “two years is a very short period”. The legal researcher bases this observation on the work she has carried out among the companies she interviewed. Like many stakeholders in the world of digital technology, they find themselves facing new concepts introduced by the GDPR. Starting in 2018, for example, they must ensure their customers’ right to data portability. Practically speaking, each user of a digital service will have the option of taking his or her personal data to a competitor, and vice versa.


Claire Levallois-Barth, coordinator of the IMT chair Values and policies of personal information (VPIP)

Two years does not seem very long for establishing structures that will enable customers to exercise this right to data portability. Because, although the regulation intends to ensure this possibility, it does not set concrete procedures for accomplishing this: “therefore, it is first necessary to understand what is meant, in practical terms, by a company ensuring its customers’ right to data portability, and then define the changes that must be made, not only in technical terms, but also in organizational terms, including the revision of current procedures and even the creation of new procedures,” explains Claire Levallois-Barth.

The “privacy by design” concept, which is at the very heart of the GDPR and symbolizes this new way of thinking about personal data protection in Europe, is just as demanding for organizations. It requires all of the principles that govern the use of personal data (purpose, proportionality, duration of data storage, transparency…) to be integrated in advance, beginning at the design phase of a product or service. Furthermore, the regulation is now based on the principle of accountability, which implies that the company itself must be able to prove that it respects this legislation by keeping updated proof of its compliance. The design phases for products and services, as well as the procedures for production and use, must therefore be revised in order to establish internal governance procedures for personal data. According to Claire Levallois-Barth, “for the most conscientious companies, the first components of this new governance were presented to the executive committee before the summer of 2016.”

 

Being informed before being ready

While some companies are in a race against time, others are facing problems that are harder to overcome. During the VPIP Chair Day held last November 25th, dedicated to the Internet of things, Yann Padova, the Commissioner specializing in personal data protection at the French Energy Regulatory Commission (CRE), warned that “certain companies do not yet know how to implement the new GDPR regulations.” Not all companies have access to the skills required for targeting the organizational levers that must be established.

For example, the GDPR mentions the requirement, in certain cases, for a company that collects or processes users’ data, to name a Data Protection Officer (DPO). This expert will have the role of advising the data controller—in other words, the company—to ensure that it respects the new European regulation. But depending on the organization of major groups, some SMEs will only play a subcontracting role in data processing: must they also be prepared to name a DPO? The companies are therefore faced with the necessity of quickly responding to many questions, and clear-cut answers do not always exist. And another reality is even more problematic: some companies are not at all informed of the contents of the GDPR.


Yann Padova, CRE Commissioner

Yann Padova points out that before they can be ready, companies must be aware of the challenges. Yet he recognizes that he “does not see many government actions in France that explain the coming regulations.” Joining him to discuss this subject on November 25, lawyer Denise Lebeau-Marianna—in charge of personal data protection matters at the law firm of Baker & McKenzie—confirmed this lack of information, and not only in France. She cited a study on companies’ readiness for the GDPR that was carried out by Dimensional Research and published in September 2016. Out of 821 IT engineers and company directors in the data sector, 31% had heard about the GDPR, but were not familiar with its contents, and 18% had never heard of it.

 

Without sufficient preparation, companies will face risks… and sanctions

For Claire Levallois-Barth, it seems obvious that with all of these limits, not all companies will comply with all aspects of the GDPR by 2018. So, what will happen then? “The GDPR encourages companies to implement protection measures that correspond to the risk level their personal data processing activities present. It is therefore up to companies to quantify and assess this risk. They then must eliminate, or at least reduce the risks in some areas, bearing in mind that the number of data processing operations is in the tens or even hundreds for some companies,” she explains. What will these areas be? That depends on each company, what it offers its users and its ability to adapt within two years.

And if these companies are not able to comply with the regulations in time, they will be subject to potential sanctions. One of the key points of the GDPR is an increase in fines for digital technology stakeholders that do not comply with their obligations, especially regarding user rights. In France, the CNIL could previously impose a maximum penalty of €150,000, before the Law for a Digital Republic increased this amount to €3 million. But the GDPR, a European regulation with direct application, will replace this part of French regulation in May 2018, imposing penalties of up to €20 million or 4% of a company’s total annual worldwide turnover.

The new European Data Protection Board (currently called the G29) will be in charge of overseeing this regulation. This body, which brings together all of the European Union’s data protection authorities (the counterparts of the French CNIL), has just published its first three opinions on regulation issues requiring clarification, including portability and the DPO. This should remove some of the areas of uncertainty surrounding the GDPR, the biggest of which remains the question of its real, long-term effectiveness.

Because, although in theory the regulation proposed by the EU is aimed at better protecting users’ personal data in our digital environment, and at simplifying administrative procedures, many points still seem unclear. “Until the regulation has come into effect and the European Commission has published the implementing acts presenting the regulation, it will be very difficult to tell whether the protection for citizens will truly be reinforced,” Claire Levallois-Barth concludes.

 

 


Scalinx: Electronics, from one world to another

The product of research carried out by its founder, Hussein Fakhoury, at the Télécom ParisTech laboratories (part of the Télécom & Société numérique Carnot institute), Scalinx is destined to shine as a French gem in the field of electronics. By developing a new generation of analog-to-digital converters, this startup is attracting the attention of stakeholders in strategic fields such as defense and space. These components are found in all electronic systems that interface analog and digital functions, and the performance of those systems depends on the quality of the converters they use.

 

“We live in an analog world, whereas machines exist in a digital world,” Hussein Fakhoury explains. According to the entrepreneur, founder of the startup Scalinx, every electronic system must therefore feature a component that can transform analog magnitudes into digital values. “This converter plays a vital role in enabling computers to process information from the real world,” he insists. Why? Because it makes it possible to transform a value that changes continuously over time, like an electrical voltage, into digital data that can be processed by computer systems. And designing this interface is precisely what Hussein Fakhoury’s startup specializes in.

Scalinx develops next-generation analog-to-digital converters. Based on a different architectural approach from that used by its competitors, the components it has developed offer many advantages for applications that require fast digitization. “By using a new electronic design for the structure, we provide a much more compact solution that consumes less energy,” the startup founder explains. However, he points out that the Scalinx interfaces “are not intended to replace the historical architectures in every circumstance, since these historical structures are essential for certain applications.”

Hussein Fakhoury, the founder of Scalinx

These new converters are intended for specific markets in which performance and the efficient use of space are of utmost importance. This is the case in the space electronics, defense and medical imaging sectors. In this last sector, a prime example is ultrasound. While ultrasound technology today lets us see the fetus in a woman’s womb in two dimensions, medical imaging is increasingly moving towards 3D visualization. However, to transition from 2D to 3D, probes with more converters must be used. With the traditional architectures, the heat dissipation would become too great: it would not only damage the probe, but could also inconvenience the patient.

And the obstacles are not only of a technical nature; they are also strategic. The quality of an electronic system depends on this analog/digital interface. Quality is therefore of utmost importance for high-end systems. Currently, however, “the global leaders for high-performance components in this field are American,” Hussein Fakhoury observes. Yet the trade regulations, as well as issues of sovereignty and confidentiality of use can represent a limit for European stakeholders in critical areas like the defense sector.

 

A spin-off from Télécom ParisTech set to conquer Europe

Scalinx therefore wants to become a reference in France and Europe for converters intended for applications that cannot sacrifice energy consumption for the sake of performance. For now, the field appears to be open. “Few companies want to take on this strategic market,” the founder explains. The startup’s ambition is taking shape: it benefited from two consecutive years of support from Bpifrance as a winner of the national i-Lab business start-up contest in 2015 and 2016, and it also received an honor loan from the Fondation Télécom in 2016.

Scalinx’s level of cutting-edge technology in the key area of analog-digital interfaces can be attributed to the fact that its development took place in an environment conducive to state-of-the-art innovation. Hussein Fakhoury is a former Télécom ParisTech researcher (part of the Télécom & Société numérique Carnot institute), and his company is a spin-off that has been carefully nurtured to maturity. “Already in 2004, when I was working for Philips, I thought the subject of converters was promising, and I began my research work in 2008 to improve my technical knowledge of the subject,” he explains.

Then, between 2008 and the creation of Scalinx in 2015, several partnerships were established with industrial stakeholders, which resulted in the next generation of components that the startup is now developing. NXP (the former Philips branch specializing in semiconductors), France Télécom (now Orange) and Thalès collaborated with the Télécom ParisTech laboratory to develop the technology that is today being used by Scalinx.

With this wealth of expertise, the company is now seeking to develop its business and acquire new customers. Its business model is based on a “design house” model, as Hussein Fakhoury explains: “The customers come to see us with detailed specifications or with a concept, and we produce a turnkey integrated circuit that matches the technical specifications we established together.” This is a concept the founder of Scalinx hopes to further capitalize on as he pursues his ambition of European conquest, an objective he plans to meet over the course of the next five years.

 

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


How the SEAS project is redefining the energy market

The current energy transition has brought with it new energy production and consumption modes. Coordinated by Engie, the European SEAS project aims to foster these changes to create a more responsible energy market. SEAS is seeking to invent the future of energy usage by facilitating the integration of new economic stakeholders to redistribute energy, as well as increasing the energy management options offered to individuals. These ideas have been made possible through the contributions of researchers from several IMT graduate schools (IMT Atlantique, Mines Saint-Étienne, Télécom ParisTech and Télécom SudParis). Among these contributions, two innovations are supported by IMT Atlantique and Mines Saint-Étienne.

 

“An increasing number of people are installing their own energy production tools, such as solar panels. This breaks with the traditional energy model of producer-distributor-consumer.” Redefining the stakeholders in the energy chain, as noted by Guillaume Habault, a computer science researcher at IMT Atlantique, is at the heart of the issue addressed by the Smart Energy Aware Systems (SEAS) project. The project was completed in December, after three years of research as part of the European ITEA program. It brought together 34 partners from 7 countries, including IMT in France. On 11 May, the SEAS project won the ITEA Award of Excellence in acknowledgement of the high quality of its results.

The project is especially promising because it does not only involve individuals wanting to produce their own energy using solar panels. New installations such as wind turbines provide new sources of energy on a local scale. However, this creates complications for stakeholders in the chain, such as network operators: the production of these sources is erratic, as it depends on the seasons and the weather. Yet it is important to be able to foresee energy production in the very short term in order to ensure that every consumer is supplied. Overestimating the production of a wind farm or a neighborhood equipped with solar panels means taking the risk of not having enough energy to cope with a shortfall in production, and ultimately causing power cuts for residents. “Conversely, underestimating production means having to store or dispatch the surplus energy elsewhere. Poor planning can create problems in the network, and even reduce the lifespan of some equipment,” the researcher warns.

 

An architecture for smart energy grid management

Among the outcomes of the SEAS project is a communication architecture capable of gathering, almost in real time, all the information from the different production and consumption modes in a local area. “The ideal goal is to be able to inform the network in 1-hour segments: with this length of time, we avoid collecting excessively precise information about user consumption, while still anticipating cases of over- or under-consumption,” explains Guillaume Habault, the creator of the architecture.

For individuals, SEAS may take the form of an electric device that can transmit information about their consumption and production to their electricity provider. “This type of data will allow people to optimize their power bills,” the researcher explains. “By having perfect knowledge of the local energy production and demand at a given moment, residents will be able to tell if they should store the energy they produce, or redistribute it on the network. With the help of the network, they may also decide what time would be the most economical to recharge their electric car, according to electricity prices, for instance.”
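As a rough sketch of what such device-side logic might look like (an illustration under assumed data formats, not the actual SEAS implementation), meter readings can be aggregated into 1-hour segments and a simple rule can decide whether surplus energy is stored or redistributed:

```python
# Illustrative sketch only: aggregate a household's meter readings into 1-hour
# segments before reporting them, then decide what to do with the hourly balance.
from collections import defaultdict

def hourly_report(readings):
    """readings: list of (timestamp_in_seconds, produced_kWh, consumed_kWh)."""
    buckets = defaultdict(lambda: [0.0, 0.0])
    for t, produced, consumed in readings:
        hour = t // 3600
        buckets[hour][0] += produced
        buckets[hour][1] += consumed
    return dict(buckets)

def decision(produced, consumed, local_demand_high):
    surplus = produced - consumed
    if surplus <= 0:
        return "draw from network"
    # If neighbors currently need energy, redistributing relieves the network.
    return "redistribute surplus" if local_demand_high else "store surplus"

report = hourly_report([(0, 0.2, 0.5), (1800, 0.4, 0.1), (3600, 0.6, 0.2)])
for hour, (produced, consumed) in sorted(report.items()):
    print(hour, decision(produced, consumed, local_demand_high=True))
```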

 

 

These data on the current state of a sub-network point to the emergence of new stakeholders, known as “flexibility operators”. First of all, because optimizing your consumption by adapting the way you use each appliance in the house requires specific equipment, and takes time. While it is easy to predict that energy will be more expensive at times of peak demand, such as in the evenings, it is more difficult to anticipate the price of electricity according to how strong the wind is blowing in a wind farm located several dozen kilometers away. It is safe to say that with suitable equipment, some individuals will be inclined to delegate their energy consumption optimization to third-party companies.

The perspectives of intelligent energy management offered by SEAS go beyond the individual household. If the inhabitants of a house are away on holiday, couldn’t the energy produced by their solar panels be used to supply the neighborhood, thus taking pressure off a power plant located a hundred kilometers away? Another example: refrigerators operate periodically; they don’t cool constantly, but at intervals. In a neighborhood or a city, it would therefore be possible to intelligently shift the startup times of a group of these appliances outside peak hours, so that an already heavily loaded network can be devoted to the heaters people switch on when they return home from work.

Companies are particularly keen on these types of services. Load management allows them to temporarily switch off machines that are not essential to their activity, in exchange for remuneration from those in charge of this load management. The SEAS architecture incorporates communication security in order to ensure trust between stakeholders. In particular, personal data are decentralized: each party owns their own data and can decide not only whether to allow a flexibility operator to access them, but also their granularity and level of use. “An individual will have no trouble accepting that their refrigerator cools at different times from usual, but not that their television gets cut off while they are watching it,” says Guillaume Habault. “And companies will want to have even more control over whether machines are switched off or on.”
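
One possible way to represent this per-appliance consent, purely as an illustration, is a small data structure recording what a flexibility operator may see and control; the field names below are assumptions made for the example, not part of the SEAS specification.

```python
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    """What a household agrees to share with a flexibility operator.

    granularity_minutes: coarseness of the consumption data handed over.
    controllable: appliances the operator may shift or switch off.
    protected: appliances that must never be interrupted.
    """
    granularity_minutes: int = 60
    controllable: set = field(default_factory=set)
    protected: set = field(default_factory=set)

    def may_control(self, appliance: str) -> bool:
        return appliance in self.controllable and appliance not in self.protected

policy = SharingPolicy(granularity_minutes=60,
                       controllable={"refrigerator", "water_heater"},
                       protected={"television"})
print(policy.may_control("refrigerator"))  # True
print(policy.may_control("television"))    # False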

 

Objects that speak the same language

In order to achieve such efficient management of electricity grids, the SEAS project turned to the semantic web expertise of Mines Saint-Étienne. “The semantic web is a set of principles and formalisms intended to allow machines to exchange knowledge on the web,” explains Maxime Lefrançois, the researcher in charge of developing the knowledge model for the SEAS project. This knowledge model is the pivotal language that allows objects to be interoperable in the context of energy network management.

“Up to now, each manufacturer had their own way of describing the world, and the machines made by each company evolved in their own separate worlds. With SEAS, we used the principles and formalisms of the semantic web to provide machines with a vocabulary allowing them to ‘talk energy’, to use open data that exists elsewhere on the web, or to use innovative optimization algorithms on the web,” says the researcher. In other words, SEAS proposes a common language enabling each entity to interpret a given message in the same way. Concretely, this involves giving each object a URL, which can be consulted in order to obtain information about it, in particular to find out what it can do and how to communicate with it. Maxime Lefrançois adds: “We also contributed to the principles and formalisms of the semantic web with a series of projects aimed at making them more accessible to companies and machine designers, so that they could adapt their existing machines and web services to the SEAS model at a lower cost.”
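
As a rough illustration of this approach, the snippet below uses the rdflib library to describe a device identified by a URL and annotated with energy-related properties. The vocabulary used here is a placeholder namespace, not the actual SEAS knowledge model, whose classes and properties differ.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Placeholder vocabulary: the real SEAS knowledge model defines its own
# classes and properties; the terms below only illustrate the principle.
ENERGY = Namespace("http://example.org/energy#")

g = Graph()
g.bind("energy", ENERGY)

# Each object is identified by a URL that can be consulted to find out
# what the object is, what it can do and how to talk to it.
fridge = URIRef("http://example.org/homes/42/fridge")
g.add((fridge, RDF.type, ENERGY.ElectricalDevice))
g.add((fridge, ENERGY.ratedPower, Literal(150, datatype=XSD.integer)))   # watts
g.add((fridge, ENERGY.isShiftable, Literal(True, datatype=XSD.boolean)))

print(g.serialize(format="turtle"))
```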

Returning to a previous example, this extension of the web makes it possible to adapt two refrigerators of different brands so that they can communicate, agree on how they operate, and avoid the consumption peak they would create by starting up at the same time. In terms of services, this allows flexibility operators to create solutions without being limited by the languages specific to each brand. As for manufacturers, it is an opportunity to offer household energy management solutions that go beyond simple appliances.

Thanks to the semantic web, communication between machines can be more easily automated, improving the energy management service offered to the customer. “All these projects point to a large-scale deployment,” says Maxime Lefrançois. Different levels of management can thus be envisioned: first, for households, coordinating appliances; next, for neighborhoods, redistributing the energy produced by each individual according to their neighbors’ needs; finally, on a regional or even national scale, coordinating load management for overall consumption, relieving networks during extreme cold spells, for example. The SEAS project could therefore change things on many levels, offering new, more responsible modes of energy consumption.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 


 


SEAS wins an “ITEA Award of Excellence for Innovation and Business impact”

Coordinated by Engie, with IMT as one of its main academic partners, SEAS won an award of excellence on May 11 at the Digital Innovation Forum 2017 in Amsterdam. This award recognizes the relevance of the innovation in terms of its impact on industry.



Smart cities: “It is only through multidisciplinary research that we can rise to these challenges”

The smart city is becoming an increasingly tangible reality for citizens in urban areas, with the efforts made to improve mobility and energy management being obvious examples. But are more efficient transport and optimized energy consumption enough to define a smart city? A member of the jury for the international Le Monde-Smart Cities prizes, to be awarded in Singapore on June 2, Francis Jutand, Deputy President of IMT, explained to us why smart cities must be considered in a general and systemic way.

 

Is it possible to reduce smart cities to cities with lower energy consumption?

Francis Jutand: Definitely not. The impact of digital technology on cities goes far beyond energy-saving issues, even if this is an important aspect of it. Of course, it allows smart technology to be used in energy monitoring for buildings and vehicles, but digital technology also plays an important role in managing mobility and interactions. For example, it eliminates the need for physical transport by allowing for telecommuting, coworking and exchanges of information in general. It could even allow for a more adaptive organization of mobility, although there is still a long way to go in this area.

 

What do you mean by more adaptive organization?

FJ: One of the problems affecting cities is congestion linked to peaks in traffic. Managing congestion is a tricky systemic challenge which has to combine a number of solutions, such as organization of work, staggered management of office opening hours, proaction and dynamic reaction. There is a whole organizational infrastructure to be established, to which digital technology can contribute.

 

Besides the digitization of services, will smart cities also be a source of apprehension for citizens?

FJ: Digital technology allows us to provide new functionalities. Everyone experiences digital technology and its services and perceives a certain number of obvious advantages. Digital technology also concerns future problems to be resolved. In the case of digital cities, one of the most interesting is anticipating their growing complexity. Infrastructures are being digitized and can be interfaced. At the same time, humans are benefiting from increased capacities for interaction, and autonomous entities are being developed, such as autonomous cars, which incorporate intelligent elements that also have a high capacity for interaction with infrastructures. There therefore needs to be efficient management of the exchanges between agents, humans and infrastructures.

 

Is digital technology the only field that must be addressed when considering the city of the future?

FJ: Smart and sustainable cities — I always add the word “sustainable”, because it is vital — must be considered from several perspectives. In terms of research, the subjects concerned are digital technology and big data, of course, but also supply chains, air quality, social and economic impacts, etc. It is only through multidisciplinary research that we can truly rise to these challenges. This is what we try to do at Institut Mines-Télécom, with schools that are very active in their areas and involved in local projects linked to smart cities. In addition to their strength in research, they are an important lever for innovation in designing products and services linked to smart and sustainable cities, particularly by fostering entrepreneurship among their students.

 

If digital technology is not the only subject of reflection for cities of the future, why does it seem to be an ever-present topic of discussion?

FJ: At the present time, the technologies that increase our capacities are digital technologies. They lead to the most innovation. They are used not only for automation, but also for developing interactions and providing algorithmic intelligence and autonomy in different products and services. Interaction implies connection. I would add that it is also necessary to secure transactions, both in terms of the reliability of operations and the prevention of malicious actions. Today, digital technology is a driving force as well as a guide, but the unique thing about it is that it comes in waves. It is therefore necessary to combine short- and long-term views of its impact and to work on creativity and innovation. This is why the openness and accessibility of data are important points.

 

Is a smart city necessarily one in which all data is open?

FJ: The debate on this matter is too often caricatured and reduced to the question of “should data be open or not?”. In reality, the debate plays out on a different level. Data is not static, and needs vary. There is a cost to supplying raw data. An extreme position in favor of complete openness would very quickly become financially untenable, and it would be difficult to produce the new data we need. Beyond this, there is the issue of data enrichment: we must be able to encourage common-good approaches, in which any citizen can work on the data, as well as commercial approaches for developing new services. The balance is hard to find, and will probably depend on the characteristics of each city.

 

You mentioned the cost of digital technology and development, and its energy impact. If local governments can’t bear the entire cost, how can we guarantee homogeneous development within a city or between cities?

FJ: First of all, it’s true that there are sometimes concerns about the idea that digital technology itself consumes a lot of energy. We must remember that, for the moment, the proportion of a city’s overall energy consumption accounted for by digital technology is very small compared with buildings and transport. Secondly, given that local governments cannot bear the full cost, it is not inconceivable that private-sector initiatives will generate differences within a city or between cities. It is extremely difficult to plan the homogenization of cities, and it is not even desirable, because they are living, and therefore evolving, entities.

The most likely outcome is that sustainable smart cities will develop district by district, with purely private offerings that will be naturally selective because they target markets able to pay, but which will also leave room for equally welcome civic initiatives. The whole process will be regulated by local government. But this is something we are used to: it is typically the case with fiber-optic broadband and its roll-out. In any case, it is essential to make public policies clear. If we don’t, people may react by adopting a defensive, precautionary position and refusing the development of smart cities. For now, this is not the case, and many cities, such as Lyon, Rennes, Bordeaux, Nice, Montpellier, Grenoble, Paris and Nantes, are tackling the problem with determination.

 

Could the rise of connected cities lead to the development of new networks between megacities?

FJ: Megacities are increasingly powerful economic entities all over the world. A general expansion of the economic power of cities is also taking place. There are the elements of an economic impetus that could lead to shared forms of pooling or innovation going much further than earlier twinning projects, or even to competition. It is therefore likely that economic competition between nations will shift toward competition between megacities and the areas that support them.

 


What is space telecommunication? A look at the ISS case

Laurent Franck is a space telecommunications researcher at IMT Atlantique. These communication systems are what enable us to exchange information with far-away objects (satellites, probes…). These systems also enable us to communicate with the International Space Station (ISS). This is a special and unusual case compared to the better-known example of satellite television. The researcher explains how these exchanges between Earth and outer space take place.

 

Since his departure for the ISS in November 2016, Thomas Pesquet has continued to delight the world with his photos of our Earth as seen from the sky. It’s a beautiful way to demystify life in space and make this profession—one that fascinates both young and old—more accessible. We were therefore able to see that the members of Expedition 51 aboard the ISS are far from lost in space. On the contrary, Thomas Pesquet was able to cheer on the France national rugby union team on a television screen and communicate live with children from different French schools (most recently on February 23, in the Gard department). And you too can follow this ISS adventure live whenever you want. But how is this possible? To shed some light on this issue, we met with Laurent Franck, a researcher in space telecommunications at IMT Atlantique.

 

What is the ISS and what is its purpose?

Laurent Franck: The ISS is a manned international space station. It accommodates international teams from the United States, Russia, Japan, Europe and Canada. It is a scientific base that enables scientific and technological experiments to be carried out in the space environment. The ISS is situated approximately 400 kilometers above the Earth’s surface. But it is not stationary in the sky, because at this altitude the laws of physics make an orbiting object travel around the Earth faster than the Earth itself rotates. It therefore follows a circular orbit around our planet at a speed of 28,000 kilometers per hour, completing one orbit of the Earth every 93 minutes.
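
These two figures are consistent with basic orbital mechanics: for a circular orbit, the speed and period follow directly from the altitude, as the short calculation below shows (standard values for the Earth’s radius and gravitational parameter are assumed).

```python
import math

MU_EARTH = 3.986e5   # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0     # mean Earth radius, km

def circular_orbit(altitude_km):
    """Speed (km/h) and period (minutes) of a circular orbit at a given altitude."""
    a = R_EARTH + altitude_km                     # orbital radius, km
    speed_km_s = math.sqrt(MU_EARTH / a)
    period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
    return speed_km_s * 3600, period_s / 60

speed_kmh, period_min = circular_orbit(400)
print(f"{speed_kmh:,.0f} km/h, {period_min:.0f} min")  # roughly 27,600 km/h and 92 min
```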

 

How can we communicate with the ISS?

LF: Not by wire, that’s for sure! We can communicate directly, meaning between a specific point on Earth and the space station. To do this, the station must be visible above us. We can get around this constraint by going through an intermediary: one or several satellites situated at a higher altitude can then be used as relays. The radio wave goes from the Earth to the relay satellite, and then to the space station, or vice versa. It is all quite an exercise in geometry. There are approximately ten American relay satellites in orbit, called TDRS (Tracking and Data Relay Satellite). Europe has a similar system called EDRS (European Data Relay System).

 

Why are these satellites located at a higher altitude than that of the space station?

LF: Let’s take a simple analogy. I take a flashlight and shine it on the ground: I can see a circle of light. If I raise the flashlight higher off the ground, this circle gets bigger. This spot of light represents the communication coverage between the ground and the object in the air. The ISS is close to the Earth’s surface, so it only covers a small part of the Earth, and this coverage is constantly moving. Conversely, if I take a geostationary satellite at an altitude of 36,000 kilometers, the coverage is much larger and corresponds to a fixed area on the Earth. Not only are few satellites required to cover the Earth’s surface, but the ISS can also communicate for long periods, via the geostationary satellite, with a ground station located within this area of coverage. Thanks to this system, only three or four ground stations are needed to communicate with the ISS at all times.
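
The flashlight analogy can be made quantitative: the fraction of the Earth’s surface geometrically visible from a satellite follows from simple spherical geometry, as in the sketch below (a spherical Earth of 6,371 km radius is assumed).

```python
R_EARTH = 6371.0  # mean Earth radius, km

def visible_fraction(altitude_km):
    """Fraction of the Earth's surface geometrically visible from a
    satellite at the given altitude (spherical-cap formula)."""
    cos_theta = R_EARTH / (R_EARTH + altitude_km)
    return (1 - cos_theta) / 2

print(f"ISS (400 km): {visible_fraction(400):.1%}")       # about 3% of the surface
print(f"GEO (36,000 km): {visible_fraction(36000):.1%}")   # about 42% of the surface
```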

 

Is live communication with the ISS truly live?

LF: There is a slight time lag, for two reasons. First, there is the time the signal takes to physically travel from point A to point B, which is set by the speed of light. It takes about 125 milliseconds to reach a geostationary satellite (television or relay satellites). We then have to add the distance between the satellite and the ISS. This results in an incompressible travel time, since it is physical, of a little over a quarter of a second one way, or half a second for the round trip. This first time lag is easily observable when we watch the news on television: the studio asks a question and the reporter in the field seems to wait before answering, due to the time needed to receive the question via satellite and send the reply!
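
The orders of magnitude quoted here can be checked with a quick calculation; the distances below are rough nadir values, so the exact figures depend on the actual slant geometry between the ground station, the relay satellite and the ISS.

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(distance_km):
    """Propagation delay over a given distance, in milliseconds."""
    return distance_km / C_KM_S * 1000

geo = one_way_delay_ms(36_000)               # ground station up to a geostationary relay
geo_to_iss = one_way_delay_ms(36_000 - 400)  # relay back down to the ISS (rough value)
print(f"to GEO: {geo:.0f} ms, GEO to ISS: {geo_to_iss:.0f} ms")
print(f"one way: ~{geo + geo_to_iss:.0f} ms, round trip: ~{2 * (geo + geo_to_iss):.0f} ms")
```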

Secondly, there is a processing time, since the information travels through telecommunications equipment. This equipment cannot process the information at the speed of light. Sometimes the information is stored temporarily to accommodate the processor speed. It’s like when I have to wait in line at a counter. There’s the time the employee at the counter takes to do their job, plus the wait time due to all the people in line in front of me. This time can quickly add up.

We can exchange any kind of information with the ISS: voice and images, of course, as well as telemetry data, which is the information a spacecraft sends back to Earth to communicate its state of health. This includes the station’s position, the data from the experiments carried out on board, etc.

 

What are the main difficulties that space telecommunications systems face?

LF: The major difficulty is linked to the fact that we must communicate with objects that are very far away and have limited electrical transmission power. We capture these constraints in a link budget. Several phenomena come into play. The first is that the farther away we communicate, the more energy is lost: with distance, the energy is dispersed like a spray. The second phenomenon in this budget is that the quality of communication depends on the amount of energy received at the destination. We ask: out of one million bits transmitted, how many arrive in error at the destination? Finally, the last point is the data rate that is possible for the communication. This also depends on the amount of energy invested in the communication. We often adjust the data rate to obtain a certain level of quality. It all depends on the amount of energy available for transmission, which is limited aboard the ISS, since it is powered by solar panels and sometimes travels through the Earth’s shadow. The relay satellites face the same constraints.
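
The “dispersed like a spray” effect corresponds to free-space path loss, which grows with the square of the distance. The short sketch below compares a direct ISS link with a geostationary relay link; the 26 GHz frequency is chosen purely for illustration and is not tied to any particular system mentioned in the interview.

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (standard formula with d in km and f in GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Spreading loss grows with distance: a 400 km link loses far less energy
# than a 36,000 km link to a geostationary relay, at the same frequency.
print(f"ISS at 400 km:    {fspl_db(400, 26):.0f} dB")
print(f"GEO at 36,000 km: {fspl_db(36_000, 26):.0f} dB")
```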

 

Is there a risk of interference when the information is travelling through space?

LF: Yes and no, because radio-frequency telecommunications are highly regulated. The right to transmit is tied to a given frequency and a maximum power. It is also regulated in space: we cannot “spill over” onto another nearby antenna. For space communications, there are tables defining the maximum amount of energy that can be sent outside the main direction of communication. Below this maximum level, the energy that reaches a nearby antenna is of course interference, but it will not prevent that antenna from functioning properly.

 

What are the other applications of communications satellites?

LF: They are used for Internet access, telephony, video telephony, the Internet of Things… But what is interesting is what they are not used for: GPS navigation and weather observation, for example. In fact, space missions are traditionally divided into four components: the telecommunications we are discussing here, navigation and positioning, observation, and deep-space exploration like the Voyager probes. Finally, what is fascinating is that in a field as specialized as space, there is an almost infinite number of even more specialized branches.