ERC Grants, Francesco Andriulli, Yanlei Diao, Petros Elia, Roisin Owens

4 ERC Consolidator Grants for IMT

The European Research Council has announced the results of its 2016 Call for Consolidator Grants. Out of the 314 researchers to receive grants throughout Europe (across all disciplines), four come from IMT schools.

 

10% of French grants

These four grants represent nearly 10% of all those obtained in France, where 43 project leaders at French institutions were awarded grants (placing France in 3rd position, behind the United Kingdom with 58 projects and Germany with 48).

For Christian Roux, the Executive Vice President for Research and Innovation at IMT, “this is real recognition of the academic excellence of our researchers at the European level. Our targeted research model, which performs well in the joint research carried out within our two very active Carnot institutes, will also be enriched by the more fundamental work supported by the ERC to encourage major scientific breakthroughs.”

Consolidator Grants reward experienced researchers with €2 million to fund a project over five years, thereby providing them with substantial support.

 

[one_half][box]After Claude Berrou in 2012, Francesco Andriulli is the second IMT Atlantique researcher to be honored by Europe as part of the ERC program. He will receive a grant of €2 million over five years, enabling him to develop his work in the field of computational electromagnetism. Find out more +
[/box][/one_half]

[one_half_last][box]Yanlei Diao, a world-class scientist recruited jointly by École Polytechnique, the Inria Saclay – Île-de-France Centre and Télécom ParisTech, has been honored for the scientific excellence of her project and her innovative vision of “acceleration and optimization of analytical computing for big data”. [/box][/one_half_last]

[one_half][box]

Petros Elia is a professor of Telecommunications at Eurecom and has been awarded this ERC Consolidator Grant for his DUALITY project (Theoretical Foundations of Memory Micro-Insertions in Wireless Communications).
Find out more +

[/box][/one_half]

[one_half_last][box]

This marks the third time that Roisin Owens, a Mines Saint-Étienne researcher specialized in bioelectronics, has been rewarded by the ERC for the quality of her projects. She received a Starting Grant in 2011 followed by a Proof of Concept Grant in 2014.
Find out more +
[/box][/one_half_last]

Noël Crespi, Networks and new services, Télécom SudParis, Internet of Things, IoT

Networks and New Services: A Complete Story

This book shines a spotlight on software-centric networks and their emerging service environments. The authors examine the road ahead for connectivity, for both humans and ‘things’, considering the rapid changes that have shaken the industry.

The book analyses major catalytic shifts that are shaping the communications world: softwarization, virtualization, and cloud computing. It guides the reader through a maze of possible architectural choices, driven by discriminating and sometimes conflicting business considerations. The new ICT capabilities lead us toward smarter environments and to an ecosystem where applications are backed up by shared networking facilities, instead of networks that support their own applications. Growing user awareness is a key driver towards the softwarization process.

Softwarization disrupts the current status quo for equipment, development, networks, operations and business. It radically changes the value chain and the stakeholders involved. The dilemma is between a ‘slow burn’ traditional step-by-step approach and a bold transformation of the whole infrastructure and business models. This book is essential reading for those seeking to build user-centric communication networks that support independent, agile and smart applications. See more

 

About the authors

 

Roberto Minerva holds a master’s degree in Computer Science from Bari University, Italy, and a PhD in Computer Science and Telecommunications from Télécom SudParis, France. Roberto is the head of the Innovative Architectures group within the Future Centre in the Strategy Department of Telecom Italia. His job is to create advanced scenarios derived from the application of emerging ICT technologies combined with innovative business models, especially in the areas of IoT, distributed computing, programmable networks and personal data. He is currently involved in Telecom Italia activities related to Big Data, architecture for IoT, and ICT technologies for leveraging Cultural Heritage.

Noël Crespi holds master’s degrees from the Universities of Orsay and Canterbury, a Diplôme d’ingénieur from Télécom ParisTech and a PhD and Habilitation from Paris VI University. He joined Institut Mines-Télécom in 2002 and is currently Professor and MSc Programme Director, leading the Service Architecture Laboratory. He coordinates the standardisation activities for Institut Telecom at ETSI, 3GPP and ITU-T. He is also an adjunct professor at KAIST (Korea), and is on the 4-person Scientific Advisory Board of FTW (Austria). His current research interests are in Service Architectures, Communication Services, Social Networks, and the Internet of Things/Services. He is the author or co-author of 250 articles and standardisation contributions. See more

 

Networks and New Services: A Complete Story
Roberto Minerva, Noël Crespi
Springer, 2017
Series “Internet of Things”
186 pages
100,21 € (hardcover) – 79,72 € (eBook)

Buy this book

chaire AXA, Maurizio Filippone, Eurecom

Accurate quantification of uncertainty: an AXA Chair at Eurecom

AXA Chairs reward only a few scientists every year. With his chair on New Computational Approaches to Risk Modeling, Maurizio Filippone, a researcher at Eurecom, joins a community of prestigious researchers such as Jean Tirole, the French professor who won the Nobel Prize in economics.

 

Maurizio, you’ve just been awarded an AXA chair. Could you explain what it is about and why your project was selected?

AXA chairs are funded by the AXA Research Fund, which supports fundamental research to advance our understanding of risk. Started in 2008, the fund backs about 50 new projects annually, of which four to eight are chairs. They are individual fellowships, and the one I received is going to support my research activities for the next seven years. My project is entitled “New Computational Approaches to Risk Modeling”. The AXA Chair selection process is not based on the project only. For this type of grant, several criteria are important: timeliness, vision, credibility of both the proposal and the candidate (track record, collaborations, etc.), and the host institution and the fit with its strategy. For example, the fact that the research area of this topic is in line with Eurecom’s long-term strategy in data science played a major role in the selection of my project. This grant definitely represents a major achievement in my career.

 

What is your project about exactly?

My project deals with one simple question: how do you go from data to decisions? Today, we can access so much data generated by so many sensors, but we are facing difficulties in using these data in a sensible way. Machine learning is the main technique that helps make sense of data, and I will use and develop novel techniques in this domain throughout this project. Quantification of risk and decision-making require accurate quantification of uncertainty, which is a major challenge in many areas of science involving complex phenomena, such as finance and the environmental and medical sciences. In order to accurately quantify the level of uncertainty, we employ the flexible and accurate tools offered by probabilistic nonparametric statistical models. But today’s diversity and abundance of data make it difficult to use these models. The goal of my project is to propose new ways to better manage the interface between computational and statistical models – which in turn will help obtain accurate confidence in predictions based on observed data.
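To make the idea concrete, here is a minimal sketch of uncertainty quantification with a Gaussian process, a typical probabilistic nonparametric model of the kind mentioned above. The interview does not name a specific model or library, so the RBF kernel and the toy data below are illustrative assumptions. Exact inference costs O(n³) in the number of observations (the Cholesky step), which is precisely the computational bottleneck the project aims to sidestep with cheaper approximations.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression: predictive mean and standard deviation at x_test."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)                      # O(n^3): the expensive step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.clip(var, 0.0, None))

# Toy data: noisy observations of a hidden signal
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, 20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)
x_test = np.linspace(-4, 4, 100)
mean, std = gp_posterior(x_train, y_train, x_test)
# `std` is the pointwise predictive uncertainty: it grows away from the data,
# which is what "accurate confidence in predictions" refers to.
```

Replacing the exact linear algebra with cheaper approximate or distributed computations, while keeping the uncertainty estimates trustworthy, is the kind of trade-off the chair sets out to study.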

 

How will you be able to do that? With what kind of advanced computing techniques?

The idea behind the project is that it is possible to carry out exact quantification of uncertainty relying exclusively on approximate, and therefore cheaper, computations. Using nonparametric models is difficult and generally computationally intractable due to the complexity of the systems and the amount of data. Although computers are more and more powerful, exact computations remain serial, too long, too expensive and sometimes almost impossible to carry out. The way approximate computations will be designed in this project will reduce computing time by orders of magnitude! The exploitation of parallel and distributed computing on large-scale computing facilities – an area in which Eurecom has strong expertise – will be key to achieving this. We will thus be able to develop new computer models that make accurate quantification of uncertainty possible.

 

What are the practical applications?

Part of the focus of the project will be on life and environmental applications that require quantification of risk. We will mostly use life sciences data (e.g., neuroimaging and genomics) and environmental data for our models. I am confident that this project will help tackle the explosion of large-scale and diverse data in life and environmental sciences. This is already a huge challenge today, and it will be even more difficult to deal with in the future. In the mid-term, we will develop practical and scalable algorithms that learn from data and accurately quantify the uncertainty of their predictions. In the long term, we will be able to improve on current approaches to risk estimation: they will be timelier and more accurate. These approaches can have major implications for the development of medical treatment strategies or environmental policies, for example. Is some seismic activity going to trigger a tsunami for which it is worth warning the population or not? Is a person showing signs of a systemic disease, like Parkinson’s, actually going to develop the disease or not? I hope the results of our project will make it easier to answer these questions.

 

Do you have any partnerships in this project?

Of course! I will initiate some new collaborations and continue collaborating with several prestigious institutions worldwide to make this project a success: Columbia University in NYC; Oxford, Cambridge, UCL and Glasgow in the UK; the Donders Institute of Neuroscience in the Netherlands; the University of New South Wales in Australia; as well as Inria in France. The funding from the AXA Research Fund will help create a research team at Eurecom comprising myself, two PhD students and a postdoc. I would like the team to bring together a blend of expertise, since novelty requires an interdisciplinary approach: computing, statistics, mathematics and physics, plus some expertise in life and environmental sciences.

 

What are the main challenges you will be facing in this project?

Attracting talent is one of the main challenges! I’ve been lucky so far, but it is generally difficult. This project is extremely ambitious; it is a high-risk, high-gain project, so there are some difficult technical challenges to face – all of them related to the cutting-edge tools, techniques and strategies we will be using and developing. We will find ourselves in the usual situation when working on something new and visionary – namely, being stuck in blind alleys or having to abandon promising ideas that do not pan out, to give a few examples. But that is why it has been funded for seven years! Despite these difficulties, I am confident this project will be a success and that we will make a huge impact.

 

French National Library

The French National Library is combining sociology and big data to learn about its Gallica users

As a repository of French culture, the Bibliothèque Nationale de France (BnF, the French National Library) has always sought to know and understand its users. This is no easy task, especially when it comes to studying the individuals who use Gallica, its digital library. To learn more about them, without limiting itself to interviewing sample individuals, the BnF has joined forces with Télécom ParisTech, taking advantage of its multidisciplinary expertise. To meet this challenge, the scientists are working with IMT’s TeraLab platform to collect and process big data.

[divider style=”normal” top=”20″ bottom=”20″]

 

[dropcap]O[/dropcap]ften seen as a driving force for technological innovation, could big data also represent an epistemological revolution? The use of big data in experimental sciences is nothing new; it has already proven its worth. But the humanities have not been left behind. In April 2016, the Bibliothèque Nationale de France (BnF) leveraged its longstanding partnership with Télécom ParisTech (see box below) to carry out research on the users of Gallica — its free, online library of digital documents. The methodology used is based in part on the analysis of large quantities of data collected when users visit the website.

Every time a user visits the website, the BnF server records a log of all actions carried out by the individual on Gallica. This information includes the pages opened on the website, time spent on the site, links clicked on each page, documents downloaded, etc. These logs, which are anonymized in compliance with the regulations established by the CNIL (French Data Protection Authority), therefore provide a complete map of the user’s journey, from arriving at Gallica to leaving the website.

With 14 million visits per year, this information represents a large volume of data to process, especially since it must be correlated with the records of the 4 million documents available for consultation on the site — document type, creation date, author, etc. — records that also provide valuable information for understanding users and their interest in documents. Carrying out sociological fieldwork alone, by interviewing larger or smaller samples of users, is not enough to capture the great diversity and complexity of today’s online user journeys.

Researchers at Télécom ParisTech therefore took a multidisciplinary approach. Sociologist Valérie Beaudouin teamed up with François Roueff to establish a dialogue between the sociological analysis of uses through field research, on the one hand, and data mining and modeling on the other. “Adding this big data component allows us to use the information contained in the logs and records to determine the typical behavioral profiles of Gallica users,” explains Valérie Beaudouin. The data is collected and processed on IMT’s TeraLab platform. The platform provides researchers with a turnkey working environment that can be tailored to their needs and offers more advanced features than commercially available data processing tools.

Also read on I’MTech: TeraLab and La Poste have teamed up to fight package fraud

What are the different profiles of Gallica users?

François Roueff and his team were tasked with using the information available to develop unsupervised learning algorithms in order to identify categories of behavior within the large volume of data. After six months of work, the first results appeared. The initial finding was that only 10 to 15% of Gallica users’ browsing activity involves consulting several digital documents. The remaining 85 to 90% are occasional visits, made to consult a specific document.

“We observed some very interesting things about the 10 to 15% of Gallica users involved,” says François Roueff. “If we analyze the Gallica sessions in terms of the variety of types of documents consulted (monographs, press, photographs etc.), eight out of ten categories only use a single type,” he says. This reflects a tropism on the part of users toward a certain form of media. When it comes to consulting documents, there is generally little variation in the ways in which Gallica users obtain information. Some search for information about a given topic solely by consulting photographs, while others consult only press articles.
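As an illustration of what such unsupervised categorization of sessions might look like, here is a minimal sketch that clusters sessions by the mix of document types they consult. The feature construction and the choice of k-means with ten clusters are assumptions made for the example; they are not the team’s actual pipeline, which runs on the TeraLab platform with the real anonymized logs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-session features: one row per session, columns giving the
# share of consultations by document type (as derived from anonymized logs).
doc_types = ["monograph", "press", "photograph", "map", "manuscript"]
rng = np.random.default_rng(0)
sessions = rng.dirichlet(alpha=np.full(len(doc_types), 0.3), size=1000)

# Unsupervised grouping of sessions into behavioral categories.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(sessions)

# Inspect each category's dominant document type (the "tropism" noted above).
for k in range(10):
    profile = sessions[labels == k].mean(axis=0)
    print(k, doc_types[int(profile.argmax())], profile.round(2))
```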

According to Valérie Beaudouin, the central focus of this research lies in understanding such behavior. “Using these results, we develop hypotheses, which must then be confirmed by comparing them with other survey methodologies,” she explains. Data analysis is therefore supplemented by an online questionnaire to be filled out by Gallica users, field surveys among users, and even by equipping certain users with video cameras to monitor their activity in front of their screens.


Photo from a poster for the Bibliothèque Nationale de France (BnF), October 2016. For the institution, making culture available to the public is a crucial mission, and that means digital resources must be made available in a way that reflects users’ needs.

 

“Field studies have allowed us to understand, for example, that certain groups of Gallica users prefer downloading documents so they can read them offline, while others would rather consult them online to benefit from the high-quality zoom feature,” she says. The Télécom ParisTech team also noticed that in order to find a document on the digital library website, some users preferred to use Google and include the word “Gallica” in their search, instead of using the website’s internal search engine.

Confirming the hypotheses also means working closely with teams at BnF, who provide knowledge about the institution and the technical tools available to users.  Philippe Chevallier, project manager for the Strategy and Research delegation of the cultural institution, attests to the value of dialogue with the researchers: “Through our discussions with Valérie Beaudouin, we learned how to take advantage of the information collected by community managers about individuals who are active on social media, as well as user feedback received by email.”

Analyzing user communities: a crucial challenge for institutions

The project has provided BnF with insight into how existing resources can be used to analyze users. This is another source of satisfaction for Philippe Chevallier, who is committed to the success of the project. “This project is the proof that knowledge about user communities can be a research challenge,” he says with excitement. “It’s too important an issue for an institution like ours, so we need to dedicate time to studying it and leverage real scientific expertise,” he adds.

And when it comes to Gallica, the mission is even more crucial. It is impossible to see Gallica users, whereas the predominant profile of users of BnF’s physical locations can be observed. “A wide range of tools are now available for companies and institutions to easily collect information about online uses or opinions: e-reputation tools, web analytics tools etc. Some of these tools are useful, but they offer limited possibilities for controlling their methods and, consequently, their results. Our responsibility is to provide the library with meaningful, valuable information about its users and to do so, we need to collaborate with the research community,” says Philippe Chevallier.

In order to obtain the precise information it is seeking, the project will continue until 2017. The findings will offer insights into how the cultural institution can improve its services. “We have a public service mission to make knowledge available to as many people as possible,” says Philippe Chevallier. In light of the researchers’ observations, the key question that will arise is how to optimize Gallica. Who should take priority? The minority of users who spend the most time on the website, or the overwhelming majority of users who only use it sporadically? Users from the academic community — researchers, professors, students — or the “general public”?

The BnF will have to take a stance on these questions. In the meantime, the multidisciplinary team at Télécom ParisTech will continue its work to describe Gallica users. In particular, it will seek to fine-tune the categorization of sessions by enhancing them with a semantic analysis of the records of the 4 million digital documents. This will make it possible to determine, within the large volume of data collected, which topics the sessions are related to. The task poses modeling problems which require particular attention, since the content of the records is intrinsically inhomogeneous: it varies greatly depending on the type of document and digitization conditions.

 

[divider style=”normal” top=”20″ bottom=”20″]

Online users: a focus for the BnF for 15 years

The first study carried out by the BnF to describe its online user community dates back to 2002, five years after the launch of its digital library, in the form of a research project that already combined approaches (online questionnaires, log analysis etc.). In the years that followed, digital users became an increasingly important focus for the institution. In 2011, a survey of 3,800 Gallica users was carried out by a consulting firm. Realizing that studying users would require more in-depth research, the BnF turned to Télécom ParisTech in 2013 with the objective of assessing the different possible approaches for a sociological analysis of digital uses. At the same time, the BnF launched its first big data study, measuring Gallica’s position on the French internet for World War I research. In 2016, the sociology of online uses and the big data experiments were brought together in the current project to understand the uses and users of Gallica.

[divider style=”normal” top=”20″ bottom=”20″]

 

Eurecom, HIGHTS, Autonomous car, H2020

The autonomous car: safety hinging on a 25cm margin

Does an autonomous or semi-autonomous car really know where it is located on a map? How accurately can it position itself on the road? For the scientists who are part of the European H2020 “HIGHTS” project, intelligent transportation systems must know their position down to one quarter of a meter. Jérôme Härri, a researcher in communication systems at Eurecom — a partner school for this project — explains how the current positioning technology must be readjusted to achieve this level of precision. He also explains why this involves a different approach than the one used by manufacturers such as Tesla or Google.

 

You are seeking solutions for tracking vehicles’ location within a margin of 25 centimeters. Why this margin?

Jérôme Härri: It is the car’s average margin for drifting to the right or left without leaving its traffic lane. This distance is found both in the scientific literature and in requests from industrial partners seeking to develop intelligent transportation. You could say it’s the value at which driving autonomously becomes possible while ensuring the required safety for vehicles and individuals: greater precision is even better; less precision, and things get complicated.

 

Are we currently far from this spatial resolution? With what level of precision do the GPS units in most of our vehicles locate us on the road?

JH: A basic GPS can locate us with an accuracy of 2 to 10 meters, and the new Galileo system promises an accuracy of 4 meters. But this is only possible when there is sufficient access to satellites, in an open or rural area. In an urban context, tall buildings make satellites less accessible and reaching an accuracy of under 5 meters is rare. The margin of error is then reduced by projection, so that the user only rarely notices such a large positioning error. But this does not work for an autonomous car. Improvements to GPS do exist, such as differential GPS, which can position us with an accuracy of one meter, or even less. Real-time kinematic (RTK) technology, used for cartography in the mountains, is even more precise. Yet it is expensive, and it too has its limits in the city. RTK is attracting growing interest in the context of digital cities, but we have not yet reached that point.

 

And yet Google and Tesla are already building their autonomous or semi-autonomous cars. How are these cars being positioned?

JH: The current autonomous cars use a positioning system on maps that is very precise, down to the traffic lane, which combines GPS and 4G. However, this system is slow. It is therefore used for navigation, so that the car knows what it must do to reach its destination, but not for detecting danger. For this aspect, the cars use radar, lidars — in other words, lasers — or cameras. But this system has its limits: the sensors can only see around 50 meters away, while on the highway cars travel at a speed of 30, even 40 meters per second. This gives the autonomous car one second to stop, slow down, or adapt in the event of a problem… which is not enough. And the system is not infallible. For example, the Tesla accident that occurred last May was caused by the camera responsible for detecting danger confusing the light color of a truck with that of the sky.

 

What approaches are you taking in the HIGHTS project for improving the geolocation and reliability?

JH: We want to know within a 25-centimeter margin where a vehicle is located on the road, not just in relation to another car. In order to do this, we use cooperation between vehicles to triangulate and reduce the effect of a weak GPS signal. We consider that every vehicle nearby can be an anchor for the triangulation. For example, an autonomous car can have a weak GPS signal, but have three surrounding cars with a better signal. We can improve the car’s absolute positioning by triangulating its position in relation to three nearby vehicles. In order to do this, we need communication technologies for exchanging GPS positions — Bluetooth, ZigBee, Wi-Fi, etc. — and technology such as cameras and radar in order to improve the positioning in relation to surrounding vehicles.
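The cooperative idea can be sketched as a toy least-squares position fix: an ego vehicle with a poor GPS fix combines the broadcast positions of three neighbors with measured ranges to them. The 2-D geometry, noise level and range values below are illustrative assumptions, not the HIGHTS algorithms themselves.

```python
import numpy as np

def cooperative_fix(anchors, ranges):
    """
    Least-squares position from >= 3 anchor vehicles with known positions
    and measured distances, linearized by subtracting the first equation.
    anchors: (n, 2) anchor coordinates in metres; ranges: (n,) distances.
    """
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Three neighbouring vehicles with good GPS fixes (local coordinates, metres)
anchors = np.array([[0.0, 0.0], [30.0, 5.0], [12.0, 40.0]])
true_pos = np.array([15.0, 18.0])
# Ranges estimated from radar/camera/V2X exchanges, with ~10 cm noise
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.1 * rng.normal(size=3)

print(cooperative_fix(anchors, ranges))   # close to the true (15, 18)
```

In practice the relative ranges would come from radar, cameras or V2X signal measurements, and the resulting fix would be fused with the ego vehicle’s own (noisy) GPS estimate.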

 

And what if the car is isolated, without any other cars nearby?

JH: In the event that there are not enough cars nearby, we also pursue an implicit approach. Using roadside sensors at strategic locations, it is possible to precisely locate the car on the map. For example, if I know the distance between my vehicle and a billboard or traffic light, and the angles between these locations and the road, I can combine this with the GPS position of the billboard and traffic light, which don’t move, making them very strong positioning anchors. We therefore combine the relative approach with the absolute position of the objects on the road. Yet this situation does not occur very frequently. In most cases, what enables us to improve the accuracy is the cooperation with other vehicles.

 

So, does the HIGHTS project emphasize the combination of different existing technologies rather than seeking to find new ones?

JH: Yes, with the aim of validating their effectiveness. Yet at the same time we are working on developing LTE telecommunication networks for the transmission of information from vehicle to vehicle — which we refer to as LTE-V2X. In so doing, we are seeking to increase the reliability of the communications. Wi-Fi is not necessarily the most robust form of technology. On a computer, when the Wi-Fi isn’t working, we can still watch a movie. But for cars, the alternative V2X technology ensures communications if the Wi-Fi connection fails, whether by accident or due to a cyber-attack. Furthermore, these networks make it possible to use pedestrians’ smartphones to help avoid collisions. With the LTE networks, HIGHTS is testing the reliability of the device-to-device LTE approach for inter-vehicle communication. Our work is situated upstream of standardization. The experience gained in this project enables us to work beyond the current standards and to develop them along with organizations such as ETSI-3GPP, ETSI-ITS and IETF.

 

Does your cooperative approach stand a chance of succeeding against the individualistic approach used by Tesla and Google, who seek to remain sovereign regarding their vehicles and solutions? 

JH: The two approaches are not incompatible. It’s a cultural issue. Americans (Google, Tesla) think “autonomous car” in the strictest sense, without any outside help. Europeans, on the other hand, think “autonomous car” in the broader sense, without the driver’s assistance, and are therefore more likely to use a cooperative approach in order to reduce costs and improve the interoperability of future autonomous cars. We have been working on the collaborative aspect for several years now, which has included research on integrating cars into the internet of things, carried out with the CEA and BMW — both partners of the HIGHTS project. We therefore have some very practical and promising lines of research on our side. And the U.S. Department of Transportation has issued a directive requiring vehicles to have a cooperative unit beginning in 2019. Google and Tesla can continue to ignore this technology, but since these cooperative units will be present in vehicles and freely available to them, there is a good chance they will use them.

 

[box type=”shadow” align=”” class=”” width=””]

HIGHTS: moving towards a demonstration platform

Launched in 2015, the 3-year HIGHTS project answers the call made by the H2020 research program on the theme of smart, green, and integrated transportation. It brings together 14 academic and industrial partners[1] from five different countries, and includes companies that work closely with major automakers like BMW. Its final objective is to establish a demonstration platform for vehicle positioning solutions, from the physical infrastructure to the software.

[1] Germany: Jacobs University Bremen, Deutsche Zentrum für Luft- und Raumfahrt (DLR), Robert Bosch, Zigpos, Objective Software, Ibeo Automotive Systems, Innotec21.
France: Eurecom, CEA, BeSpoon.
Sweden: Chalmers University of Technology.
Luxembourg: FBConsulting.
The Netherlands: PSConsultancy, TASS International Mobility Center.

[/box]

Claudine Guerrier, Security, Privacy

Security and Privacy in the Digital Era

“The state, which must eradicate all feelings of insecurity, even potential ones, has been caught in a spiral of exception, suspicion and oppression that may lead to a complete disappearance of liberties.”
—Mireille Delmas-Marty, Libertés et sûreté dans un monde dangereux, 2010

This book examines the security/freedom duo in space and time with regard to electronic communications and the technologies used in social control. It follows a diachronic path from the relative balance between philosophy and human rights so dear to Western civilization at the end of the 20th century, to the current situation, where freedom appears to be giving way to security to such an extent that some scholars now wonder whether privacy should be redefined in this era. The roles played by the actors involved (Western states, digital firms, human rights organizations, etc.) have had a marked impact on the fields of law and political science.

 

Author Information

Claudine Guerrier is Professor of Law at the Institut Mines-Télécom and the Télécom École de Management in Paris, France. Her research focuses on the tense relationship between technology, security and privacy.

 

Security and Privacy in the Digital Era
Claudine Guerrier
Wiley-ISTE, 2016
284 pages
108,70 € (hardcover) – 97,99 € (E-book)

Read an excerpt and order online

 

Ocean Remote sensing, data, IMT Atlantique

Ocean remote sensing: solving the puzzle of missing data

The satellite measurements taken every day depend heavily on atmospheric conditions, the main cause of missing data. In a scientific publication, Ronan Fablet, a researcher at Télécom Bretagne, proposes a new method for reconstructing sea surface temperature fields from incomplete observations. The reconstructed data provides fine-scale maps with a consistent level of detail, which are essential for understanding many physical and biological phenomena.

 

What do a fish’s migration through the ocean, a cyclone, and the Gulf Stream have in common? They can all be studied using satellite observations. This is a theme Ronan Fablet knows well. As a researcher at Télécom Bretagne, he is particularly interested in processing satellite data to characterize ocean dynamics. This field covers several themes, including the reconstruction of incomplete observations. Missing data impairs satellite observations and limits the representation of the ocean, its activity and its interactions — essential ingredients in areas ranging from marine biology to the ocean-atmosphere exchanges that directly influence the climate. In an article published in June 2016 in IEEE J-STARS[1], Ronan Fablet proposed a new statistical interpolation approach to compensate for the lack of observations. Let’s take a closer look at the data assimilation challenges in oceanography.

 

Temperature, salinity…: the oceans’ critical parameters

In oceanography, the geophysical fields of interest are the fundamental parameters: sea surface temperature (SST), salinity (the quantity of salt dissolved in the water), water color, which provides information on primary production (chlorophyll concentrations), and altimetry (ocean surface topography).

Ronan Fablet’s article focuses on the SST for several reasons. First of all, the SST is the parameter that is measured the most in oceanography. It benefits from high-resolution measurements: a relatively short distance of about one kilometer separates two observed points, unlike salinity measurements, which are much coarser (around 100 km between two measurement points). Surface temperature is also an input parameter that is often used to design numerical models for studying ocean-atmosphere interactions, as many heat transfers take place between the two. One obvious example is cyclones, which are fed by pumping heat from the oceans’ warmer regions. Furthermore, the temperature is also essential in determining the major ocean structures: it allows surface currents to be mapped at a fine scale.

But how can a satellite measure the sea surface temperature? Like any material, the ocean responds in a characteristic way to a given wavelength. “To study the SST, we can, for example, use an infrared sensor that first measures the energy. A law can then be used to convert this into a temperature,” explains Ronan Fablet.

 

Overcoming the problem of missing data in remote sensing

Unlike geostationary satellites, which orbit at the same speed as the Earth’s rotation, non-geostationary satellites generally complete one orbit in a little over an hour and a half. This enables them to fly over several terrestrial points in one day, so they build images by accumulating data. Yet some points in the ocean cannot be seen. The main cause of missing data is the sensitivity of satellite sensors to atmospheric conditions: in the case of infrared measurements, clouds block the observations. “In a predefined area, it is sometimes necessary to accumulate two weeks’ worth of observations in order to have enough information to begin reconstructing the given field,” explains Ronan Fablet. In addition, the heterogeneous nature of the cloud cover must be taken into account. “The rate of missing data in certain areas can be as high as 90%,” he explains.

The lack of data is a true challenge. The modelers must find a compromise between the generic nature of the interpolation model and the complexity of its calculations. The problem is that the equations that characterize the movement of fluids, such as water, are not easy to process. This is why these models are often simplified.

 

A new interpolation approach

According to Ronan Fablet, the techniques currently in use do not take full advantage of the available information. The approach he proposes reaches beyond these limits: “we currently have access to 20 to 30 years of SST data. The idea is that among these samples we can find an implicit representation of ocean variability on which to base an interpolation. Drawing on this knowledge, we should be able to reconstruct the incomplete observations that exist today.”

The general idea of Ronan Fablet’s method is based on the principle of learning: if a situation observed today corresponds to a previous situation, it is then possible to use the past observations to reconstruct the current data. It is an approach based on analogy.

 

Implementing the model

In his article, Ronan Fablet therefore used an analog-based model. He characterized the SST using the statistical law that best represents its spatial variations, the one that most closely reflects reality.

In his study, Ronan Fablet used low-resolution SST observations (100 km between two observations). With low-resolution data, optimal interpolation is usually favored: the goal is to reduce the reconstruction error (the difference between the simulated field and the observed field) at the expense of small-scale details, so the image obtained has a smooth appearance. For the interpolation, however, the researcher chose to maintain a high level of detail. The only uncertainty that remains is where a given detail is located on the map. This is why he opted for a stochastic interpolation, which can be used to simulate several realizations that place the detail in different locations. Ultimately, this approach enabled him to create SST fields with the same level of detail throughout, with the trade-off that, locally, the reconstruction error does not improve on that of the optimal method.
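The following sketch illustrates the analog idea in its simplest form: missing pixels of a cloud-masked SST field are filled in from the past fields that best match the pixels that were observed, and drawing different analogs yields an ensemble whose spread reflects the uncertainty. The toy catalogue, field size and nearest-neighbor criterion are assumptions for illustration; the method in the article relies on more elaborate, locally calibrated statistical models.

```python
import numpy as np

def analog_reconstruct(obs, mask, catalogue, k=5, rng=None):
    """
    Analog-based gap filling for one SST field (illustrative sketch).
    obs:       (H, W) field with valid values only where mask is True
    mask:      (H, W) boolean, True where a satellite measurement exists
    catalogue: (N, H, W) archive of complete past fields (the analogs)
    Returns one stochastic reconstruction: observed pixels are kept, missing
    pixels are copied from one of the k past situations closest to the data.
    """
    rng = rng or np.random.default_rng()
    # Distance to each archived field, computed only on the observed pixels
    dist = np.sqrt(((catalogue[:, mask] - obs[mask]) ** 2).mean(axis=1))
    nearest = np.argsort(dist)[:k]
    chosen = catalogue[rng.choice(nearest)]     # stochastic pick among analogs
    return np.where(mask, obs, chosen)

# Toy example: a 20x20 field with roughly 70% of the pixels hidden by "clouds"
rng = np.random.default_rng(0)
archive = rng.normal(15.0, 2.0, size=(300, 20, 20))     # fake SST archive (deg C)
truth, catalogue = archive[0], archive[1:]              # held-out field to rebuild
mask = rng.random((20, 20)) > 0.7                       # only ~30% observed
ensemble = np.stack([analog_reconstruct(truth, mask, catalogue, rng=rng)
                     for _ in range(20)])
# The spread of `ensemble` over the missing pixels shows where the
# reconstruction is uncertain, while observed pixels are left untouched.
```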

“The proportion of ocean energy within distances under 100 km is very significant in the overall balance. At these scales, a lot of interaction takes place between physics and biology. For example, schools of fish and plankton structures form below the 100 km scale. Maintaining a small-scale level of detail also serves to measure the impact of physics on ecological processes,” explains Ronan Fablet.

 

The blue circle indicates the areas of missing data. The maps show the variations in SST at low resolution based on a model (left), at high resolution based on observations (center), and at high resolution based on the model proposed in the article (right).

 

New methods ahead using deep learning

Another modeling method has recently begun to emerge using deep learning techniques. The model designed using this method learns from photographs of the ocean. According to Ronan Fablet, this method is significant: “it incorporates the idea of analogy, in other words, it uses past data to find situations that are similar to the current context. The advantage lies in the ability to create a model based on many parameters that are calibrated by large learning data sets. It would be particularly helpful in reconstructing the missing high-resolution data from geophysical fields observed using remote sensing.”

 


[1] Journal of Selected Topics in Applied Earth Observations and Remote Sensing. An IEEE peer-reviewed journal.

Télécom ParisTech, Michèle Wigger, Starting Grant, ERC, 5G, communicating objects

Michèle Wigger: improving communications through coordination

Last September, Michèle Wigger was awarded a Starting Grant from the European Research Council (ERC). Each year, this distinction supports projects led by the best young researchers in Europe. It will enable Michèle Wigger to further develop the work she is conducting at Télécom ParisTech on information and communications theory. She is particularly interested in optimizing information exchanges through cooperation between communicating objects.

 

The European Commission’s objective regarding 5G is clear: the next-generation mobile network must be available in at least one major city in each Member State by 2020. However, the rapid expansion of 5G raises questions about network capacity. With this fifth-generation system, it is just a matter of time before our smartphones can handle virtual and augmented reality, videos in 4K quality and high-definition video games. It is therefore already necessary to start thinking about quality of service, particularly during peaks in data traffic, which must not degrade loading times for users.

Optimizing the communication of a variety of information is a crucial matter, especially for researchers, who are on the front lines of this challenge. At Télécom ParisTech, Michèle Wigger explores the theoretical aspects of information transmission. One of her research topics is focused on using storage space distributed throughout a network, for example, in various base stations or in terminals from internet access providers — “boxes”. “The idea is to put the data in these areas when traffic is low, during the night for example, so that they are more readily available to the user the next evening during the peaks in network use,” summarizes Michèle Wigger.

Statistical models have shown that it is possible to follow how a video spreads geographically, and therefore to anticipate, a few hours in advance, where it will be viewed. Michèle Wigger’s work would therefore enable smoother use of networks to prevent saturation. Yet she does not only work on the theoretical aspects of this flow-management method; her research also addresses the physical layer of the networks, in other words the construction of the modulated signals transmitted by antennas, with the aim of reducing bandwidth usage.

She adds that these cache-aided communications can go a step further, by using data that is stored not on our own boxes but on our neighbors’ boxes. “If I want to send a message to two people who are next to each other, it’s much more practical to distribute the information between them both, rather than repeat the same thing to each person,” she explains. To develop this aspect further, Michèle Wigger is exploring power modulations that enable different data to be sent, using only one signal, to two recipients — for example, neighbors — who can then collaborate to exchange the data. “Less bandwidth is therefore required to send the required data to both recipients,” she explains.
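A toy example of the underlying caching idea (and only of the idea: the physical-layer power-modulation scheme she describes is not shown here) is a coded broadcast with cached side information. If each neighbor already holds in its cache the part the other one wants, a single coded transmission serves both at once; the segment names and contents below are made up for illustration.

```python
# Coded broadcast with cached side information: one transmission, two recipients.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

part_for_alice = b"segment-A"   # wanted by Alice, already cached at Bob's box
part_for_bob   = b"segment-B"   # wanted by Bob, already cached at Alice's box

broadcast = xor_bytes(part_for_alice, part_for_bob)   # single coded signal

# Each recipient cancels what it already has to recover what it wants.
assert xor_bytes(broadcast, part_for_bob) == part_for_alice    # Alice decodes
assert xor_bytes(broadcast, part_for_alice) == part_for_bob    # Bob decodes
```

Sending the two segments separately would take two transmissions of the same size; the coded version needs only one, which is the bandwidth saving the quote alludes to.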

 

Improving coordination between connected objects for smart cities

Beyond optimizing communications using cache memories, Michèle Wigger’s research is more generally related to exchanging information between communicating agents. One of the other projects she is developing involves coordination between connected objects. Still focusing on the theoretical aspect, she uses the example of intelligent transportation to illustrate the work she is currently carrying out on the maximum level of coordination that can be established between two communicating entities. “Connected cars want to avoid accidents. In order to accomplish this, what they really want to do is to work together,” she explains.

Read on our blog: The autonomous car: safety hinging on a 25cm margin

However, in order to work together, these cars must exchange information using the available networks, which may depend on the technology used by manufacturers or on the environment where they are located. In short, the coordination to be established will not always be implemented in the same manner, since the available network will not always be of the same quality. “I am therefore trying to find the limits of the coordination that is possible based on whether I am working with a weak or even non-existent network, or with a very powerful network,” explains Michèle Wigger.

A somewhat similar issue exists for sensors connected to the internet of things and intended to assist decision-making. A typical example is buildings that are exposed to risks such as avalanches, earthquakes or tsunamis. Instruments measuring temperature, vibrations, noise and a variety of other parameters collect data that is sent to decision-making centers, which decide whether to issue a warning. Often, the information communicated is redundant, either because the sensors are close together or because their measurements are correlated.

In this case, it is important to differentiate the useful information from the repeated information, which does not add much value but still requires resources to be processed. “The goal is to coordinate the sensors so that they transmit the minimum amount of information with the smallest possible probability of error,” explains Michèle Wigger. The end goal is to facilitate the decision-making process.

 

Four focus areas, four PhD students

Her research was awarded a Starting Grant from the European Research Council (ERC) in September, both for its quality and for its promise. A grant of €1.5 million over a five-year period will enable Michèle Wigger to continue developing four areas of research, all related in one way or another to improving how information is shared in order to optimize communications.

Through the funding from the ERC, she plans to double the size of her team at the information processing and communication laboratory (a joint research unit of CNRS and Télécom ParisTech), which will expand to include four new PhD students and two post-doctoral researchers. She will therefore be able to assign each of these research areas to a PhD student. In addition to expanding her team, Michèle Wigger is planning to develop partnerships. For the first subject addressed here — communications assisted by cache memory — she plans to work with INSA Lyon’s Cortexlab platform, which would enable her to test the codes she has designed. Confronting her theory with experimental results will enable her to take her work further.

Godefroy Beauvallet, Innovation, Economics

Research and economic impacts: “intelligent together”

What connections currently exist between the world of academic research and the economic sphere? Does the boundary between applied research and fundamental research still have any meaning at a time when the very concept of collaboration is being reinterpreted? Godefroy Beauvallet, Director of Innovation at IMT and Vice Chairman of the National Digital Technology Council, offers some answers to these questions. During the Digital Technology Meetings of the French National Research Agency (ANR) on November 17, he awarded the Economic Impact Prize to the Trimaran project, which brought together Orange, Institut Paul-Langevin, Atos and Télécom Bretagne in a consortium that succeeded in creating a connection between two worlds often wrongly perceived as opposites.

 

 

When we talk about the economic impact of research, what exactly does this mean?

Godefroy Beauvallet: When we talk about economic impact, we’re referring to research that causes a “disruption”: work that transforms a sector by drastically improving a service or a product, or the productivity of their development. This type of research affects markets that potentially concern not just a handful of users but millions, and that therefore also directly impact our daily lives.

 

Has it now become necessary to incorporate this idea of economic impact into research?

GB: The role of research institutions is to explore realities and describe them. The economic impacts of their work can be an effective way of demonstrating they have correctly understood these realities. The impacts do not represent the compass, but rather a yardstick — one among others — for measuring whether our contribution to the understanding of the world has changed it or not. At IMT, this is one of our essential missions, since we are under the supervision of the Ministry of the Economy. Yet it does not replace fundamental research, because it is through a fundamental understanding of a field that we can succeed in impacting it economically. The Trimaran project, which was recognized alongside another project during the ANR Digital Technology Meetings, is a good example of this, as it brought together fundamental research on time reversal and issues of energy efficiency in telecommunication networks through the design of very sophisticated antennas.

 

So, for you, applied research and fundamental research do not represent two different worlds?

GB: If we only want a little economic impact, we will be drawn away from fundamental research, but obtaining major economic impacts requires a return to fundamental research, since high technological content involves a profound understanding of the phenomena that are at work. If the objective is to cause a “disruption”, then researchers must fully master the fundamentals, and even discover new ones. It is therefore necessary to pursue the dialectic in an environment where a constant tension exists between exploiting research to reap its medium-term benefits, and truly engaging in fundamental research.

“If the objective is to cause a disruption, then researchers must fully master the fundamentals”

And yet, when it comes to making connections with the economic sphere, some suspicion remains at times among the academic world.

GB: Everyone is talking about innovation these days. Which is wonderful; it shows that the world is now convinced that research is useful! We need to welcome this desire for interaction with a positive outlook, even when it causes disturbances, and without compromising the identity of researchers, who must not be expected to turn into engineers. This requires new forms of collaboration to be created that are suitable for both spheres. But failure to participate in this process would mean researchers having to accept an outside model being imposed on them. Yet researchers are in the best position to know how things should be done, which is precisely why they must become actively involved in these collaborations. So, yes, hesitations still exist. But only in areas where we have not succeeded in being sufficiently intelligent together.

 

Does succeeding in making a major economic impact, finding the disruption, necessarily involve a dialogue between the world of research and the corporate world?

GB: Yes, but what we refer to as “collaboration” or “dialogue” can take different forms. Like the crowdsourcing of innovation, it can provide multiple perspectives and more openness in facing the problems at hand. It is also a reflection of the start-up revolution the world has been experiencing, in which companies are created specifically to explore technology-market pairs. Large companies are also rethinking their leadership role by sustaining an ecosystem that redefines the boundary between what is inside and outside the company. Both spheres are seeking new ways of doing things that do not rely on becoming more alike, but rather on embracing their differences. They have access to tools that propose faster integration, with the idea that there are shortcuts available for working together more efficiently. In our field this translates into an overall transformation of the concept of collaboration, which characterizes this day and age — particularly due to the rise of digital technology.

 

From a practical perspective, these new ways of cooperating result in the creation of new working spaces, such as industrial research chairs, joint laboratories, or simply through projects carried out in partnership with companies. What do these working spaces contribute?

GB: Often, they provide the multi-company context. This is an essential element, since the technology that results from this collaboration is only effective, and only has an economic impact, if it is used by several companies and permeates an entire market. Companies, for their part, operate under short-term constraints, with annual or even quarterly targets. From this point of view, it is important for a company to work with actors who have a slower, more long-term tempo, to ensure that it will have a resilient long-term strategy. And these spaces work to build trust among the participants: the practices and interactions are tightly regulated, legally and culturally, which protects the researchers’ independence. This is the contribution of academic institutions, like Institut Mines-Télécom, and public research funding authorities, like the ANR, which provide the spaces and means for inventing collaborations that are fruitful and respectful of each other’s identity.

 

European, Chair on Values and Policies of Personal Information

The Internet of Things in the European Ecosystem

The Internet of Things is fast becoming a vast field of experimentation whose possibilities have yet to be fully exploited, thanks to major technological advances in the miniaturization of sensors and the speed of digital exchanges. Thanks also to services woven into our digitized daily lives, there will soon be dozens of these new objects in every European household.

 

The major issues arising from this situation are the focus, on November 25th, of Institut Mines-Télécom’s 12th meeting of the Chair on Values and Policies of Personal Information, organized (in English) in partnership with Contexte, a specialist in European politics.

The morning session will offer the opportunity to listen to four well-known players of the digital ecosystem who are involved in the issues and scope of connected objects on a European scale. They will be debating political, economic and industrial issues.

Godefroy Beauvallet, Director of Innovation for Institut Mines-Télécom, Vice-President of the French Digital Council (CNNum),
Thibaut Kleiner, Information Coordinator for the European Commission for Digital Economy and Society,
Robert Mac Dougall, President within the Alliance for Internet of Things Innovation (AIOTI),
Olivier Ezraty, expert in the Innovation sector and influential blogger.

The afternoon session will focus on two key themes examined from an academic point of view. First, the legal aspects of the Internet of Things, particularly in relation to the implementation of the new European General Data Protection Regulation (GDPR), which will come into effect in May 2018: what impact will it have on the design of objects, applications and uses of the Internet of Things? Then, the societal and philosophical aspects of this new human-machine environment, and its issues and implications on both an individual and a collective scale. How will the structure of our societies evolve? What are the advantages, and at what price?

With:
Yann Padova, Auditor at the French Energy Regulation Commission,
Denise Lebeau-Marianna, Head of Data Protection at Baker & McKenzie,
Bernard Benhamou, Secretary General for the Institut de la souveraineté numérique,
Rob van Kranenburg, founder of the Council and promoter of the Internet of Things.

Together with all the Chair on Values and Policies of Personal Information research teams.

 

[toggle title=”Meeting program” state=”close”]

9:00 – Reception

9:30 – Round table: ‘European Internet of Things Ecosystem‘

Who are the stakeholders and what are the issues of this new ecosystem? What are the possible directions on a European scale?

14:00 – Round table: ‘The Internet of Things and the implementation of the European General Data Protection Regulation (GDPR)’

The European General Data Protection Regulation (GDPR) will come into effect in May 2018. What will the main impacts be on the design of objects, applications and services?

15:15 – Round table: ‘Brave New IoT? Societal and ethical aspects of the new man-machine environments’

What will the implications of these technologies be on both an individual and collective level? How will the structure of our societies evolve? What are the advantages, and at what price?

16:15 – Finish[/toggle]

 

12th meeting of the Chair on Values and Policies of Personal Information
The Internet of Things in the European Ecosystem

Friday, November 25th, 2016
Télécom ParisTech, 46 rue Barrault, Paris 13e