Ocean Remote sensing, data, IMT Atlantique

Ocean remote sensing: solving the puzzle of missing data

The satellite measurements taken every day depend greatly on atmospheric conditions, the main cause of missing data. In a scientific publication, Ronan Fablet, a researcher at Télécom Bretagne, proposes a new method for reconstructing sea surface temperature in order to complete incomplete observations. The reconstructed data provide fine-scale maps with a homogeneous level of detail, which are essential for understanding many physical and biological phenomena.

 

What do a fish's migration through the ocean, a cyclone, and the Gulf Stream have in common? They can all be studied using satellite observations. It is a theme Ronan Fablet knows well. As a researcher at Télécom Bretagne, he is particularly interested in processing satellite data to characterize ocean dynamics. This research area covers several topics, including the reconstruction of incomplete observations. Missing data impair satellite observations and limit our representation of the ocean, its dynamics and its interactions, which are essential ingredients in fields ranging from marine biology to the ocean-atmosphere exchanges that directly influence the climate. In an article published in June 2016 in the IEEE J-STARS[1], Ronan Fablet proposed a new statistical interpolation approach to compensate for the lack of observations. Let's take a closer look at the data assimilation challenges in oceanography.

 

Temperature, salinity…: the oceans’ critical parameters

In oceanography, a geophysical field refers to one of the ocean's fundamental parameters: sea surface temperature (SST), salinity (the quantity of salt dissolved in the water), ocean color, which provides information on primary production (chlorophyll concentration), and altimetry (ocean surface topography).

Ronan Fablet's article focuses on the SST for several reasons. First of all, the SST is the most frequently measured parameter in oceanography. It benefits from high-resolution measurements: a relatively short distance, on the order of one kilometer, separates two observed points, unlike salinity measurements, whose resolution is much coarser (around 100 km between two measurement points). Surface temperature is also an input parameter often used in numerical models of ocean-atmosphere interactions. Many heat transfers take place between the two. One obvious example is cyclones, which are fed by pumping heat from the oceans' warmer regions. Furthermore, temperature is essential for identifying the major ocean structures: it allows surface currents to be mapped at fine scales.

But how can a satellite measure the sea surface temperature? Like any material, the ocean reacts differently depending on the wavelength used to observe it. "To study the SST, we can, for example, use an infrared sensor that first measures the energy. A law can then be used to convert this into a temperature," explains Ronan Fablet.
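
As an illustration of the kind of conversion law involved, the sketch below inverts Planck's law to turn an idealized infrared radiance into a brightness temperature. It is a simplified, hypothetical example: operational SST products also correct for atmospheric absorption (for instance with split-window algorithms), which is not shown here.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant (J.s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def brightness_temperature(radiance, wavelength):
    """Invert Planck's law: spectral radiance (W.m-2.sr-1.m-1) measured at a
    given wavelength (m) -> equivalent black-body temperature (K)."""
    return (H * C / (wavelength * K)) / math.log(
        1.0 + 2.0 * H * C ** 2 / (wavelength ** 5 * radiance)
    )

# A radiance of about 8.2e6 W.m-2.sr-1.m-1 in the 11 micron atmospheric
# window corresponds to a surface at roughly 290 K (about 17 degrees C).
print(brightness_temperature(8.2e6, 11e-6))
```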

 

Overcoming the problem of missing data in remote sensing

Unlike geostationary satellites, which orbit at the same speed as the Earth's rotation, non-geostationary satellites generally complete one orbit in a little over an hour and a half. This enables them to fly over several points on Earth in one day, so they build images by accumulating data. Yet some points in the ocean cannot be seen. The main cause of missing data is the sensitivity of satellite sensors to atmospheric conditions: in the case of infrared measurements, clouds block the observations. "In a predefined area, it is sometimes necessary to accumulate two weeks' worth of observations in order to have enough information to begin reconstructing the given field," explains Ronan Fablet. In addition, the heterogeneous nature of the cloud cover must be taken into account. "The rate of missing data in certain areas can be as high as 90%," he explains.

The lack of data is a true challenge. The modelers must find a compromise between the generic nature of the interpolation model and the complexity of its calculations. The problem is that the equations that characterize the movement of fluids, such as water, are not easy to process. This is why these models are often simplified.

 

A new interpolation approach

According to Ronan Fablet, the techniques currently in use do not take full advantage of the available information. The approach he proposes reaches beyond these limits: "we currently have access to 20 to 30 years of SST data. The idea is that among these samples we can find an implicit representation of ocean variability that can guide an interpolation. Based on this knowledge, we should be able to reconstruct the incomplete observations that we have today."

The general idea of Ronan Fablet's method is based on the principle of learning. If a situation observed today corresponds to a previous situation, it is possible to use the past observations to reconstruct the current data. It is an approach based on analogy.
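
To make the analogy principle concrete, here is a minimal sketch of gap filling by analogs: the missing pixels of today's field are estimated from the historical fields that best match the pixels we can see. The function name, the nearest-neighbor search and the synthetic data are illustrative assumptions; the method in the paper is more elaborate.

```python
import numpy as np

def analog_fill(current, archive, k=5):
    """Fill the missing pixels (NaN) of `current` by averaging the k past
    fields from `archive` that best match its observed pixels.
    current: (H, W) array with NaNs where data are missing.
    archive: (N, H, W) array of complete historical fields."""
    obs = ~np.isnan(current)
    # distance to each archived field, computed on observed pixels only
    diffs = archive[:, obs] - current[obs]
    dist = np.sqrt((diffs ** 2).mean(axis=1))
    analogs = archive[np.argsort(dist)[:k]]        # k closest past situations
    filled = current.copy()
    filled[~obs] = analogs[:, ~obs].mean(axis=0)   # borrow their values
    return filled

# toy usage with synthetic data
archive = np.random.randn(100, 32, 32)
field = archive[0] + 0.1 * np.random.randn(32, 32)
field[10:20, 10:20] = np.nan                       # simulated cloud mask
print(np.isnan(analog_fill(field, archive)).sum()) # 0: all gaps are filled
```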

 

Implementing the model

In his article, Ronan Fablet therefore used an analogy-based model. He characterized the SST based on a law that provides the best representation of its spatial variations. The law that was chosen provides the closest reflection of reality.

In his study, Ronan Fablet used low-resolution SST observations (100 km between two observations). With low-resolution data, optimal interpolation is usually favored. The goal is to minimize the reconstruction error (the difference between the simulated field and the observed field), at the expense of small-scale details: the image obtained through this process has a smooth appearance. For his interpolation, however, the researcher chose to preserve a high level of detail. The only remaining uncertainty is where a given detail is located on the map. This is why he opted for stochastic interpolation, a method that simulates several realizations, each placing the detail in a different location. Ultimately, this approach enabled him to create SST fields with the same level of detail throughout, under the constraint that the reconstruction error cannot, locally, be better than that of the optimal method.
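
The contrast between the two strategies can be sketched with a generic one-dimensional Gaussian-process example, assuming a squared-exponential covariance: the posterior mean plays the role of optimal interpolation (smooth), while conditional samples play the role of stochastic interpolations (detailed, but differing where data are missing). This is a textbook construction, not the model used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, y, scale=0.1):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / scale ** 2)

# sparse, noisy "observations" of a 1-D field
x_obs = rng.uniform(0, 1, 15)
y_obs = np.sin(8 * x_obs) + 0.05 * rng.standard_normal(15)
x_grid = np.linspace(0, 1, 200)

K_oo = rbf(x_obs, x_obs) + 0.05 ** 2 * np.eye(15)
K_go = rbf(x_grid, x_obs)
K_gg = rbf(x_grid, x_grid)

# Optimal interpolation: the posterior mean, smooth by construction
w = np.linalg.solve(K_oo, y_obs)
oi = K_go @ w

# Stochastic interpolation: draw several fields consistent with the
# observations; each keeps small-scale detail, placed differently
cov = K_gg - K_go @ np.linalg.solve(K_oo, K_go.T)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(200))
samples = oi[:, None] + L @ rng.standard_normal((200, 3))
```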

"The proportion of ocean energy within distances under 100 km is very significant in the overall balance. At these scales, a lot of interaction takes place between physics and biology. For example, schools of fish and plankton structures form below the 100 km scale. Maintaining a fine-scale level of detail also serves to measure the impact of physics on ecological processes," explains Ronan Fablet.

 

The blue circle indicates areas of missing data. The maps show the variations in SST at low resolution based on a model (left), at high resolution based on observations (center), and at high resolution based on the model presented in the article (right).

 

New methods ahead using deep learning

Another modeling method has recently begun to emerge using deep learning techniques. The model designed using this method learns from photographs of the ocean. According to Ronan Fablet, this method is significant: “it incorporates the idea of analogy, in other words, it uses past data to find situations that are similar to the current context. The advantage lies in the ability to create a model based on many parameters that are calibrated by large learning data sets. It would be particularly helpful in reconstructing the missing high-resolution data from geophysical fields observed using remote sensing.”

 


[1] Journal of Selected Topics in Applied Earth Observations and Remote Sensing. An IEEE peer-reviewed journal.

Télécom ParisTech, Michèle Wigger, Starting Grant, ERC, 5G, communicating objects

Michèle Wigger: improving communications through coordination

Last September, Michèle Wigger was awarded a Starting Grant from the European Research Council (ERC). Each year, this distinction supports projects led by the best young researchers in Europe. It will enable Michèle Wigger to further develop the work she is conducting at Télécom ParisTech on information and communications theory. She is particularly interested in optimizing information exchanges through cooperation between communicating objects.

 

The European Commission’s objective regarding 5G is clear: the next generation mobile network must be available in at least one major city in each Member state by 2020. However, the rapid expansion of 5G raises questions on network capacity levels. With this fifth-generation system, it is just a matter of time before our smartphones can handle virtual and augmented reality, videos in 4K quality and high definition video games. It is therefore already necessary to start thinking about the quality of service, particularly during peaks in data traffic, which should not hinder loading times for users.

Optimizing the communication of a variety of information is a crucial matter, especially for researchers, who are on the front lines of this challenge. At Télécom ParisTech, Michèle Wigger explores the theoretical aspects of information transmission. One of her research topics is focused on using storage space distributed throughout a network, for example, in various base stations or in terminals from internet access providers — “boxes”. “The idea is to put the data in these areas when traffic is low, during the night for example, so that they are more readily available to the user the next evening during the peaks in network use,” summarizes Michèle Wigger.

Statistical models have shown that it is possible to follow how a video spreads geographically, and therefore to anticipate, a few hours in advance, where it will be viewed. Michèle Wigger's work would therefore enable smoother use of networks to prevent saturation. Yet she is not only focused on the theoretical aspects behind this method for managing flows. Her research also addresses the physical layer of the networks, in other words, the construction of the modulated signals transmitted by antennas, so as to reduce bandwidth usage.

She adds that these cache-assisted communications can go a step further, by using data that is stored not on our own boxes, but on our neighbors' boxes. "If I want to send a message to two people who are next to each other, it's much more practical to distribute the information between them both, rather than repeat the same thing to each person," she explains. To further develop this aspect, Michèle Wigger is exploring power modulations that enable different data to be sent, using only one signal, to two recipients (for example, neighbors) who can then work together to exchange the data. "Less bandwidth is therefore required to send the required data to both recipients," she explains.
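
One well-known way of sending two messages in a single signal is superposition coding with successive decoding, sketched below on a toy two-user channel. The power split, noise levels and BPSK mapping are illustrative assumptions, not the schemes studied by Michèle Wigger.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
bits_far = rng.integers(0, 2, n)    # coarse layer, meant for the far receiver
bits_near = rng.integers(0, 2, n)   # fine layer, meant for the near receiver

# One transmitted signal: two BPSK layers superposed with unequal power
x = np.sqrt(0.8) * (2 * bits_far - 1) + np.sqrt(0.2) * (2 * bits_near - 1)

y_near = x + 0.1 * rng.standard_normal(n)   # good channel (nearby neighbor)
y_far = x + 0.6 * rng.standard_normal(n)    # noisier channel

# The far receiver decodes only the high-power layer
far_hat = (y_far > 0).astype(int)

# The near receiver decodes the high-power layer, subtracts it, then
# decodes its own low-power layer (successive decoding)
coarse_hat = (y_near > 0).astype(int)
residual = y_near - np.sqrt(0.8) * (2 * coarse_hat - 1)
near_hat = (residual > 0).astype(int)

print("far receiver BER :", np.mean(far_hat != bits_far))
print("near receiver BER:", np.mean(near_hat != bits_near))
```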

 

Improving coordination between connected objects for smart cities

Beyond optimizing communications using cache memories, Michèle Wigger’s research is more generally related to exchanging information between communicating agents. One of the other projects she is developing involves coordination between connected objects. Still focusing on the theoretical aspect, she uses the example of intelligent transportation to illustrate the work she is currently carrying out on the maximum level of coordination that can be established between two communicating entities. “Connected cars want to avoid accidents. In order to accomplish this, what they really want to do is to work together,” she explains.

Read on our blog The autonomous car: safety hinging on a 25cm margin

However, in order to work together, these cars must exchange information using the available networks, which may depend on the technology used by manufacturers or on the environment where they are located. In short, the coordination to be established will not always be implemented in the same manner, since the available network will not always be of the same quality. “I am therefore trying to find the limits of the coordination that is possible based on whether I am working with a weak or even non-existent network, or with a very powerful network,” explains Michèle Wigger.

A somewhat similar issue exists regarding sensors connected to the internet of things, aimed at assisting in decision-making. A typical example is buildings that are subject to risks such as avalanches, earthquakes or tsunamis. Instruments measuring the temperature, vibrations, noise and a variety of other parameters collect data that is sent to decision-making centers that decide whether to issue a warning. Often, the information that is communicated is linked, since the sensors are close together, or because the information is correlated.

In this case, it is important to differentiate the useful information from the repeated information, which does not add much value but still requires resources to be processed. “The goal is to coordinate the sensors so that they transmit the minimum amount of information with the smallest possible probability of error,” explains Michèle Wigger. The end goal is to facilitate the decision-making process.

 

Four focus areas, four PhD students

Her research was awarded a Starting Grant from the European Research Council (ERC) in September both for its promising nature and for its level of quality. A grant of €1.5 million over a five-year period will enable Michèle Wigger to continue to develop a total of four areas of research, all related in one way or another to improving how information is shared with the aim of optimizing communications.

Through the funding from the ERC, she plans to double the size of her team at the information processing and communication laboratory (UMR CNRS and Télécom ParisTech), which will expand to include four new PhD students and two post-doctoral students. She will therefore be able to assign each of these research areas to a PhD student. In addition to expanding her team, Michèle Wigger is planning to develop partnerships. For the first subject addressed here — that of communications assisted by cache memory — she plans to work with INSA Lyon’s Cortexlab platform. This would enable her to test the codes she has created. Testing her theory through experimental results will enable her to further develop her work.

European, Chair on Values and Policies of Personal Information

The Internet of Things in the European Ecosystem

The Internet of Things is fast becoming a vast field of experimentation, with possibilities yet to be fully exploited, thanks to major technological advances promoting the miniaturization of sensors and the speed of digital exchanges. It is also thanks to the services of our digitalized daily lives that there will soon be dozens of these new objects in every European household.

 

The major issues arising from this situation will be the focus, on November 25th, of the 12th meeting of Institut Mines-Télécom's Chair on Values and Policies of Personal Information, organized (in English) in partnership with Contexte, a specialist in European politics.

The morning session will offer the opportunity to listen to four well-known players of the digital ecosystem who are involved in the issues and scope of connected objects on a European scale. They will be debating political, economic and industrial issues.

Godefroy Beauvallet, Director of Innovation for Institut Mines-Télécom, Vice-President of the French Digital Council (CNNum),
Thibaut Kleiner, Information Coordinator for the European Commission for Digital Economy and Society,
Robert Mac Dougall, President within the Alliance for Internet of Things Innovation (AIOTI),
Olivier Ezraty, expert in the Innovation sector and influential blogger.

The afternoon session will focus on two key themes examined from an academic point of view. Firstly, the legal aspects of the Internet of Things, particularly in relation to the implementation of the new European General Data Protection Regulation (GDPR) which will come into effect in May 2018: what impact will this have on designing appliances, the application and the use of the Internet of Things? Next, the societal and philosophical aspects of this new human-machine environment and its issues and implications, on both an individual and collective scale. How will the structure of our societies evolve? What are the advantages, and at what price?

With:
Yann Padova, Auditor at the French Energy Regulation Commission,
Denise Lebeau-Marianna, Head of Data Protection at Baker & McKenzie,
Bernard Benhamou, Secretary General for the Institut de la souveraineté numérique,
Rob van Kranenburg, founder of the Council, promoter for the Internet of Things.

Together with all the Chair on Values and Policies of Personal Information research teams.

 

[toggle title=”Meeting program” state=”close”]

9:00 – Reception

9:30 – Round table: 'European Internet of Things Ecosystem'

Who are the stakeholders and what are the issues of this new ecosystem? What are the possible directions on a European scale?

14:00 – Round table: ‘The Internet of Things and the implementation of the European General Data Protection Regulation (GDPR)’

The European General Data Protection Regulation (GDPR) will come into effect in May 2018. What will the main impacts be on the design of objects, applications and services?

15:15 – Round table: ‘Brave New IoT? Societal and ethical aspects of the new man-machine environments’

What will the implications of these technologies be on both an individual and collective level? How will the structure of our societies evolve? What are the advantages, and at what price?

16:15 – Finish[/toggle]

 

12th meeting of the Chair on Values and Policies of Personal Information
The Internet of Things in the European Ecosystem

Friday, November 25th, 2016
Télécom ParisTech 46 rue Barrault, Paris 13e

Jean-Louis Dessalles, Claude Shannon, Simplicity theory

Simplicity theory: teaching relevance to artificial intelligences

The simplicity theory is founded on humans’ sensitivity to variations in complexity. Something that seems overly simple suddenly becomes interesting. This concept, which was developed by Jean-Louis Dessalles from Télécom ParisTech, challenges Shannon’s probabilistic method for describing certain information. Using this new approach, he can explain events that are otherwise inexplicable, such as creativity, decision-making, coincidence, or “if only I had…” thoughts. And all these concepts could someday be applied to artificial intelligences.

 

How can we pinpoint the factors that determine human interest? This seemingly philosophical question is one that Jean-Louis Dessalles, a researcher at Télécom ParisTech, addresses from the perspective of information theory. To mark the centenary of the birth of Claude Shannon, the founder of information theory, his successors will present their recent research at the Institut Henri Poincaré from October 26 to 28. On this occasion, Jean-Louis Dessalles will present his premise that the solution to an apparently complex problem can be found in simplicity.

Founded on the work of Claude Shannon

Claude Shannon defined information based on three principles: coding, surprise, and entropy. The latter has the effect of eliminating redundancy, an idea that Kolmogorov complexity generalizes. This measure of complexity corresponds to the size of the shortest description available to the observer. For example: a message is transmitted that is identical to one previously communicated. Its minimal description consists in stating that it is a copy of the previous message. The complexity therefore decreases, and the information is simpler.
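
Kolmogorov complexity itself is not computable, but a general-purpose compressor gives a rough, practical stand-in for it. The short sketch below (an illustration, not part of Dessalles' work) shows that sending the same message twice adds almost nothing to its shortest description.

```python
import os
import zlib

message = os.urandom(2000)        # an arbitrary, incompressible 2 kB message
repeated = message * 2            # the same message transmitted twice

c_once = len(zlib.compress(message, 9))
c_twice = len(zlib.compress(repeated, 9))

print(c_once)            # close to 2000 bytes: no shortcut exists
print(c_twice - c_once)  # only a handful of bytes: "a copy of the previous message"
```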

In his research, Jean-Louis Dessalles reuses Shannon’s premises, but, in his opinion, the probabilistic approach is not enough. By way of illustration, the researcher uses the example of the lottery. “Imagine the results from a lottery draw are: “1, 2, 3, 4, 5, 6”. From a probability perspective, there is nothing astonishing about this combination, because its probability is the same as any other combination. However, humans find this sensational, and see it as more astonishing than a random series of numbers in no particular order.” Yet Shannon stated: “what is improbable is interesting“. For Jean-Louis Dessalles, this presents a problem. According to Dessalles, probabilities are unable to represent human interest and the type of information a human being will consider.

The simplicity theory

Jean-Louis Dessalles offers a new cognitive approach that he calls the simplicity theory. This approach does not focus on the minimal description of information, but rather on discrepancies in information: the difference between what the observer expects and what he or she observes. This is how he redefines Shannon's concept of surprise. For a human observer, what is expected corresponds to a causal probability. In the lottery example, the observer expects to obtain a set of six numbers that are completely unrelated to each other. However, if the result is "1, 2, 3, 4, 5, 6", the observer recognizes a logical sequence, one with a very low Kolmogorov complexity. Between the expected combination and the observed combination, the description of the draw has therefore been greatly simplified, and understandably so, since we switch from six random numbers to an easily grasped sequence. The expected complexity of the six lottery numbers is much higher than that of the draw actually obtained. An event is considered surprising when it is perceived as being simpler to describe than it is to produce.
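
The lottery example can be made quantitative with a toy version of this idea: measure how many bits it takes to produce a draw versus how many it takes to describe it, and call the difference the unexpectedness. The restricted description language (arithmetic progressions or an explicit listing) is an assumption made for illustration; it is not Dessalles' full formalism.

```python
import math

N_BALLS, K = 49, 6

def generation_complexity():
    # Producing any specific draw means picking one combination out of
    # C(49, 6): its generation cost is log2 of that count, ~23.7 bits.
    return math.log2(math.comb(N_BALLS, K))

def description_complexity(draw):
    draw = sorted(draw)
    steps = {b - a for a, b in zip(draw, draw[1:])}
    if len(steps) == 1:
        # arithmetic progression: describing the start and the step is enough
        return 2 * math.log2(N_BALLS)
    # no shortcut found: fall back on enumerating the combination itself
    return math.log2(math.comb(N_BALLS, K))

def unexpectedness(draw):
    # Simplicity theory: surprise = cost to generate - cost to describe
    return generation_complexity() - description_complexity(draw)

print(unexpectedness([1, 2, 3, 4, 5, 6]))       # ~12.5 bits: striking
print(unexpectedness([3, 17, 22, 34, 41, 48]))  # 0 bits: unremarkable
```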

The simplicity theory was originally developed to account for what people see as interesting in language. The concept of relevance is particularly important here because this word refers to all the elements in information that are worthy of interest, which is something humans can detect very easily. Human interest is made up of two different components: the unique aspect of a situation and the emotion linked to the information. When the simplicity theory is applied, it can help to detect the relevance of news in the media, the subjectivity surrounding an event, or the associated emotional intensity. This emotional reaction depends on the spatio-temporal factor. “An accident that occurs two blocks away from your house will have a greater impact than one that occurs four blocks away, or fifteen blocks, etc. The proximity of a place is characterized by the simplification of the number of bits in its description. The closer it is, the simpler it is,” the researcher explains. And Jean-Louis Dessalles has plenty of examples to illustrate simplicity! Especially since each scenario can be studied in retrospect to better identify what is or is not relevant. This is the very strength of this theory; it characterizes what moves away from the norm and is not usually studied.

Read the  blog post Artificial Intelligence: learning driven by childlike curiosity

Research for artificial intelligence

The ultimate goal of Jean-Louis Dessalles’ theory is to enable a machine to determine what is relevant without explaining it to the machine ahead of time. “AI currently fail in this area. Providing them with this ability would enable them to determine when they must compress the information,” Jean-Louis Dessalles explains. Today these artificial intelligences use a description of statistical relevance, which is often synonymous with importance, but is far removed from relevance as perceived by humans. “AI, which are based on the principle of statistical Machine Learning, are unable to identify the distinctive aspect of an event that creates its interest, because the process eliminates all the aspects that are out of the ordinary,” the researcher explains. The simplicity theory, on the other hand, is able to characterize any event, to such an extent that the theory currently seems limitless. The researcher recommends that relevance be learned as it is learned naturally by children. And beyond the idea of interest, this theory encompasses the computability of creativity, regrets, and the decision-making process. These are all concepts which will be of interest for future artificial intelligence programs.

Read the blog post What is Machine Learning?

[box type=”shadow” align=”” class=”” width=””]

Claude Shannon, code wizard

To celebrate the centenary of Claude Shannon’s birth, the Institut Henri Poincaré is organizing a conference dedicated to the information theory, from October 26 to 28. The conference will explore the mathematician’s legacy in current research, and areas that are taking shape in the field he created. Institut Mines-Télécom, a partner of the event, along with CNRS, UPMC and Labex Carmin, will participate through presentations from four of its researchers: Olivier Rioul, Vincent Gripon, Jean-Louis Dessalles and Gérard Battail.

To find out more about Shannon’s life and work, CNRS has created a website that recounts his journey.[/box]


City4age, the elderly-friendly H2020 project

In the framework of the European research program H2020, Institut Mines-Télécom is taking part in the City4age project, which aims to offer a smart city model adapted to the elderly. Through non-intrusive technologies, the goal is to improve their quality of life and to facilitate the work of health services. Mounir Mokhtari, researcher and director of IPAL[1], contributes to the project in the test city of Singapore. Below is an interview the researcher gave to lepetitjournal.com, a French media outlet for the French community abroad.

 

Mounir Mokhtari, Director of the IPAL

 

LePetitJournal.com : What are the research areas and stakes involved in "City4age"?

Mounir Mokhtari : Today, in Europe as in Singapore, the population is ageing and the number of dependent elderly persons is rising sharply, even as the skilled labour force that can look after these people has decreased significantly. The response to this issue is often institutionalisation. Our objective is to maintain the autonomy of this group of people at home and in the city, and to improve their quality of life and that of their caregivers (family, friends, etc.) through the integration of non-intrusive, easy-to-use technologies into daily life.

It involves the development of technological systems that motivate elderly persons in frail health to stay more active, to reinforce social ties and to prevent risks. The objective is to install non-intrusive sensors, information systems and communication devices in today's homes, and to create simple user interfaces with everyday objects such as smartphones, TV screens and tablets, to assist dependent people in their daily living.

 

LPJ : What are the principal challenges in the research?

MM : The first challenge is to identify the normal behaviour of the person, to know his or her habits, and to be able to detect changes that may be related to a decline in cognitive or motor skills. This involves collecting the extensive information available through connected objects and lifestyle habits, which we use to define a "user profile".

Then the data obtained is interpreted and a service provided to the person.  Our objective is not to monitor people but to identify exact areas of interest (leisure, shopping, exercise) and to encourage the person to attend such activities to avoid isolation which could result in the deterioration of his / her quality of life or even health.

For this, we use decision-support and machine learning tools, such as Machine Learning or the Semantic Web. It's the same principle, if you like, that Google uses to suggest appropriate search results (graph theory), with an additional difficulty in our case related to the human factor. It is all about getting machines, which reason logically, to interpret behavioural data that is inherently subjective. But this is also where the interest of this project lies, besides the strong societal issue. We work with doctors, psychologists, ergonomists, physiotherapists and occupational therapists, social science specialists, etc.

 

LPJ : Can you give us a few simple examples of such an implementation ?

MM : To assist in the maintaining of social ties and activity levels, let’s take the example of an elderly person who has the habit of going to his / her Community Centre and of taking his / her meals at the hawker centre.   If the system detects that this person has reduced his / her outings outside of home, it will generate a prompt to the person to encourage him / her to get out of the home again, for example, “your friends are now at the hawker centre and they are going to eat, you should join them”.  The system can also simultaneously notify the friends on their mobiles that the person has not been out for a long time and to suggest that they visit him/ her for example.

Concerning the elderly who suffer from cognitive impairment, we work on the key affected functions, which are simple daily activities such as sleeping, hygiene and eating, as well as the risk of falls. For example, we install motion sensors in rooms to detect possible falls. We equip beds with optical fibre sensors to monitor the person's breathing and heart rate and spot potential sleep problems, apnea or cardiac risks, without disturbing the person's rest.

 

LPJ : An application in Singapore ?

MM : Our research is highly applied, with a strong industry focus and a view to a quick deployment to the end-user.  The solutions developed in the laboratory are proven in a showflat, then in clinical tests.  At the moment, we are carrying out tests at the Khoo Teck Puat hospital to validate our non-intrusive sleep management solutions.

Six pilot sites were chosen to validate in situ the deployment of City4age, including Singapore for testing the maintenance of social ties and activity levels of the elderly, via the Community Centres in HDB neighbourhoods.  The target is a group of around 20 people aged 70 and above, fragile and suffering from mild cognitive impairment, who are integrated in a community – more often in a Senior Activity Centre.  The test also involves the volunteers who help these elderly persons in their community.

 

LPJ : What is your background in Singapore?

MM : My research concentrated mainly on the area of technology that could be used to assist dependent people.  I came to Singapore for the first time in 2004 for the International Conference On Smart Homes and Health Telematics or ICOST which I organised.

I then discovered a scientific ecosystem that I was not aware of (at that period, the focus was turned towards the USA and some European cities).  I was pleasantly surprised by the dynamism, the infrastructure in place and the building of new structures at a frantic pace, and above all, by a country that is very active in the research area of new technologies.

I have kept up exchanges with Singapore since then and finally decided to join the IPAL laboratory, to which I have been seconded by Institut Mines-Télécom since 2009. I took over the direction of IPAL in 2015 to develop this research.

 

LPJ : What is your view of the MERLION programme?

MM : The PHC MERLION programme is very relevant and attractive for the creation of new teams. French diplomacy and MERLION provided undeniable leverage in the launch of projects and in the consolidation of collaborations with our partners.

The programme brings a framework that creates opportunities and encourages exchanges between researchers and international conference participants and also contributes to the emergence of new collaborations.

Without the MERLION programme, for example, we would not have been able to create the symposium SINFRA (Singapore-French Symposium) in 2009, which has become a biennial event for the laboratory IPAL.  In addition, the theme of « Inclusive Smart Cities and Digital Health » was initiated into IPAL thanks to a MERLION project which was headed by Dr. Dong Jin Song who is today the co-director of IPAL for NUS.

Other than the diplomatic and financial support, the Embassy also participates in IPAL’s activities through making available one of its staff members on a part-time basis, who is integrated into the project team (at IPAL).

 

LPJ : Do you have any upcoming collaborations?

MM : We are planning a new collaboration between IPAL and the University of Bordeaux – which specialises in social sciences – for a behavioural study to help us in our current research.  We are thinking of applying for a new MERLION project in order to kickstart this new collaboration.  It is true that the Social Sciences aspect, despite its importance in the well-being of the elderly and their entourage, is not very well-developed in the laboratory. This PHC MERLION proposal may well have the same leverage as the previous one.

Beyond the European project City4Age, IPAL has just signed a research collaboration agreement with PSA Peugeot-Citroën on mobility in the city and well-being, with a focus on the management of chronic diseases such as diabetes and respiratory illnesses. There is also an ongoing NRF (National Research Foundation) project with NUS (National University of Singapore), led by Dr. Nizar Quarti, a member of IPAL, on mobile and visual robotics.

Interview by Cécile Brosolo (www.lepetitjournal.com/singapour) and translation by Institut Français de Singapour, Ambassade de France à Singapour.

[1] IPAL : Image & Pervasive Access Lab – CNRS’s UMI based in Singapore.

Claude Shannon, a legacy transcending digital technology

Claude Shannon, a major scientist from the second half of the 20th century, marked his era with his communication theory. His work triggered a digital metamorphosis that today affects all levels of our societies. To celebrate what would have been Shannon’s 100th birthday this year, the Institut Henri Poincaré will pay tribute to the scientist with a conference on October 26 to 28. At this event, Olivier Rioul, a researcher at Télécom ParisTech, will provide insight into the identity of this pioneer in the communication field, and will present part of the legacy he left behind.

 

 

Claude Elwood Shannon. The name is not well known by the general public. And yet, if the digital revolution the world is experiencing today had a father, it would doubtless be this man, born in 1916 in a small town in Michigan. His life, which ended in 2001, received little media coverage. Unlike Alan Turing, no Hollywood blockbusters have been dedicated to him. Nor has his identity been mythicized by artistic circles, as was the case for Einstein. “Shannon led an ordinary life, and perhaps that is why nobody talks about him,” observes Olivier Rioul, researcher in digital communications at Télécom ParisTech.

Though his life was not particularly extraordinary, Claude Shannon's work, on the other hand, was thrilling in many ways. In 1948, he wrote an article entitled A mathematical theory of communication. "Its publication came as a revolution in the scientific world," explains Olivier Rioul. In this article, Claude Shannon introduced the concept of the bit of information. He also outlined – for the first time – a schematic diagram of a communication channel, which included all the active parts involved in transmitting a signal, from its source to its destination.

Claude Shannon, Communication Theory, Olivier Rioul

First schematic diagram of a communication system, published by Claude Shannon in 1948. He explained that a channel could be “a coaxial cable, a band of radio frequencies, a beam of light, etc.”

 

Shannon and his magic formula

Yet in addition to his channel diagram, it was above all a formula published in 1948 that went on to mark the scientific community: C = W log2(1 + SNR). With this mathematical expression, Shannon defined the maximum capacity of a transmission channel, in other words, the quantity of information that can be transmitted reliably. It shows that this capacity depends solely on the channel's bandwidth W and on the ratio between the power of the transmitted signal and that of the noise in the channel (the SNR). Based on this result, every channel has a throughput limit below which the message can be transmitted from the transmitter to the receiver without being altered.
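
As a worked example of the formula (a standard textbook calculation, not taken from the article): a telephone channel of roughly 3.1 kHz bandwidth with a signal-to-noise ratio of 30 dB has a capacity of about 31 kbit/s.

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    # Shannon's formula: C = W * log2(1 + SNR), with the SNR in linear scale
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(channel_capacity(3100, 30))  # ~30,898 bit/s for a telephone line
```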

"Shannon's strength lies in having succeeded in obtaining this result in a theoretical way," insists Olivier Rioul. "Shannon did not provide the solution required to reach this limit, but showed that it exists, for all channels." It would not be until 43 years later, with the work of Berrou, Glavieux and Thitimajshima in 1991, that Shannon's limit would be nearly reached for the first time, with the development of turbo codes.

Olivier Rioul believes the story behind this formula is out of the ordinary, and has been the subject of many historical approximations. “And the time was ripe. In 1948 – the year in which Claude Shannon made his results public – seven other scientists published similar formulas,” he explains, based on research carried out with José Carlos Magossi on the history of this formula.

However, the results obtained by Shannon’s peers were sometimes inaccurate and sometimes inspired by Shannon’s prior work, and therefore not very original. And all of them were part of the same environment, were in contact with each other or participated in the same conferences. All except Jacques Laplume, a French engineer who obtained a correct formula similar to Shannon’s at almost the same time. Yet what he lacked and what kept him from leaving his mark on history was the enormous contribution of the rest of Shannon’s theory.

Read the blog post What are turbo codes?

A pioneer in communications, but that’s not all…

While his work represents the beginnings of modern digital communications, Claude Shannon also left behind a much greater legacy. In 1954, behavioral psychologist Paul Fitts published the law that now bears his name, which is used to model human movements. In his scientific article, he explicitly cited Shannon's theorem, referring to his channel capacity formula. "Today we use Fitts' formula to study human-computer interactions," explains Olivier Rioul, who worked with a PhD student on reconciling this law with Shannon's theory.
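
For reference, Fitts' law is commonly written in its so-called Shannon formulation, which mirrors the capacity formula; the sketch below uses illustrative values for the empirically fitted constants a and b.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law, 'Shannon formulation': MT = a + b * log2(D/W + 1).
    The log term is the index of difficulty in bits; a and b are constants
    fitted on experimental data (the values here are placeholders)."""
    return a + b * math.log2(distance / width + 1)

print(fitts_movement_time(20, 2))    # a large, close target: easy, fast
print(fitts_movement_time(40, 0.5))  # a small, distant target: slower
```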

The scope of Shannon’s work therefore far exceeds the realms of information and communication theory. As a lover of games, he developed one of the first machines for playing chess. He was also one of the pioneers of artificial intelligence and machine learning, with his demonstration in 1950 of a robotic mouse that could find its way through a labyrinth and remember the optimal route.

Although Shannon’s life was not necessarily extraordinary in the literal sense, he was undeniably an extraordinary man. As for his lack of fame – which the centenary celebration of his birth seeks to remedy – he himself had said, referring to his information theory, “In the beginning I didn’t think it would have an enormous impact. I enjoyed working on these problems, just like I enjoyed working on lots of other problems, without any ulterior motives for money or fame. And I think that a lot of other scientists have this same approach, they work because of their love of the game.”

 

[box type=”shadow” align=”” class=”” width=””]

Claude Shannon, code wizard

To celebrate the centenary of Claude Shannon’s birth, the Institut Henri Poincaré is organizing a conference dedicated to the information theory, from October 26 to 28. The conference will explore the mathematician’s legacy in current research, and areas that are taking shape in the field he created. Institut Mines-Télécom, a partner of the event, along with CNRS, UPMC and Labex Carmin, will participate through presentations from four of its researchers: Olivier Rioul, Vincent Gripon, Jean-Louis Dessalles and Gérard Battail.

To find out more about Shannon’s life and work, CNRS has created a website that recounts his journey.[/box]


Sea Tech Week, René Garello, Océan connecté

Sea Tech Week: Key issues of a connected ocean

The sea is becoming increasingly connected, with the development of new real-time transmission sensors. The aggregated data is being used to improve our understanding of the role oceans play in climate issues, but several challenges must be considered: the development of autonomous sensors and the pooling of research on a global level. This was the subject of Sea Tech Week, which took place in Brest from October 10 to 14, bringing together international experts from different disciplines relating to the sea.

 

From renewable marine energies and natural resources to tourism… The sea has many uses – we swim in it, travel on it and exploit all it has to offer. By 2030, the ocean economy (or blue economy) is expected to create great wealth and many jobs. In the meantime, before we reach that distant point in the future, Télécom Bretagne is combining expertise in oceanography with information and communication technology in order to further research in this field.

A global topic

Although the subject was left out of climate conferences up until 2015, the ocean is constantly interacting with the environments around it and is at the very heart of climate change. It is currently the largest carbon sink in existence, and the CO2 it absorbs is causing its acidification, with irreparable consequences for marine fauna and flora.

In this current context, there is an increasing need for measurements. The aim is to obtain an overview of the ocean's effects on the environment and vice versa. All different types of parameters must be studied to obtain this global view: surface temperature, salinity, pressure, biological and chemical variables, and the influence of human activities, such as maritime traffic. In a presentation delivered on the first morning of Sea Tech Week, René Garello, a researcher at Télécom Bretagne, explained the challenges involved in integrating all this new data.

A connected ocean for shared research

The study of the marine world is not immune to recent trends: it must be connected. The goal is to use technological resources to allow large volumes of data to be transmitted, in particular by developing suitable coding. This involves adapting aspects of connected-object technology to the marine environment.

One challenge involved in the connected ocean field is the development of sophisticated and more efficient sensors to improve in-situ observation techniques. René Garello refers to them as smart sensors. Whether they are used to examine surface currents, or acoustic phenomena, these sensors must be able to transmit data quickly, be autonomous, and communicate with each other.

However, communication is necessary for more than just the sensors. Scientific communities also have their part to play. “On the one hand, we make measurements, and on the other we make models. The question is whether or not what is carried out in a given context is pooled with other measurements carried out elsewhere, allowing it to be integrated to serve the same purpose,” explains René Garello.

Another challenge is therefore to prevent the fragmentation of research which would benefit from being correlated. The goal is to pool both data and stakeholders by bringing together chemical oceanographers and physical oceanographers, modelers and experimenters, with the ultimate aim of better orchestrating global research.

A parallel concern: Big Data

"Currently, only 2% of the data is used. We are forced to subsample the data, which means we are less efficient," observes René Garello. The need to collect as much material as possible is counterbalanced by the human capacity to analyze it in its entirety. In addition, the data must be stored and processed in different ways. According to René Garello, future research must proceed with restraint: "Big Data leads to a paradox, because the goal of the research is to decrease data size so users receive a maximum amount of information in a minimum amount of space." Smart sensors can strike a balance between data compression and Big Data by filtering at the input, rather than collecting all the data, so that work can be carried out on a human scale.

Towards standardization procedures

Not many standards currently exist in the marine sphere. The question of data integrity, and of how well the data represent reality, is the last major issue. Satellite sensors are already properly codified, since their measurements are carried out in an environment in which the measurement conditions are stable, unlike in-situ sensors, which can be dragged away by drifting objects and buoys. In this context of mobile resources, the sample must be proven reliable through the prior calibration of the measurement. Research can help to improve this notion of standards.

However, basic research alone is not sufficient. The future also requires links to be forged between science, technology and industry. In a report published in April 2016, the OECD foresees the creation of many ocean-related industries (transport, fishing, marine biotechnology, etc.). How will current research help this blue economy to take shape? From the local context in Brest, to European research programs such as AtlantOS, these issues clearly exist within the same context: everything is interconnected.

 

[box type=”shadow” align=”” class=”” width=””]

Sea Tech Week 2016

Sea Tech Week : A week dedicated to marine sciences and technology

Every two years in Brest, workshops and trade shows are organized in relation to sea-related disciplines. The week is organized by the Brest metropolitan area with support from several research and corporate partners. In 2014, over 1,000 participants arrived in Brittany for this event, 40% of whom were international visitors. In 2016, the event focused on digital technology, in connection with the French Tech label, and addressed the following topics: observation, robotics, modeling, sensors… via 18 conferences led by experts from around the world. Find out more  [/box]

Mai Nguyen, Artificial Intelligence, Children's curiosity

Artificial Intelligence: learning driven by children’s curiosity

At Télécom Bretagne, Mai Nguyen is drawing on the way children learn in order to develop a new form of artificial intelligence. She is hoping to develop robots capable of adapting to their environment by imitating the curiosity humans have at the start of their lives.

 

During the first years of life, humans develop at an extremely fast pace. “Between zero and six years of age, a child learns to talk, walk, draw and communicate…” explains Mai Nguyen, a researcher at Télécom Bretagne. This scientist is trying to better understand this rapid development, and reproduce it using algorithms. From this position at the interface between cognitive science and programming, Mai Nguyen is seeking to give robots a new type of artificial intelligence.

During the earliest stages of development, children do not go to school, and their parents are not constantly sharing their knowledge with them. According to the researcher, while learning does occur sporadically in an instructive manner — through the vertical flow of information from educator to learner — it primarily takes place as children explore their environment, driven by their own curiosity. “The work of psychologists has shown that children themselves choose the activities through which they increase their capacities the most, often through games, by trying a wide variety of things.”

 

A mix of all learning techniques

This method of acquiring skills could be seen as progressing through trial and error. “Trial and error situations usually involve a technique that is adopted in order to acquire a specific skill requested by a third party,” she explains. However, Mai Nguyen points out that learning through curiosity goes beyond this situation, enabling the acquisition of several skills, with the child learning without specific instructions.

In cognitive sciences and robotics, the technical name for this curiosity is intrinsic motivation. Mai Nguyen utilizes this motivation to program robots capable of independently deciding how they should acquire a set of skills. Thanks to the researcher’s algorithms, “the robot chooses what it must learn, and decides how to do it,” she explains. It will therefore be able to identify—on its own— an appropriate human or machine contact from whom it can seek advice.

Likewise, it will decide on its own if it should acquire a skill through trial and error, or if it would be better to learn from a knowledgeable human. “Learning by intrinsic motivation is in fact the catalyst for existing methods,” explains Mai Nguyen.
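
A common way to formalize this "catalyst" in the literature on intrinsic motivation is to let the agent favor whatever task its error is currently dropping fastest on, its learning progress. The sketch below is a toy illustration of that idea, with made-up task names and error curves; it is not Mai Nguyen's actual algorithm.

```python
# Toy curiosity loop: pick the task with the highest recent learning progress.
errors = {"reach": [1.0], "grasp": [1.0], "stack": [1.0]}
decay = {"reach": 0.95, "grasp": 0.80, "stack": 0.99}   # hidden difficulty

def learning_progress(history, window=3):
    if len(history) < window + 1:
        return float("inf")          # always try tasks we know nothing about
    return history[-window - 1] - history[-1]

for step in range(50):
    task = max(errors, key=lambda t: learning_progress(errors[t]))
    # practising a task reduces its error (a stand-in for real learning)
    errors[task].append(errors[task][-1] * decay[task])

print({t: len(h) - 1 for t, h in errors.items()})  # practice counts per task
```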

 

Robots better adapted to their environment

There are several advantages to copying a child’s progress in early development and applying the same knowledge and skill acquisition mechanisms to robots. “The idea is to program an object that is constantly learning by adapting to its environment,” explains the researcher. This approach is a departure from the conventional approach in which the robot leaves the factory in a completed state, with defined skills that will remain unchanged for its entire existence.

In Mai Nguyen's view, this second approach has many limitations, particularly the variability of the environment: "The robot can learn, with supervision, to recognize a table and chairs, but in a real home, these objects are constantly being moved and deteriorating… How can we ensure it will be able to identify them without making mistakes?" Learning through intrinsic motivation, however, enables the robot to adapt to an unknown situation and customize its knowledge based on the environment.

The variability is not only related to space; it can also include time. A human user’s demands on the robot are not the same as they were ten years ago. There is no reason to believe they will be the same ten years from now. An adaptive robot therefore has a longer lifespan vis-à-vis human societal changes than a pre-defined object.

 

Mai Nguyen, AI, children

Learning through curiosity allows Mai Nguyen and her colleagues to develop robots capable of learning tasks in a hierarchical fashion.

Data sana in robot sano

It seems difficult for supervised automatic learning to compete with artificial intelligence led by intrinsic motivation. Mai Nguyen reports on recent experiments involving the replacement of faulty robots developed with automatic learning: “When an initial robot ceased to be operational, we took all of its data and transferred it to an exact copy.” This resulted in a second robot that did not work well either.

This phenomenon can be explained by the concept of embodiment in robots developed using automatic learning: each body is linked to the data acquired during its conditioning procedure. This is a problem that "curious" robots do not face, since they acquire data in an intelligent manner, collecting data that is customized to their body and environment, while at the same time limiting the data's volume and acquisition time.

 

When will curious robots become a part of our daily lives?

Artificial intelligence that adapts to environments to this extent is especially promising in its potential for home care services and improving the quality of life. Mai Nguyen and her colleagues at Télécom Bretagne are working on many of these applications. Such robots could become precious helpers for the elderly and for disabled individuals. Their development is still relatively recent from a scientific and technological point of view. Although the first psychological theories on intrinsic motivation date back to the 1960s, their transposition to robots has only begun to take place over the past fifteen years.

The scientific community working on the issue has already obtained conclusive results. By having robots with artificial intelligence interact, scientists observed that they were able to develop common languages. While these languages were based on a different vocabulary in each trial, they always enabled the robots to converge on situations in which the artificial intelligences could communicate with each other, after starting from scratch. It's a little like imagining humans from different cultures and languages ending up together on a desert island.

 

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published in June 2016, is dedicated to artificial intelligence (AI). It presents an overview of the research underway in this area throughout the world, and presents the vast amount of existing research underway at Institut Mines-Télécom schools. This 27-page brochure defines intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions the emerging potential, and looks at how it can be used by humans.

[/box]

 

Vincent Gripon, IMT Atlantique, Information science, Artificial Intelligence, AI

When information science assists artificial intelligence

The brain, information science, and artificial intelligence: Vincent Gripon is focusing his research at Télécom Bretagne on these three areas. By developing models that explain how our cortex stores information, he intends to inspire new methods of unsupervised learning. On October 4, he will be presenting his research on the renewal of artificial intelligence at a conference organized by the French Academy of Sciences. Here, he gives us a brief overview of his theory and its potential applications.

 

Since many of the developments in artificial intelligence are based on progress made in statistical learning, what can be gained from studying memory and how information is stored in the brain?

Vincent Gripon: There are two approaches to artificial intelligence. The first approach is based on mathematics. It involves formalizing a problem from the perspective of probabilities and statistics, and describing the objects or behaviors to be understood as random variables. The second approach is bio-inspired. It involves copying the brain, based on the principle that it represents the only example of intelligence that can be used for inspiration. The brain is not only capable of making calculations, it is above all able to index, search for, and compare information. There is no doubt that these abilities play a fundamental role in every cognitive task, including the most basic.

 

In light of your research, do you believe that the second approach has greater chances of producing results?

VG: After the Dartmouth conference of 1956 – the moment at which artificial intelligence emerged as a research field – the emphasis was placed on the mathematical approach. 60 years later, there are mixed results, and many problems that could be solved by a 10-year-old child cannot be solved using machines. This partly explains the period of stagnation experienced by the discipline in the 1990s. The revival we have seen over the past few years can be explained by the renewed interest in the artificial neural networks approach, particularly due to the rapid increase in available computing power. The pragmatic response is to favor these methods, in light of the outstanding performance achieved using neuro-inspired methods, which comes close to and sometimes even surpasses that of the human cortex.

 

How can information theory – usually more focused on error correcting codes in telecommunications than neurosciences – help you imitate an intelligent machine?

VG: When you think about it, the brain is capable of storing information, sometimes for several years, despite the constant difficulties it must face: the loss of neurons and connections, extremely noisy communication, etc. Digital systems face very similar problems, due to the miniaturization and multiplication of components. The information theory proposes a paradigm that addresses the problems related to information storage and transfer, which applies to any system, biological or otherwise. The concept of information is also indispensable for any form of intelligence, in addition to calculation, which very often receives greater attention.

 

What model do you base your work on?

VG: As an information theorist, I start from the premise that robust information is redundant information. Applying this principle to the brain, we see that information is stored by several neurons, or several micro-columns (clusters of neurons), following the model of a distributed error-correcting code. One model that offers outstanding robustness is the clique model. A clique is made up of several micro-columns, at least four, which are all interconnected. The advantage of this model is that even when one connection is lost, the micro-columns can all still communicate: a distinctive redundancy property. Also, two micro-columns can belong to several cliques. Therefore, every connection supports several items of information, and every item of information is supported by several connections. This dual property ensures the mechanism's robustness and its great storage diversity.
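
To give an idea of how storage by cliques works in practice, here is a minimal sketch of a clique-based associative memory: each message picks one micro-column per cluster, the chosen units are fully interconnected, and an erased part of the message is recovered by looking for the units most connected to the remaining cue. The network size and the retrieval rule are simplified assumptions, not the parameters of the published model.

```python
import numpy as np

# C clusters of L micro-columns; a binary connection matrix between units
C, L = 4, 16
W = np.zeros((C * L, C * L), dtype=bool)

def unit(cluster, value):
    return cluster * L + value

def store(message):                         # message: one value per cluster
    units = [unit(c, v) for c, v in enumerate(message)]
    for i in units:
        for j in units:
            if i != j:
                W[i, j] = True              # fully interconnect: a clique

def retrieve(partial):                      # None marks an erased cluster
    known = [unit(c, v) for c, v in enumerate(partial) if v is not None]
    out = []
    for c, v in enumerate(partial):
        if v is not None:
            out.append(v)
            continue
        scores = [W[unit(c, x), known].sum() for x in range(L)]
        out.append(int(np.argmax(scores)))  # unit most connected to the cue
    return out

store([3, 7, 1, 12])
print(retrieve([3, None, 1, None]))         # -> [3, 7, 1, 12]
```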

“Robust information is redundant”

How has this theory been received by the community of brain specialists?

VG: It is very difficult to build bridges between the medical field and the mathematical or computer science field. We do not have the same vocabulary or the same aims. For example, it is not easy to get our neuroscientist colleagues to accept that information can be stored in the very structure of the connections of the neural network, and not in its mass. The best way of communicating remains biomedical imaging, in which the models are confronted with reality, which makes them easier to interpret.

 

Do the observations made via imaging offer hope regarding the validity of your theories?

VG: We work in particular with the laboratory for signal and image processing (LTSI) of Université de Rennes I to validate our models. To put them to the test, the brain activity of subjects performing cognitive tasks is observed using electroencephalography. The goal is not to validate our theories directly, since the required spatial resolution is not currently attainable, but rather to verify the macroscopic properties they predict. For example, one consequence of the neural clique model is a positive correlation between the topological distance separating the representations of objects in the neocortex and the semantic distance between those same objects. Concretely, the representations of two similar objects will share many of the same micro-columns. This has been confirmed through imaging. Of course, this does not validate the theory, but it can lead us to modify it or to carry out new experiments to test it.

 

To what extent is your model used in artificial intelligence?

VG: We focus on problems related to machine learning, typically object recognition. Our model gives good results, especially in unsupervised learning, that is, without a human expert helping the algorithm learn by indicating what it must find. Since we focus on memory, we target different applications from those usually pursued in this field: for example, an artificial system's capacity to learn from few examples, to learn many different objects, or to learn in real time, discovering new objects as they appear. Our approach is therefore complementary to other learning methods, rather than in competition with them.

 

The latest developments in artificial intelligence now focus on artificial neural networks, with the approach referred to as “deep learning”. How does your work complement this learning method?

VG: Today, deep neural networks achieve outstanding levels of performance, as highlighted in a growing number of scientific publications. However, their performance remains limited in certain contexts. For example, these networks require an enormous amount of data in order to be effective. Also, once a neural network is trained, it is very difficult to get the network to take new parameters into account. Thanks to our model, we can enable incremental learning to take place: if a new type of object appears, we can teach it to a network that has already been trained. In summary, our model is not as good for calculations and classification, but better for memorization. A clique network is therefore perfectly compatible with a deep neural network.

 

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at the French Academy of Sciences

Because artificial intelligence now combines approaches from several different scientific fields, the French Academy of Sciences is organizing a conference on October 4 entitled “The Revival of Artificial Intelligence”. Through presentations by four researchers, this event will showcase the different facets of the discipline. In addition to the presentation by Vincent Gripon on informational neuroscience, the conference will feature talks by Andrew Blake (Alan Turing Institute) on learning machines, Yann LeCun (Facebook, New York University) on deep learning, and Karlheinz Meier (Heidelberg University) on brain-derived computer architecture.

Practical information:
Conference on “The Revival of Artificial Intelligence”
October 4, 2016 at 2:00pm
Large meeting room at Institut de France
23, quai de Conti, 75006 Paris

[/box]

Turbo codes, Claude Berrou, Quèsaco, IMT Atlantique

What are turbo codes?

Turbo codes form the basis of mobile communications in 3G and 4G networks. Invented in 1991 by Claude Berrou, and published in 1993 with Alain Glavieux and Punya Thitimajshima, they have now become a reference point in the field of information and communication technologies. As Télécom Bretagne, birthplace of these “error-correcting codes”, prepares to host the 9th international symposium on turbo codes, let’s take a closer look at how these codes work and the important role they play in our daily lives.

 

What do error-correcting codes do?

In order for communication to take place, three things are needed: a sender, a receiver, and a channel. The most common example is that of a person speaking, who sends a signal to someone listening, with the air carrying the vibrations that form the sound wave. Yet problems quickly arise in this communication if other people are talking nearby and making noise.

To compensate for this difficulty, the speaker may decide to yell the message. But the speaker could also avoid shouting, by adding a number after each letter in the message, corresponding to the letter’s place in the alphabet. The listener receiving the information will then have redundant information for each part of the message — in this case, double the information. If noise alters the way a letter is transmitted, the number can help to identify it.
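As a rough illustration of this letter-and-number trick, here is a short Python sketch written for this article. It assumes, by convention, that a letter garbled by noise arrives as ‘?’:

```python
# Toy redundancy scheme: each letter travels with its position in the
# alphabet, so a garbled letter can be recovered from the number.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(text):
    # e.g. "HELLO" -> [("H", 8), ("E", 5), ...]
    return [(ch, ALPHABET.index(ch) + 1) for ch in text]

def decode(pairs):
    out = []
    for letter, number in pairs:
        if letter in ALPHABET:            # the letter arrived intact
            out.append(letter)
        else:                             # letter lost: fall back on the number
            out.append(ALPHABET[number - 1])
    return "".join(out)

sent = encode("HELLO")
received = [("H", 8), ("?", 5), ("L", 12), ("L", 12), ("O", 15)]  # 'E' garbled
print(decode(received))  # -> HELLO
```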

And what role do turbo codes play in this?

In the digital communications sector, there are several error-correcting codes, with varying levels of complexity. Typically, repeating the same message several times in binary code is a relatively safe bet, yet it is extremely costly in terms of bandwidth and energy consumption.
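That brute-force repetition idea can be sketched in a few lines. This is an assumed textbook example (triple repetition with majority voting), not anything used in 3G or 4G; it simply shows that the cost is three times the bandwidth for every useful bit:

```python
# Toy repetition code: send every bit three times, decode by majority vote.

def encode(bits):
    return [b for b in bits for _ in range(3)]           # triple each bit

def decode(received):
    triplets = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triplets]   # majority vote

codeword = encode([1, 0, 1])         # -> [1,1,1, 0,0,0, 1,1,1]
noisy = [1, 1, 1, 0, 1, 0, 1, 1, 1]  # one bit flipped in the second triplet
print(decode(noisy))                 # -> [1, 0, 1]: corrected, at 3x the cost
```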

Turbo codes are a far more sophisticated way of integrating redundancy into the information. They are based on transmitting the initial message in three versions. The first is the raw, non-encoded information. The second is obtained by encoding each bit of information using an algorithm shared by the encoder and the decoder. Finally, a third version is also encoded, but only after a modification (specifically, a permutation): in this case it is no longer the original message that is encoded and sent, but a transformed version of it. These three versions are then decoded and compared in order to recover the original message.
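The three-stream structure described above can be sketched as follows. This is only an illustration of the principle: the toy running-parity function below stands in for the recursive convolutional encoders of a real turbo code, the permutation is an arbitrary assumed interleaver, and the iterative exchange between the two decoders, which gives turbo codes their name, is not shown.

```python
# Simplified sketch of the three streams sent by a turbo-style encoder.
# Assumptions: a toy accumulator replaces the real convolutional encoders,
# and the interleaver is a fixed random permutation shared by both ends.

import random

def toy_encoder(bits):
    """Toy stand-in for a convolutional encoder: running parity."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def interleave(bits, perm):
    return [bits[i] for i in perm]

message = [1, 0, 1, 1, 0, 1, 0, 0]
perm = list(range(len(message)))
random.seed(0)
random.shuffle(perm)                               # the shared permutation

systematic = message                               # copy 1: raw information
parity1 = toy_encoder(message)                     # copy 2: encoded as-is
parity2 = toy_encoder(interleave(message, perm))   # copy 3: encoded after permutation

print(systematic, parity1, parity2, sep="\n")
```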

Where are turbo codes used?

In addition to encoding all of our data on 3G and 4G networks, turbo codes are used in many other fields. NASA has used them to communicate with the space probes it has built since 2003. The space community, which must contend with severe constraints on its communication systems, is particularly fond of these codes: ESA also uses them on many of its probes. More generally, turbo codes are a safe and efficient encoding technique found in most communication technologies.

Claude Berrou, inventor of turbo codes

How have turbo codes become so successful?

In 1948, the American engineer and mathematician Claude Shannon proved a theorem stating that, up to a certain level of disturbance, there always exist codes capable of making channel-related transmission errors as rare as desired. In other words, Shannon asserted that, despite the noise in a channel, a transmitter using efficient codes can always deliver its information to the receiver almost error-free.
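For the textbook case of a channel of bandwidth B disturbed by Gaussian noise, this level of disturbance is captured by the channel capacity, the maximum rate at which near-error-free transmission remains possible:

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

where S/N is the signal-to-noise ratio. Shannon proved that suitable codes exist for any rate below this limit, but gave no recipe for building them; that is the gap turbo codes helped close.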

The turbo codes developed by Claude Berrou in 1991 meet these requirements: they come close to this theoretical limit while transmitting information with an error rate close to zero, making them highly efficient error-correcting codes. His experimental results, which validated Shannon's theory, earned Claude Berrou the Marconi Prize in 2005, the highest scientific distinction in the field of communication sciences. His research also earned him permanent membership in the French Academy of Sciences.

 

[box type=”info” align=”” class=”” width=””]

Did you know?

The international alphabet (or NATO phonetic alphabet) is itself an error-correcting code. Every letter is encoded as a word beginning with that letter: ‘N’ and ‘M’ become ‘November’ and ‘Mike’. This technique prevents a great deal of confusion, particularly in radio communications, which often involve noise.

[/box]