
The Internet of Things in the European Ecosystem

The Internet of Things is fast becoming a vast field of experimentation, with possibilities yet to be tapped, thanks to major technological advances in the miniaturization of sensors and the speed of digital exchanges. And thanks to the services woven into our digitalized daily lives, there will soon be dozens of these new objects in every European household.

 

The major issues arising from this situation will be the focus, on November 25, of the 12th meeting of Institut Mines-Télécom’s Chair on Values and Policies of Personal Information, organized (in English) in partnership with Contexte, a media outlet specializing in European politics.

The morning session will offer the opportunity to hear four well-known players in the digital ecosystem who are involved in the issues and scope of connected objects on a European scale. They will debate the political, economic and industrial stakes.

Godefroy Beauvallet, Director of Innovation for Institut Mines-Télécom, Vice-President of the French Digital Council (CNNum),
Thibaut Kleiner, Information Coordinator for the European Commission for Digital Economy and Society,
Robert Mac Dougall, President within the Alliance for Internet of Things Innovation (AIOTI),
Olivier Ezraty, expert in the Innovation sector and influential blogger.

The afternoon session will focus on two key themes examined from an academic point of view. First, the legal aspects of the Internet of Things, particularly in relation to the implementation of the new European General Data Protection Regulation (GDPR), which will come into effect in May 2018: what impact will it have on the design of devices and on the applications and uses of the Internet of Things? Next, the societal and philosophical aspects of this new human-machine environment and its issues and implications, on both an individual and collective scale. How will the structure of our societies evolve? What are the advantages, and at what price?

With:
Yann Padova, Auditor at the French Energy Regulation Commission,
Denise Lebeau-Marianna, Head of Data Protection at Baker & McKenzie,
Bernard Benhamou, Secretary General for the Institut de la souveraineté numérique,
Rob van Kranenburg, founder of the Internet of Things Council.

Together with all the research teams of the Chair on Values and Policies of Personal Information.

 

[toggle title=”Meeting program” state=”close”]

9:00 – Reception

9:30 – Round table: ‘European Internet of Things Ecosystem’

Who are the stakeholders and what are the issues of this new ecosystem? What are the possible directions on a European scale?

14:00 – Round table: ‘The Internet of Things and the implementation of the European General Data Protection Regulation (GDPR)’

The European General Data Protection Regulation (GDPR) will come into effect in May 2018. What will the main impacts be on the design of objects, applications and services?

15:15 – Round table: ‘Brave New IoT? Societal and ethical aspects of the new man-machine environments’

What will the implications of these technologies be on both an individual and collective level? How will the structure of our societies evolve? What are the advantages, and at what price?

16:15 – End of meeting[/toggle]

 

12th meeting of the Chair on Values and Policies of Personal Information
The Internet of Things in the European Ecosystem

Friday, November 25th, 2016
Télécom ParisTech, 46 rue Barrault, Paris 13e


Removing pollutants from our homes

Indoor air is polluted with several volatile organic compounds, some of which are carcinogenic. Frédéric Thévenet, a researcher at Mines Douai, develops solutions for trapping and eliminating these pollutants, and for improving tests for air purifying devices.

 

We spend nearly 90% of our time inside: at home, at the office, at school, or in our car. Yet the air there is not as clean as we think – it contains a category of substances called volatile organic compounds (VOCs), some of which are harmful. Fighting these VOCs is Frédéric Thévenet’s mission. He is a researcher with the Department of Atmospheric Sciences and Environmental Engineering (SAGE) at Mines Douai, a laboratory specializing in analytical chemistry that can detect molecules at trace levels.

 

Proven carcinogens

VOCs are gaseous organic molecules emitted in indoor environments by construction materials, paint, glue in furniture, cleaning and hygiene products, and even cooking. One molecule is a particular cause for concern: formaldehyde, both a proven carcinogen and the compound found at the highest concentrations. Guideline values (concentration levels that must not be exceeded) exist for formaldehyde, but they are not yet mandatory.

The first way to reduce VOCs is through common-sense measures: limit sources by choosing materials and furniture with low emissions, choose cleaning products carefully and, above all, ventilate frequently with outdoor air. But sometimes this is not enough. This is where Frédéric Thévenet comes in: he develops solutions for eliminating these VOCs.

 

Trap and destroy

There are two methods for reducing VOCs in the air. They can be trapped on a surface through adsorption (the molecules bind to the surface), with the traps then being regenerated or replaced. The compounds can also be trapped and destroyed immediately, generally through oxidation driven by light (photocatalysis). “But in this case, you must make sure the VOCs have been completely destroyed; they then decompose into water and CO2, which are harmless,” the researcher explains. “Sometimes the VOCs are only partially destroyed, thus generating by-products that are also dangerous.”

 


 

At the SAGE Department, Frédéric’s work complements that of his colleagues in the VOC metrology team. While they take their measurement devices into the field, he prefers to reproduce the reality of the field in the laboratory: he created a 40-cubic-meter experimental room called IRINA (Innovative Room for INdoor Air studies), where he recreates different types of atmospheres and tests procedures for capturing and destroying VOCs. These procedures are at varying stages of development: Frédéric tests technology already on the market that ADEME (the French Environment and Energy Management Agency) wants to evaluate, as well as adsorbent materials whose composition manufacturers are looking to improve. He also works at even earlier stages, developing his own solutions in the laboratory. “For example, we test the regeneration of adsorbents using different techniques, particularly plasma,” he explains.

 

[box type=”shadow” align=”” class=”” width=””]

A long-overdue law

Only laws and standards will force manufacturers to develop effective solutions for eliminating volatile organic compounds. Yet current legislation is not up to par. Decree no. 2011-1727 of 2 December 2011 on guideline values for formaldehyde and benzene in indoor air provides that the concentration levels of these two VOCs must not exceed certain limits in establishments open to the public: 30 µg/m³ for formaldehyde and 5 µg/m³ for benzene, for long-term exposure. However, this measure has not yet come into force, since its implementing orders have not yet been issued. The number of locations affected makes it very difficult to implement; its application has been postponed until 2018, and even this date remains uncertain.

Furthermore, the Decree of 19 April 2011 on labelling volatile pollutant emissions for construction products, wall cladding, floor coverings, and paint and varnishes is aimed at better informing consumers on VOC emissions from construction materials, paint and varnishes. These products must include a label indicating the emission levels for 11 substances, on a four-category scale ranging from A+ to C, based on the energy label model for household appliances.[/box]

 

Improving the standards

What are the results? For now, the most interesting results are related to adsorbent construction materials, for example, when they are designed to become VOC traps. “They don’t consume energy, and show good results in terms of long-term trapping, despite variations due to seasonal conditions (temperature and humidity),” explains Frédéric. “When these materials are well designed, they do not release the emissions they trap.” All these materials are tested in realistic conditions, by verifying how these partitions perform when they are painted, for example.

As well as testing the materials themselves, the research is also aimed at improving the standards governing anti-VOC measures, which seek to come as close as possible to real operating conditions. “We were able to create a list of precise recommendations for qualifying the treatments,” the researcher adds. The goal was to obtain standards that truly prove the devices’ effectiveness. Yet today, this is far from the case. An investigation published in the magazine Que Choisir in May 2013 showed that most of the air purifiers sold in stores were ineffective, or even negatively affected the air quality by producing secondary pollutants. There was therefore an urgent need to establish a more scientific approach in this area.

 


A passion for research

For some, becoming a researcher is the fulfilment of a childhood dream. Others are led to the profession through chance and the people they happen to meet. Frédéric Thévenet did not initially see himself as a researcher. His traditional career path, taking preparatory classes for an engineering school (Polytech’ Lyon), was initially leading him towards a future in engineering. Yet a chance meeting caused him to change his mind. During his second year at Polytech’, he did an internship at a research lab under the supervision of Dominique Vouagner, a researcher who was passionate about her work at the Institut Lumière Matière (ILM), a joint research unit affiliated with the Claude Bernard University Lyon 1 and CNRS. “I thought it was wonderful, the drive to search, to question, the experimental aspect… It inspired me to earn my DEA (now a Master 2) and apply for a thesis grant.” He was awarded a grant from ADEME on the subject of air treatment… although his studies had focused on material sciences. Still, it was a logical choice, since materials play a key role in capturing pollutants. Frédéric does not regret this choice: “Research is a very inspiring activity, involving certain constraints, but also much room for freedom and creativity.”


The major transformations of the 21st century: “the humanities and social sciences are essential”

New materials, artificial intelligence, green energy, virtual reality, 5G… a host of innovations is transforming our society. The transformations they bring about change organizations and redefine the role humans play in their environment, in both the professional and private realms. According to Christian Roux, Executive VP for Research and Innovation at IMT, this aspect must not be overlooked. He defends a systemic, multidisciplinary approach to the digital, productive and environmental transitions under way. Here, he shares his view of the role the humanities and social sciences play in reflections on these issues, giving us an enticing sneak peek of the upcoming “Society, Business, Economy: Transformation in Progress” symposium that IMT is organizing on November 3 and 4 to present the latest developments in this area.

 

 

You believe that a systemic approach to the transitions caused by new technologies is essential. Why?

Christian Roux: Only a global approach can produce solutions that truly meet needs. A co-development approach that includes all the issues is therefore essential. Involving the humanities and social sciences in questions of technological innovation provides an opportunity to question its relevance, and prevents situations in which designers realize too late that a product or service is completely out of step with users’ needs. A very practical example is the industry of the future — or industry 4.0 — where processes are being transformed by technologies such as augmented reality, which will change operators’ practices by guiding their movements, for example. If we do not consider the human factor — the people who are the users — there’s a good chance the solution will miss its intended objective. The humanities and social sciences are therefore essential.

 

Is business a priority research area for IMT in the humanities and social sciences?

ChR: Our key areas are connected to the complex context of the major transformations taking place. Because businesses are particularly affected by these transitions, they are naturally an area of interest. Companies are increasingly thinking in terms of networks, beyond their own walls. This raises the question of new forms of organization and creates heightened tension between the various areas within a company, such as logistics. These new forms also demand more responsible management, with an expected level of performance in this area as well. More generally, companies are undergoing many changes driven by far-reaching digitization. The concept of value is being challenged, and there is a need to understand what it really is. This leads to a redefinition of the components that have traditionally made up a company, such as production, the use of this value, or design.

The question of design is also a major focus of our research. How are the decisive individual and collective processes at work in the various design phases of a product or service changing? This is the type of design and innovation question our researchers are working on. Our interactions with the corporate ecosystem in this area are very valuable, particularly everything related to fab labs, open innovation, etc.

 

The corporate world is part of the human environment, but digitization also affects the personal sphere. What issues are your researchers exploring from that perspective?

ChR: The ethical aspects of technological innovation are an obvious issue. The governance of algorithms, for example, is directly linked to questions raised by artificial intelligence. Humans are also becoming part of new networks of connected objects, but what is their role in these networks? This is the question the Télécom École de Management’s Social Networks and Connected Objects Chair is seeking to answer.

The individual’s position as a consumer is also being redefined. This is the field opened up by reflections on digital labor, in which crowdsourcing platforms such as YouTube are emerging as new forms of work. What compensation does the user expect in return in this type of situation?

 

The issues that you mention involve regulatory aspects. What role can an institute like IMT play in this area? 

ChR: Our role is situated upstream of regulations. It involves advising public authorities and informing the debate on regulatory mechanisms. We can offer keys for analysis and understanding. The social sciences are fully integrated in this area, since the issue of regulation involves the concept of social compromise. Once the regulations have been established, we must be able to analyze them, especially for the purpose of studying and explaining the inadequacies. A good example of this is data management, in which a compromise must be found between protecting privacy and creating value. This is the purpose of the Chair on Values and Policies of Personal Information, which brings three of our schools together to focus on this issue, while the Innovation & Regulation Chair at Télécom ParisTech also studies these issues very closely.

 

How can these skills be integrated into a systemic approach?

ChR: First of all, the chairs mentioned above bring together several skills and disciplines, and involve industrial stakeholders who present their problems. We then develop places for experimentation, like living labs, which are spaces where we can analyze human behavior in a variety of controlled technological contexts. For IMT, the systemic approach represents, above all, the legacy of the French engineering training method, which is very broad-based from both a technological standpoint and from the perspective of the humanities and social sciences, enabling the development of practical solutions to the problems at hand. Over time, this approach has naturally been applied to the research we conduct. Some of our schools are now over two hundred years old, and have always maintained a close connection with the corporate world and society.

 


Simplicity theory: teaching relevance to artificial intelligences

Simplicity theory is founded on humans’ sensitivity to variations in complexity: something that seems abnormally simple suddenly becomes interesting. This concept, developed by Jean-Louis Dessalles of Télécom ParisTech, challenges Shannon’s probabilistic method for describing certain information. Using this new approach, he can explain phenomena that are otherwise hard to account for, such as creativity, decision-making, coincidence, or “if only I had…” thoughts. And all these concepts could someday be applied to artificial intelligence.

 

How can we pinpoint the factors that determine human interest? This seemingly philosophical question is addressed by researcher Jean-Louis Dessalles of Télécom ParisTech from the perspective of information theory. To mark the centenary of the birth of Claude Shannon, the creator of information theory, his successors will present their recent research at the Institut Henri Poincaré from October 26 to 28. On this occasion, Jean-Louis Dessalles will present his premise that the solution to an apparently complex problem can be found in simplicity.

Founded on the work of Claude Shannon

Claude Shannon defined information based on three principles: coding, surprise, and entropy. The latter has the effect of eliminating redundancy, an idea generalized by Kolmogorov complexity. This definition of complexity corresponds to the size of the minimal description of what the observer sees. For example: a message is transmitted that is identical to one previously communicated. Its minimal description consists in stating that it is a copy of the previous message. The complexity therefore decreases, and the information is simplified.

In his research, Jean-Louis Dessalles reuses Shannon’s premises, but, in his opinion, the probabilistic approach is not enough. By way of illustration, the researcher uses the example of the lottery. “Imagine the results of a lottery draw are: 1, 2, 3, 4, 5, 6. From a probability perspective, there is nothing astonishing about this combination, because its probability is the same as that of any other combination. However, humans find it sensational, and see it as more astonishing than a random series of numbers in no particular order.” Yet Shannon stated that what is improbable is interesting. For Jean-Louis Dessalles, this presents a problem: probabilities are unable to represent human interest and the type of information a human being will notice.

The simplicity theory

Jean-Louis Dessalles offers a new cognitive approach that he calls simplicity theory. This approach does not focus on the minimal description of information, but rather on discrepancies in complexity: the difference between what the observer expects and what he or she observes. This is how he redefines Shannon’s concept of surprise. For a human observer, what is expected corresponds to a causal probability. In the lottery example, the observer expects to obtain a set of six numbers that are completely unrelated to each other. However, if the results are “1, 2, 3, 4, 5, 6”, the observer recognizes a logical sequence, and this sequence has a low Kolmogorov complexity. Between the expected combination and the observed one, the description of the draw has therefore been greatly simplified, and understandably so: we switch from six random numbers to an easily fathomable sequence. The expected complexity of the six lottery numbers is much higher than the complexity of the draw actually obtained. An event is considered surprising when it is perceived as being simpler to describe than it is to produce.
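
That definition can be turned into a toy computation. The sketch below is illustrative only and is not taken from Dessalles’s work: it assumes a 6-out-of-49 lottery and a deliberately crude description language with a single shortcut for runs of consecutive numbers, and it measures unexpectedness as generation complexity minus description complexity, in bits.

```python
from math import comb, log2

# Generation complexity: bits needed to single out one draw among all
# possible 6-of-49 combinations (identical for every draw).
GENERATION_BITS = log2(comb(49, 6))  # ≈ 23.7 bits

def description_bits(draw):
    """Toy description language: either "six consecutive numbers starting
    at k" (≈ log2(44) bits, since k can range from 1 to 44), or no
    shortcut at all, in which case the draw is spelled out in full."""
    if all(b - a == 1 for a, b in zip(draw, draw[1:])):
        return log2(44)
    return GENERATION_BITS

def unexpectedness(draw):
    # Simplicity theory: complexity to produce minus complexity to describe
    return GENERATION_BITS - description_bits(draw)

print(round(unexpectedness([1, 2, 3, 4, 5, 6]), 1))       # ≈ 18.3 bits: striking
print(round(unexpectedness([7, 12, 23, 31, 38, 44]), 1))  # 0.0 bits: unremarkable
```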

The simplicity theory was originally developed to account for what people see as interesting in language. The concept of relevance is particularly important here because this word refers to all the elements in information that are worthy of interest, which is something humans can detect very easily. Human interest is made up of two different components: the unique aspect of a situation and the emotion linked to the information. When the simplicity theory is applied, it can help to detect the relevance of news in the media, the subjectivity surrounding an event, or the associated emotional intensity. This emotional reaction depends on the spatio-temporal factor. “An accident that occurs two blocks away from your house will have a greater impact than one that occurs four blocks away, or fifteen blocks, etc. The proximity of a place is characterized by the simplification of the number of bits in its description. The closer it is, the simpler it is,” the researcher explains. And Jean-Louis Dessalles has plenty of examples to illustrate simplicity! Especially since each scenario can be studied in retrospect to better identify what is or is not relevant. This is the very strength of this theory; it characterizes what moves away from the norm and is not usually studied.

Read the blog post Artificial Intelligence: learning driven by childlike curiosity

Research for artificial intelligence

The ultimate goal of Jean-Louis Dessalles’ theory is to enable a machine to determine what is relevant without being told ahead of time. “AI systems currently fail in this area. Providing them with this ability would enable them to determine when they must compress information,” Jean-Louis Dessalles explains. Today, artificial intelligence relies on a statistical notion of relevance, which is often synonymous with importance, but is far removed from relevance as perceived by humans. “AI based on statistical machine learning is unable to identify the distinctive aspect of an event that creates its interest, because the learning process eliminates everything that is out of the ordinary,” the researcher explains. Simplicity theory, on the other hand, can characterize any event, to such an extent that the theory currently seems limitless. The researcher recommends that relevance be learned the way children learn it naturally. And beyond the idea of interest, this theory encompasses the computability of creativity, regret, and decision-making. These are all concepts that will be of interest for future artificial intelligence programs.

Read the blog post What is Machine Learning?

[box type=”shadow” align=”” class=”” width=””]

Claude Shannon, code wizard

To celebrate the centenary of Claude Shannon’s birth, the Institut Henri Poincaré is organizing a conference dedicated to information theory, from October 26 to 28. The conference will explore the mathematician’s legacy in current research, and the areas taking shape in the field he created. Institut Mines-Télécom, a partner of the event along with CNRS, UPMC and Labex Carmin, will participate through presentations from four of its researchers: Olivier Rioul, Vincent Gripon, Jean-Louis Dessalles and Gérard Battail.

To find out more about Shannon’s life and work, CNRS has created a website that recounts his journey.[/box]

Read more on our blog

City4age, the elderly-friendly H2020 project

As part of the European research program H2020, Institut Mines-Télécom is taking part in the City4age project, which aims to offer a smart city model adapted to the elderly. Through non-intrusive technologies, the goal is to improve their quality of life and to facilitate the work of health services. Mounir Mokhtari, researcher and director of IPAL[1], contributes to the project in the test city of Singapore. Below is an interview the researcher gave to LePetitJournal.com, a French news site for French nationals abroad.

 


Mounir Mokhtari, Director of the IPAL

 

LePetitJournal.com: What are the research areas of, and the stakes involved in, “City4age”?

Mounir Mokhtari: Today, in Europe as in Singapore, the population is ageing and the number of dependent elderly persons is rising sharply, even as the skilled labour force that can look after these people has decreased significantly. The answer to this issue is often institutionalisation. Our objective is to maintain the autonomy of this group of people at home and in the city, and to improve their quality of life and that of their caregivers (family, friends, etc.), by integrating non-intrusive, easy-to-use technologies into daily life.

It involves developing technological systems that motivate elderly persons in frail health to stay active, reinforce social ties and prevent risks. The objective is to install non-intrusive sensors, information systems and communication devices in today’s homes, and to create simple user interfaces based on everyday objects such as smartphones, TV screens and tablets, to assist dependent people in their daily lives.

 

LPJ: What are the principal challenges in the research?

MM: The first challenge is to identify the person’s normal behaviour, to know his or her habits, and to be able to detect changes that may be related to a decline in cognitive or motor skills. This involves collecting the extensive information made available by connected objects and lifestyle habits, which we use to define a “user profile”.

The data obtained is then interpreted and a service is provided to the person. Our objective is not to monitor people but to identify precise areas of interest (leisure, shopping, exercise) and to encourage the person to take part in such activities, to avoid the isolation that could lead to a deterioration of his or her quality of life, or even health.

For this, we use decision-support and machine learning tools, and the Semantic Web. It’s the same principle, if you like, that Google uses to suggest appropriate search results (graph theory), with an additional difficulty in our case related to the human factor: it is all about getting machines, which interpret data logically, to make sense of subjective behavioural data. But this is also where the interest of the project lies, besides the strong societal issue. We work with doctors, psychologists, ergonomists, physiotherapists, occupational therapists, social science specialists, etc.
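
Purely by way of illustration (the interview does not describe City4age’s actual models at this level of detail), a first building block of such behaviour-change detection could flag a day when a monitored activity count falls well below the person’s own recent baseline:

```python
from statistics import mean, stdev

def decline_alert(daily_outings, window=30, threshold=2.0):
    """Hypothetical sketch: flag a possible decline when today's number of
    outings falls more than `threshold` standard deviations below the
    person's own average over the last `window` days."""
    baseline = daily_outings[-window - 1:-1]   # recent history
    today = daily_outings[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and today < mu - threshold * sigma

# 30 days of roughly two outings a day, then a sudden stop:
history = [2, 3, 2, 2, 1, 2, 3, 2, 2, 2] * 3 + [0]
print(decline_alert(history))  # True -> prompt the person (or their friends)
```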

 

LPJ: Can you give us a few simple examples of such an implementation?

MM: To help maintain social ties and activity levels, let’s take the example of an elderly person who is in the habit of going to his or her Community Centre and taking meals at the hawker centre. If the system detects that this person has cut back on outings, it will generate a prompt to encourage him or her to get out of the home again, for example: “Your friends are now at the hawker centre and they are going to eat, you should join them”. The system can also notify the friends on their mobiles that the person has not been out for a long time, and suggest that they visit him or her.

For the elderly who suffer from cognitive impairment, we work on the key functions affected, which involve simple daily activities such as sleeping, hygiene and eating, as well as the risk of falls. For example, we install motion sensors in rooms to detect possible falls. We equip beds with optical-fibre sensors to monitor the person’s breathing and heart rate and spot potential sleep problems, apnea or cardiac risks, without disturbing the person’s rest.

 

LPJ: An application in Singapore?

MM: Our research is highly applied, with a strong industry focus and a view to quick deployment to the end-user. The solutions developed in the laboratory are proven in a show flat, then in clinical tests. At the moment, we are carrying out tests at the Khoo Teck Puat hospital to validate our non-intrusive sleep monitoring solutions.

Six pilot sites were chosen to validate the deployment of City4age in situ, including Singapore, where the maintenance of the elderly’s social ties and activity levels is being tested via the Community Centres in HDB neighbourhoods. The target is a group of around 20 people aged 70 and above, frail and suffering from mild cognitive impairment, who are integrated in a community – most often in a Senior Activity Centre. The test also involves the volunteers who help these elderly persons in their community.

 

LPJ: What is your background in Singapore?

MM: My research has concentrated mainly on technologies that can be used to assist dependent people. I came to Singapore for the first time in 2004 for the International Conference on Smart Homes and Health Telematics (ICOST), which I organised.

I then discovered a scientific ecosystem I was not aware of (at the time, attention was focused on the USA and a few European cities). I was pleasantly surprised by the dynamism, the infrastructure in place and the new facilities being built at a frantic pace, and above all by a country that is very active in research on new technologies.

I have kept up exchanges with Singapore ever since, and finally decided to join the IPAL laboratory, to which I have been seconded by Institut Mines-Télécom since 2009. I took over as director of IPAL in 2015 to develop this research.

 

LPJ: What is your view of the MERLION programme?

MM: The PHC MERLION is very relevant and attractive for creating new teams. French diplomacy and MERLION provided undeniable leverage in launching projects and consolidating collaborations with our partners.

The programme provides a framework that creates opportunities, encourages exchanges between researchers and participation in international conferences, and contributes to the emergence of new collaborations.

Without the MERLION programme, for example, we would not have been able to create the SINFRA symposium (Singapore-French Symposium) in 2009, which has become a biennial event for the IPAL laboratory. In addition, the theme of “Inclusive Smart Cities and Digital Health” was introduced at IPAL thanks to a MERLION project headed by Dr. Dong Jin Song, who is today the co-director of IPAL for NUS.

Beyond its diplomatic and financial support, the Embassy also participates in IPAL’s activities by making one of its staff members available on a part-time basis, integrated into the project team at IPAL.

 

LPJ: Do you have any upcoming collaborations?

MM: We are planning a new collaboration between IPAL and the University of Bordeaux – which specialises in social sciences – for a behavioural study to support our current research. We are thinking of applying for a new MERLION project to kickstart this collaboration. It is true that the social sciences aspect, despite its importance for the well-being of the elderly and their entourage, is not very well developed in the laboratory. This PHC MERLION proposal may well provide the same leverage as the previous one.

Beyond the European project City4age, IPAL has just signed a research collaboration agreement with PSA Peugeot Citroën on mobility in the city and well-being, with a focus on the management of chronic diseases such as diabetes and respiratory illnesses. There is also an ongoing NRF (National Research Foundation) project with NUS (National University of Singapore), led by Dr. Nizar Quarti, a member of IPAL, on mobile and visual robotics.

Interview by Cécile Brosolo (www.lepetitjournal.com/singapour); translation by the Institut Français de Singapour, French Embassy in Singapore.

[1] IPAL: Image & Pervasive Access Lab, a CNRS UMI (international joint research unit) based in Singapore.

Claude Shannon, a legacy transcending digital technology

Claude Shannon, a major scientist of the second half of the 20th century, marked his era with his theory of communication. His work triggered a digital metamorphosis that today affects all levels of our societies. To celebrate what would have been Shannon’s 100th birthday this year, the Institut Henri Poincaré is paying tribute to the scientist with a conference from October 26 to 28. At this event, Olivier Rioul, a researcher at Télécom ParisTech, will provide insight into this pioneer of the communication field and present part of the legacy he left behind.

 

 

Claude Elwood Shannon. The name is not well known by the general public. And yet, if the digital revolution the world is experiencing today had a father, it would doubtless be this man, born in 1916 in a small town in Michigan. His life, which ended in 2001, received little media coverage. Unlike Alan Turing, no Hollywood blockbusters have been dedicated to him. Nor has his identity been mythicized by artistic circles, as was the case for Einstein. “Shannon led an ordinary life, and perhaps that is why nobody talks about him,” observes Olivier Rioul, researcher in digital communications at Télécom ParisTech.

Though his life was not particularly extraordinary, Claude Shannon’s work, on the other hand, was thrilling in many ways. In 1948, he wrote an article entitled A Mathematical Theory of Communication. “Its publication came as a revolution in the scientific world,” explains Olivier Rioul. In this article, Claude Shannon introduced the concept of the bit of information. He also outlined – for the first time – a schematic diagram of a communication channel, including all the active parts involved in transmitting a signal, from its source to its destination.


First schematic diagram of a communication system, published by Claude Shannon in 1948. He explained that a channel could be “a coaxial cable, a band of radio frequencies, a beam of light, etc.”

 

Shannon and his magic formula

Yet in addition to his channel diagram, it was above all a formula published in 1948 that went on to mark the scientific community: C = W·log₂(1 + SNR). With this mathematical expression, Shannon defined the maximum capacity of a transmission channel, in other words, the quantity of information that can be transmitted reliably. It shows that this capacity depends solely on the channel’s bandwidth W and on the signal-to-noise ratio SNR, the ratio between the power of the transmitted signal and that of the noise in the channel. Based on this result, every channel has a throughput limit, below which the message transmitted from the transmitter to the receiver is not altered.
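
To get a feel for the formula, here is a quick numerical check (a classic textbook illustration, not drawn from the article): a telephone line offering roughly 3 kHz of bandwidth with a signal-to-noise ratio of 30 dB can carry at most about 30 kbit/s.

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Maximum reliable throughput in bits/s: C = W * log2(1 + SNR)."""
    return bandwidth_hz * log2(1 + snr_linear)

snr = 10 ** (30 / 10)               # 30 dB expressed as a linear ratio: 1000
print(shannon_capacity(3000, snr))  # ≈ 29,902 bits per second
```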

“Shannon’s strength lies in having succeeded in obtaining this result in a theoretical way,” insists Olivier Rioul. “Shannon did not provide the solution required to reach this limit, but showed that it exists, for all channels.” It would not be until 43 years later, with the work of Berrou, Glavieux and Thitimajshima in 1991, that Shannon’s limit would be nearly reached for the first time, thanks to the development of turbo codes.

Olivier Rioul believes the story behind this formula is out of the ordinary, and that it has been the subject of many historical approximations. “And the time was ripe. In 1948 – the year in which Claude Shannon made his results public – seven other scientists published similar formulas,” he explains, based on research carried out with José Carlos Magossi on the history of this formula.

However, the results obtained by Shannon’s peers were sometimes inaccurate and sometimes inspired by Shannon’s prior work, and therefore not very original. And all of them were part of the same environment, were in contact with each other or participated in the same conferences. All except Jacques Laplume, a French engineer who obtained a correct formula similar to Shannon’s at almost the same time. Yet what he lacked and what kept him from leaving his mark on history was the enormous contribution of the rest of Shannon’s theory.

Read the blog post What are turbo codes?

A pioneer in communications, but that’s not all…

While his work represents the beginnings of modern digital communications, Claude Shannon left behind a much greater legacy. In 1954, behavioral psychologist Paul Fitts published the law that now bears his name, used to model human movement. In his scientific article, he explicitly cited Shannon’s theorem, referring to his channel capacity formula. “Today we use Fitts’s formula to study human-computer interactions,” explains Olivier Rioul, who worked with a PhD student on reconciling this law with Shannon’s theory.
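
The analogy is easy to make concrete. In the so-called Shannon formulation of Fitts’s law, the time to reach a target of width W at distance D grows as log₂(D/W + 1), directly mirroring the capacity formula; in the minimal sketch below, the constants a and b are illustrative placeholders that would normally be fitted to experimental data.

```python
from math import log2

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1).
    The index of difficulty log2(D/W + 1) is measured in bits, echoing
    C = W * log2(1 + SNR); a and b are device-dependent constants
    (the values used here are purely illustrative)."""
    return a + b * log2(distance / width + 1)

# Pointing at a 20-pixel-wide button located 400 pixels away:
print(fitts_movement_time(400, 20))   # ≈ 0.76 seconds
```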

The scope of Shannon’s work therefore far exceeds the realms of information and communication theory. As a lover of games, he developed one of the first machines for playing chess. He was also one of the pioneers of artificial intelligence and machine learning, with his demonstration in 1950 of a robotic mouse that could find its way through a labyrinth and remember the optimal route.

Although Shannon’s life was not necessarily extraordinary in the literal sense, he was undeniably an extraordinary man. As for his lack of fame – which the centenary celebration of his birth seeks to remedy – he himself had said, referring to his information theory, “In the beginning I didn’t think it would have an enormous impact. I enjoyed working on these problems, just like I enjoyed working on lots of other problems, without any ulterior motives for money or fame. And I think that a lot of other scientists have this same approach, they work because of their love of the game.”

 

[box type=”shadow” align=”” class=”” width=””]

Claude Shannon, code wizard

To celebrate the centenary of Claude Shannon’s birth, the Institut Henri Poincaré is organizing a conference dedicated to information theory, from October 26 to 28. The conference will explore the mathematician’s legacy in current research, and the areas taking shape in the field he created. Institut Mines-Télécom, a partner of the event along with CNRS, UPMC and Labex Carmin, will participate through presentations from four of its researchers: Olivier Rioul, Vincent Gripon, Jean-Louis Dessalles and Gérard Battail.

To find out more about Shannon’s life and work, CNRS has created a website that recounts his journey.[/box]

Read more on our blog:


Bioplastics: “still a long road to higher performance”

As the environmental transition demands, the materials of the future must be “greener”. Bioplastics in particular have become a main focus of attention, and are often presented as the solution to the pollution caused by the plastics we use every day, which can take hundreds of years to decompose. Patricia Krawczak, a researcher at Mines Douai, studies these new polymers. Yet she issues a warning: our expectations must remain reasonable, because it will take time for bioplastics to become efficient and profitable… and not all of them are biodegradable.

 

Plastic materials are inextricably linked to our everyday lives. They are essential, and yet are often seen as a scourge of modern times. Their negative impact on the environment is often denounced, such as the millions of tons[1] of polymer waste disposed of in the oceans each year, negatively impacting marine biodiversity. Not to mention that producing these plastics requires hydrocarbons, and hence the use of fossil fuels. The scientific community is seeking to offer alternative solutions in response to this situation: “agro-based” or “bio-sourced” plastics made from natural materials of plant or animal origin, also referred to as bioplastics. At Mines Douai, this new “green” form of plastics processing is one of the key research areas of the TPCIM department directed by Patricia Krawczak.

“The current challenge is to develop bio-sourced polymers with higher added value, to set them apart from widely distributed commodity plastics, such as polyolefins. The goal is to compete with the technical or performance plastics derived from traditional petrochemical processes, such as polyamides and polycarbonates,” the materials researcher explains. The major polymer families Patricia Krawczak mentions are often used in key sectors, such as transportation (automotive, aeronautics, etc.), which are large-scale consumers of plastics. Yet entering these markets proves difficult, due to demanding specifications.

Herein lies one of bioplastics’ greatest challenges: proving, if not their superiority, at least their equal performance compared to conventional polymers under strict operating conditions. This is far from always the case. “For the time being, industrial-scale bio-sourced products are primarily used for applications in the low value-added packaging sector, such as supermarket bags,” the scientist explains. The properties of most of these bioplastics are not yet suited to producing vehicle components, such as under-the-hood automotive parts, which must withstand high temperatures and constant or repeated mechanical stress over time.

This is why much work remains to be done before certain attractive properties can be achieved, and explains the need to temper the excitement about bioplastics. Patricia Krawczak is very clear on this point: “We cannot yet compete with one hundred years of research in the field of petrochemical plastics processing. The road to high performance is still long for bio-sourced plastics.”

The “conventional” plastics industry has indeed been successful in developing a wide range of materials, able to meet the thermo-mechanical and physico-chemical demands of specific uses, and comply with strict application specifications. The range is much larger than what the few bioplastics currently being produced can offer. Not to mention the fact that these bioplastics sometimes have unattractive psychosensorial properties (smells, colors, transparency). A cloudy or yellowish appearance can make certain applications unacceptable, such as for food packaging or touchscreens; and the foul-smelling compounds generated during processing or during use can be disturbing.

However, this does not mean that bioplastics will be forever confined to markets for low value-added products; hopes of quickly replacing all petroleum-based plastics should simply be tempered for the time being. A few bioplastics already offer very good properties, or even new functions, and are winning over plastics processors and purchasers. This is the case for a bio-sourced polymer developed by Mitsubishi and marketed under the name Durabio. Its impact resistance is comparable to that of conventional polycarbonate, while its high transparency, excellent optical properties (resistance to UV yellowing) and surface properties (hardness, scratch and abrasion resistance) surpass those of its petroleum-based counterparts, and justify its price.

 

Bioplastics need to keep up with the pace!

One of the major hurdles to overcome — in addition to having the characteristics required to comply with application specifications — is the potential additional cost of using bioplastics. Bio-sourced polymers’ access to downstream markets is in fact subject to an inescapable condition: to remain competitive, manufacturers of plastic parts cannot consider investing in new production methods or substantially modifying their existing machinery. “It is therefore crucial to ensure that bioplastics can be integrated into current production lines, with technical performance, production costs and speeds that are compatible with market constraints,” Patricia Krawczak points out. Yet this is not an easy task. Why? Because certain bio-sourced polymers are sensitive to thermal or thermomechanical degradation during the forming stages of manufactured products.

 


To bring bioplastics to maturity, researchers must make them compatible with current processes.

 

It is therefore up to bioplastics to adapt to the plastics processing procedures used to manufacture industrial parts, not the other way around. For the scientists, this means modifying the plastics’ behavior in liquid form, specifically by adding chemical additives. “A common example is starch, which cannot be processed in its raw state using conventional extrusion methods. It must be plasticized by adding water or polyols, with the aim of lowering the temperature at which it becomes liquid,” the researcher explains. Another approach being explored is mixing bio-sourced polymers to obtain a blend tailored to the specific characteristics required.

Once the appropriate formula has been developed, the work is not yet over. The possible migration of the various additives, or the potential changes to the morphology of the blends during the processing stage must also be controlled, to ensure optimal functional properties. In short, developing bioplastics requires a great deal of optimization.

 

Bio-sourced does not necessarily mean biodegradable

Once bioplastics are perfectly adapted to current plastics processing procedures, and have become efficient and competitive, it is important to keep the initial goal in mind: reducing the environmental impact. However, green plastics processing is all too often wrongly equated with developing and processing biodegradable plastics. Patricia Krawczak reminds us that green polymers do not necessarily have this feature: “On the contrary, many applications in transportation (cars, airplanes) and construction require durable materials that can be used in the long term without any form of deterioration.”

Since not all bioplastics are biodegradable, they must be recovered and recycled. And there is no guarantee we will be able to put them in our recycling bins at home. In France, these recycling bins currently only accept a limited number of specific plastics: polyethylene terephthalate, polyethylene and polypropylene. It may not be possible to recycle the new biopolymers using the same facilities. Studies must now be carried out to determine whether or not these biopolymers can be integrated into existing recycling facilities without any disruption, or to determine if new facilities will need to be created.

The problem is that biopolymers represent only 0.5% of the total volume of plastics produced and consumed on the global market (an estimated 2% by 2020). “Establishing a recycling program generally requires a sufficient volume of waste to build a sustainable economy on the collection, sorting and reutilization procedures. At present, however, the amount of bioplastic waste is too small, and too diverse,” Patricia Krawczak warns. Initiatives are nevertheless being developed to recycle small volumes of waste. This is one of the subjects being discussed by the Circular Economy & Innovation Chair (ECOCIRNOV) led by Mines Douai.

 

A promising future for green plastics?

Research aimed at removing the remaining obstacles is advancing, and the future looks promising for green plastics processing, as it is driven by application sectors with strong potential. In addition to transportation, the biomedical field is interested in biocompatible materials for creating controlled release systems for active ingredients. Patricia Krawczak’s team has worked on this subject in conjunction with a French research group on biomaterials from Nord Pas-de-Calais (Fédération Biomatériaux et Dispositifs Médicaux Fonctionnalisés du Nord Pas-de-Calais). The development of electroactive bio-sourced polymers suitable for 3D printing – the focus of research led by Jérémie Soulestin in one of Patricia Krawczak’s research groups – could also benefit the market for connected objects.

Finally, it is important to remember that polymers, along with fibers, constitute one of the two essential components required for producing composite materials. Chung-Hae Park, also a member of Patricia Krawczak’s team, is already working on the development of flax-based composites. He recently completed the proof of concept for the high-speed manufacturing of parts, with a cycle time of two minutes, close to automotive speeds (one part per minute). Success in offering biopolymers with suitable properties, reinforced with plant fibers, could therefore constitute another step towards developing completely bio-sourced structural composites. This class of materials could potentially have numerous high-performance applications.

 

[1] The United Nations Environment Program published a report in 2016 indicating that between 4.8 and 12.7 million tonnes of plastic were dumped in the world’s seas.


Sea Tech Week: Key issues of a connected ocean

The sea is becoming increasingly connected, with the development of new real-time transmission sensors. The aggregated data is being used to improve our understanding of the role oceans play in climate issues, but several challenges must be considered: the development of autonomous sensors and the pooling of research on a global level. This was the subject of Sea Tech Week, which took place in Brest from October 10 to 14, bringing together international experts from different disciplines relating to the sea.

 

From renewable marine energies and natural resources to tourism… the sea has many uses: we swim in it, travel on it and exploit all it has to offer. By 2030, the ocean economy (or blue economy) is expected to create great wealth and many jobs. In the meantime, Télécom Bretagne is combining expertise in oceanography with information and communication technology to further research in this field.

A global topic

Although the subject was left out of climate conferences until 2015, the ocean constantly interacts with the environments around it and is at the very heart of climate change. It is currently the largest carbon sink in existence, and the CO2 it absorbs is causing its acidification, with irreparable consequences for marine fauna and flora.

In this context, there is an increasing need for measurements. The aim is to obtain an overview of the ocean’s effects on the environment and vice versa. All different types of parameters must be studied to obtain this global view: surface temperatures, salinity, pressure, biological and chemical variables, and the influence of human activities such as maritime traffic. In a presentation delivered on the first morning of Sea Tech Week, René Garello, a researcher at Télécom Bretagne, explained the challenges involved in integrating all this new data.

A connected ocean for shared research

The study of the marine world is not immune to recent trends: it must be connected. The goal is to use technological resources to transmit large volumes of data, by developing suitable coding. This involves adapting connected-object technology to the marine environment.

One challenge in the connected-ocean field is developing sophisticated, more efficient sensors to improve in-situ observation techniques. René Garello calls them smart sensors. Whether they are used to examine surface currents or acoustic phenomena, these sensors must be able to transmit data quickly, be autonomous, and communicate with each other.

However, communication is necessary for more than just the sensors. Scientific communities also have their part to play. “On the one hand, we make measurements, and on the other we make models. The question is whether or not what is carried out in a given context is pooled with other measurements carried out elsewhere, allowing it to be integrated to serve the same purpose,” explains René Garello.

Another challenge is therefore to prevent the fragmentation of research which would benefit from being correlated. The goal is to pool both data and stakeholders by bringing together chemical oceanographers and physical oceanographers, modelers and experimenters, with the ultimate aim of better orchestrating global research.

A parallel concern: Big Data

“Currently, only 2% of data is used. We are forced to subsample the data, which means we are less efficient,” observes René Garello. The need to collect as much material as possible is counterbalanced by the limits of the human capacity to analyze it in its entirety. In addition, the data must be stored and processed in different ways. According to René Garello, future research must proceed in a restrained manner: “Big Data leads to a paradox, because the goal of the research is to decrease data size so users receive a maximum amount of information in a minimum amount of space.” Smart sensors can strike a balance between data compression and Big Data by filtering at the input, and by not collecting all the data, so that work can be carried out on a human scale.

Towards standardization procedures

Not many standards currently exist in the marine sphere. The question of data integrity, and of how well data represents reality, is the last major issue. Satellite sensors are already properly codified, since their measurements are carried out in stable conditions, unlike in-situ sensors, which can be dragged away by drifting objects and buoys. In this context of mobile resources, samples must be proven reliable through prior calibration of the measurement. Research can help improve this notion of standards.

However, basic research alone is not sufficient. The future also requires links to be forged between science, technology and industry. In a report published in April 2016, the OECD foresees the development of many ocean-related industries (transport, fishing, marine biotechnology, etc.). How will current research help this blue economy take shape? From the local level in Brest to European research programs such as AtlantOS, one thing is clear: everything is interconnected.

 

[box type=”shadow” align=”” class=”” width=””]


Sea Tech Week: a week dedicated to marine sciences and technology

Every two years in Brest, workshops and trade shows are organized around sea-related disciplines. The week is organized by the Brest metropolitan area with support from several research and corporate partners. In 2014, over 1,000 participants came to Brittany for the event, 40% of them international visitors. In 2016, the event focused on digital technology, in connection with the French Tech label, and addressed the following topics: observation, robotics, modeling, sensors… via 18 conferences led by experts from around the world. Find out more[/box]


Digital transition: the music industry reviews its progress

The music industry – the sector hit hardest by digitization – now seems to have completed the transformation that was initiated by digital technology. With the rise of streaming music, there has been a shift in the balance of power. Producers now look for revenue from sources other than record sales, and algorithms constitute the differentiating factor in the wide range of offers available to consumers.

 

Spotify, Deezer, Tidal, Google Music and Apple Music… Streaming is now the norm in music consumption. Music has never been so digitized, to the point of raising the question: has the music industry reached the end of its digital transformation? After innumerable attempts to launch new economic models in the 2000s (purchases of individual tracks through iTunes, voluntary contribution for Radiohead’s album In Rainbows, etc.), streaming music directly online seems to have emerged as a lasting solution.

Marc Bourreau is an economist at Télécom ParisTech and runs the Innovation and Regulation Chair,[1] which is organizing a conference on September 28 on the topic: “Music – the end of the digital revolution?”. In his opinion, despite many artists’ complaints about the low level of royalties they receive from streaming plays, this is a durable model. Based on the principle of proportional payment for plays — Spotify pays 70% of its revenue to rights holders — streaming is now widely accepted by producers and consumers.
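The pro-rata principle Marc Bourreau describes can be written down in a few lines. In this hypothetical sketch, 70% of a platform's monthly revenue is split among rights holders in proportion to their share of total plays; the figures and rights-holder names are invented for illustration.

```python
def pro_rata_payouts(revenue, plays_by_holder, royalty_share=0.70):
    """Split `royalty_share` of revenue among rights holders,
    proportionally to their share of total plays."""
    pool = revenue * royalty_share
    total_plays = sum(plays_by_holder.values())
    return {holder: pool * plays / total_plays
            for holder, plays in plays_by_holder.items()}

# Hypothetical month: €1,000,000 of subscription revenue.
plays = {"label_A": 6_000_000, "label_B": 3_000_000, "indie_C": 1_000_000}
print(pro_rata_payouts(1_000_000, plays))
# label_A receives 60% of the €700,000 pool, and so on.
```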

Nevertheless, the researcher sees areas of further development for the model. “The main problem is that the monthly subscriptions to these services represent an average annual investment that exceeds consumer spending before the streaming era,” explains Marc Bourreau. With initial subscriptions of around €10 per month, streaming platforms represent annual spending of €120 for subscribers – twice the average amount consumers used to spend annually on record purchases.

Because sites like Deezer use a freemium model, in which non-subscribers retain access to the service in exchange for being exposed to ads, this spending gap leads the researcher to conclude that the average music consumer will not take up these platforms' premium offers: the investment is simply too high for them. To win over this group, which represents a major economic opportunity, “one of the future challenges for streaming websites will be to offer lower rates,” Marc Bourreau explains.

 

Streaming platforms: where is the value?

However, rights holders may not agree with this strategy. “I think the platforms would like to lower their prices, but record companies impose a certain price floor, since they depend on the sales revenue generated by subscriptions,” explains Marc Bourreau. This constraint rules out any price competition. The platforms must therefore find new differentiating features. But where?

In their catalogs, perhaps? The researcher disagrees: “The catalog proposed by streaming sites is almost identical, except for a few details. This is not really a differentiating feature.” What about sound quality, then? The analogy with streaming video makes it seem plausible: Netflix charges a higher fee for a higher-quality image. “In economics, we call this a revealed preference: we discover what people prefer by watching what they do,” explains Marc Bourreau. And what listeners' behavior reveals is that, apart from a few purists, few users pay much attention to sound quality.

The value, in fact, lies in the algorithms. The primary differentiation is found in the services offered by the platforms: recommendations, user-friendly design, loading times… “To a large extent, the challenge is to help customers who are lost in the abundance of songs,” the economist explains. These algorithms help listeners find their way around the vast range of options.

And there is strong competition in this area, as shown by the acquisition of start-ups and the recruitment of talent in the field; nothing is left to chance. In 2016 alone, Spotify acquired Cord Project, which develops a messaging application for audio sharing; Soundwave, which creates social interactions based on musical discoveries; and Crowdalbum, which collects and shares photos taken during concerts.

Research in signal processing is also of great interest to streaming platforms, for analyzing audio files, understanding what makes a song unique, and finding songs with the same profile to recommend to users.
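A common building block for such recommendations is to describe each track as a vector of audio features (tempo, energy, timbre descriptors, and so on) and to recommend the nearest neighbors by cosine similarity. The sketch below assumes precomputed feature vectors and invented track names; it illustrates the general technique, not any platform's actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(seed_track, catalog, top_n=2):
    """Rank catalog tracks by audio similarity to the seed track."""
    scores = {name: cosine_similarity(catalog[seed_track], vec)
              for name, vec in catalog.items() if name != seed_track}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical feature vectors: [tempo, energy, acousticness].
catalog = {
    "song_A": np.array([0.8, 0.9, 0.1]),
    "song_B": np.array([0.7, 0.8, 0.2]),   # close to song_A
    "song_C": np.array([0.2, 0.1, 0.9]),   # very different profile
}
print(recommend("song_A", catalog))  # song_B ranks first
```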

 

New relationships between stakeholders in the music industry

One thing is clear: sales of physical albums cannot compete with the constantly expanding digital offering, and album sales are in steady decline. Performers and producers have had to adapt and strike new agreements. Although artists' income has historically depended less on album sales than on proceeds from concerts, the same was not true for labels, which have now come to rely on live events and even merchandising. “The change in consumption has led to the appearance of what are called ‘360° deals’, in which a record company collects all the revenue from its artists' activities and pays them a percentage,” explains Marc Bourreau.

 

Robbie Williams was one of the first music stars to sign a 360° deal in 2001, handing the revenue from his numerous business segments over to his record company. Credits: Matthias Muehlbradt.

 

Will these changes result in even less job security for artists? “Aside from superstars, you must realize that performers have a hard time making a living from their work,” the economist observes. He bases this view on a study carried out in 2014 with Adami,[2] which shows that with an equivalent level of qualification, an artist earns less than an executive counterpart — showing that music is not a particularly lucrative industry for performers.

Nonetheless, digital technology has not necessarily made things worse for artists. According to Marc Bourreau, “certain online tools now enable amateurs and professionals to produce their own music, using digital studios and finding mixers through social networks…” YouTube, Facebook and Twitter also offer good opportunities for self-promotion. “Fan collectives that operate through social media groups also play a major role in music distribution, especially for lesser-known artists,” he adds.

In 2014, 55% of artists owned or used a home studio for self-production, up from 45% in 2008. The industry's digital transition has therefore changed not only the way music is consumed, but also the way it is produced. Although things seem to be stabilizing here as well, it is hard to say whether these major digital transformations are behind us. “It is always hard to predict the future in economics,” Marc Bourreau admits with a smile. “I could tell you something, and then realize ten years from now that I was completely wrong.” No problem, let's meet again in ten years for a new review!

 

[1] The Innovation and Regulation Chair includes Télécom ParisTech, Polytechnique and Orange. Its work is focused on studying intangible services and on the dynamics of innovation in the area of information and communication sciences.

[2] Adami: A civil society for the administration of artists’ and performers’ rights. It collects and distributes fees relating to intellectual property rights for artistic works.

 

[divider style=”solid” top=”5″ bottom=”5″]

Crowdfunding – more than just a financial tool

As a symbol of the new digital economy, crowdfunding enables artists to fund their musical projects with the help of citizens. Yet crowdfunding platforms also have another advantage: they make a form of market research possible. Jordana Viotto da Cruz, a PhD student at Télécom ParisTech and Paris 13 University under the joint supervision of Marc Bourreau and François Moreau, has observed in her ongoing thesis work that project initiators use these tools to obtain information about potential album sales. In an econometric study, she showed that for projects that failed to meet their funding threshold, higher pledged amounts had a positive effect on the likelihood of the album being marketed on platforms such as Amazon in the following months.
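Her question can be pictured as a binary-outcome regression: among failed campaigns, does the amount pledged predict whether the album is later marketed? The sketch below uses scikit-learn with placeholder data to show the shape of such a model; it is not her actual dataset or specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data for failed campaigns (invented for illustration):
# x = share of the funding goal that was pledged,
# y = 1 if the album was later marketed on a platform such as Amazon.
pledge_ratio = np.array([[0.1], [0.2], [0.3], [0.5], [0.6], [0.8], [0.9]])
marketed     = np.array([  0,    0,    0,    1,    0,    1,    1  ])

model = LogisticRegression().fit(pledge_ratio, marketed)

# Estimated probability of marketing for a project that raised 70%.
print(model.predict_proba([[0.7]])[0, 1])
```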

[divider style=”solid” top=”5″ bottom=”5″]


Artificial Intelligence: learning driven by children’s curiosity

At Télécom Bretagne, Mai Nguyen is drawing on the way children learn in order to develop a new form of artificial intelligence. She is hoping to develop robots capable of adapting to their environment by imitating the curiosity humans have at the start of their lives.

 

During the first years of life, humans develop at an extremely fast pace. “Between zero and six years of age, a child learns to talk, walk, draw and communicate…” explains Mai Nguyen, a researcher at Télécom Bretagne. This scientist is trying to better understand this rapid development, and reproduce it using algorithms. From this position at the interface between cognitive science and programming, Mai Nguyen is seeking to give robots a new type of artificial intelligence.

During the earliest stages of development, children do not go to school, and their parents are not constantly sharing their knowledge with them. According to the researcher, while learning does occur sporadically in an instructive manner — through the vertical flow of information from educator to learner — it primarily takes place as children explore their environment, driven by their own curiosity. “The work of psychologists has shown that children themselves choose the activities through which they increase their capacities the most, often through games, by trying a wide variety of things.”

 

A mix of all learning techniques

This method of acquiring skills might look like simple trial and error. “Trial and error is usually a technique adopted in order to acquire one specific skill requested by a third party,” she explains. But Mai Nguyen points out that learning through curiosity goes beyond this: it enables the acquisition of several skills, with the child learning without specific instructions.

In cognitive science and robotics, the technical name for this curiosity is intrinsic motivation. Mai Nguyen uses this motivation to program robots capable of independently deciding how to acquire a set of skills. Thanks to the researcher's algorithms, “the robot chooses what it must learn, and decides how to do it,” she explains. It can therefore identify, on its own, an appropriate human or machine contact from whom to seek advice.

Likewise, it will decide on its own if it should acquire a skill through trial and error, or if it would be better to learn from a knowledgeable human. “Learning by intrinsic motivation is in fact the catalyst for existing methods,” explains Mai Nguyen.
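One common way to formalize this "catalyst" role in the intrinsic-motivation literature is to treat each learning strategy (trial and error, asking a teacher, and so on) as an arm of a bandit, and to reward each arm by the empirical learning progress it yields. The sketch below is a generic illustration of that idea, not Mai Nguyen's specific algorithm; the strategy names are placeholders.

```python
import random

class CuriousLearner:
    """Pick the learning strategy with the best recent learning
    progress (drop in error), with occasional random exploration."""

    def __init__(self, strategies, epsilon=0.1):
        self.progress = {s: 0.0 for s in strategies}    # recent progress
        self.last_error = {s: 1.0 for s in strategies}  # last observed error
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:                 # explore
            return random.choice(list(self.progress))
        return max(self.progress, key=self.progress.get)   # exploit

    def update(self, strategy, new_error):
        # Learning progress = how much the error decreased this time.
        self.progress[strategy] = self.last_error[strategy] - new_error
        self.last_error[strategy] = new_error

# Usage: the robot tries a strategy, measures its error on the task,
# and feeds it back, gradually favoring whichever strategy helps most.
learner = CuriousLearner(["trial_and_error", "ask_teacher"])
strategy = learner.choose()
learner.update(strategy, new_error=0.6)
```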

 

Robots better adapted to their environment

There are several advantages to copying a child's progress in early development and applying the same knowledge and skill acquisition mechanisms to robots. “The idea is to program an object that is constantly learning by adapting to its environment,” explains the researcher. This is a departure from the conventional approach, in which the robot leaves the factory in a finished state, with defined skills that remain unchanged for its entire existence.

In Mai Nguyen's view, the conventional approach has many limitations, particularly regarding the variability of the environment: “The robot can learn, with supervision, to recognize a table and chairs. But in a real home, these objects are constantly being moved and wearing out… How can we ensure it will be able to identify them without making mistakes?” Learning through intrinsic motivation, on the other hand, enables the robot to adapt to an unknown situation and customize its knowledge based on its environment.

The variability is not only spatial; it is also temporal. A human user's demands on a robot are not the same as they were ten years ago, and there is no reason to believe they will be the same ten years from now. An adaptive robot therefore has a longer useful life in the face of societal change than a pre-programmed object.

 


Learning through curiosity allows Mai Nguyen and her colleagues to develop robots capable of learning tasks in a hierarchical fashion.

Data sana in robot sano

It seems difficult for supervised machine learning to compete with artificial intelligence driven by intrinsic motivation. Mai Nguyen reports on recent experiments in which faulty robots trained with machine learning were replaced: “When an initial robot ceased to be operational, we took all of its data and transferred it to an exact copy.” The resulting second robot did not work well either.

This phenomenon is explained by the embodiment of robots trained through machine learning: each body is tied to the data acquired during its own conditioning. “Curious” robots do not face this problem, since they acquire data intelligently, collecting data customized to their own body and environment while limiting its volume and acquisition time.

 

When will curious robots become a part of our daily lives?

Artificial intelligence that adapts to its environment to this extent is especially promising for home care services and improving quality of life. Mai Nguyen and her colleagues at Télécom Bretagne are working on many such applications; these robots could become precious helpers for the elderly and for disabled individuals. Their development is still relatively recent from a scientific and technological point of view: although the first psychological theories on intrinsic motivation date back to the 1960s, their transposition to robots has only taken place over the past fifteen years.

The scientific community working on the issue has already obtained conclusive results. By making artificially intelligent robots interact, scientists have observed them develop common languages. Although the vocabulary differed from one trial to the next, the robots always converged, starting from scratch, on a state in which the artificial intelligences could communicate with one another. It is a little like humans from different cultures and languages ending up together on a desert island.
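The emergence of a shared vocabulary is often demonstrated with a "naming game": at each round two agents meet, a speaker names an object, and on failure the hearer adopts the speaker's word, so the population converges on common names. The sketch below is a textbook version of that game, offered as an illustration of the kind of experiment described, not as the exact protocol of the studies mentioned here.

```python
import random

def naming_game(num_agents=10, rounds=2000):
    """Minimal naming game for a single object: agents invent words,
    then converge on a shared one through pairwise interactions."""
    vocab = [set() for _ in range(num_agents)]  # each agent's words
    for _ in range(rounds):
        speaker, hearer = random.sample(range(num_agents), 2)
        if not vocab[speaker]:                   # invent a new word
            vocab[speaker].add(f"word{random.randint(0, 10**6)}")
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:                # success: both keep only it
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                                    # failure: hearer learns it
            vocab[hearer].add(word)
    return vocab

print(naming_game())  # typically ends with every agent sharing one word
```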

 

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published in June 2016, is dedicated to artificial intelligence (AI). It offers an overview of the research underway in this area throughout the world, including the extensive work being carried out at Institut Mines-Télécom schools. This 27-page brochure defines forms of intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions its emerging potential, and examines how it can be used by humans.

[/box]