
Remote sensing explained: from agriculture to major disasters

There are numerous applications for remote sensing, from precision agriculture to helping with the deployment of aid in major disasters. Grégoire Mercier, a researcher at IMT Atlantique, explains the key notions of this remote observation method using examples from his research.

 

Detecting invisible objects from space is now possible through remote sensing. This remote observation method is currently used to monitor air quality in urban areas, track ecosystems, detect illegal fishing and much more. Its applications are as numerous as the territories it can cover. Its aim: to spatialize information that would otherwise have to be gathered by point measurements on the ground.

Over the past few years, Brittany has become a key player in certain remote sensing topics in France and Europe through GIS Bretel and the Vigisat observation platform. “We are going through an interesting period for remote sensing, because we are carrying out missions with an operational purpose,” explains Grégoire Mercier, a researcher at IMT Atlantique who specializes in remote sensing image processing. “We can respond to issues concerning civil society and not just show that we can observe clouds using images,” he adds. With the help of our expert, we provide an overview of the key elements of remote sensing, from methods to applications.

 

Retracing the history of signals in remote sensing

Grégoire Mercier defines remote sensing as “any remote observation for which electromagnetic radiation is used to study the interaction between waves and matter. Depending on the result, we obtain an understanding of the object which has interacted with the wave”.

Photons are a key element in both spaceborne and airborne remote sensing. Thousands of them travel through space at the speed of light until they reach the Earth’s atmosphere. At this point, things become more complicated due to the atmosphere itself, clouds and aerosols. The atmosphere is full of obstacles which may prevent photons from reaching the Earth’s surface. For example, when a photon comes into contact with a particle or water droplet, it is partially reflected and/or absorbed, sending new waves out in random directions. If it successfully reaches the ground, what happens next depends on where it lands. Vegetation, oceans, lakes or buildings… the reflected radiation differs according to the object struck.

Every element has its own spectral signature, which later enables it to be identified in the remote sensing images acquired by a sensor on board a satellite, aircraft or drone.

 

Spectral response and remote observations

Every object has a unique signature. “When we observe chlorophyll, lots of things appear in green, we see absorption in the red and, slightly further along the spectrum, a very specific response in the near-infrared region,” explains Grégoire Mercier. Observing these spectral responses indicates that the remotely observed zone is vegetation. However, the readings are adjusted according to moisture and the presence of pathogens (such as a bacterial disease in the plant), which modify the plant’s spectral “ID card”. This is how researchers detect water stress or infections before the effects become visible to the naked eye, a process that is particularly useful in precision agriculture.
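To make the idea concrete, here is a minimal sketch (not taken from the research described here) of how a vegetation index such as NDVI can be computed from the red and near-infrared bands of a multispectral image; the array values and the 0.4 threshold are illustrative assumptions.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

# Illustrative 2x2 scene: healthy vegetation reflects strongly in the near-infrared.
red = np.array([[0.05, 0.30], [0.06, 0.28]])
nir = np.array([[0.60, 0.32], [0.55, 0.30]])

index = ndvi(red, nir)
vegetation_mask = index > 0.4  # threshold chosen for illustration only
print(index)
print(vegetation_mask)
```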

Airborne remote sensing provides information on practices and the evolution of landscapes. “At IMT Atlantique we worked in collaboration with the COSTEL laboratory on the characterization of wetland areas in agriculture. The aim was to create a tool for operational purposes. We were able to prove that the use of hedges helped prevent surface run-off and therefore the pollution of water courses.”

 

Active/passive remote sensing and wavelengths

There are two types of remote sensing, depending on the type of sensor used. When the sun’s radiation is used for observation, we talk of passive remote sensing. In this case, the sensors used are referred to as “optical”. The wavelengths in question (typically between 400 and 2,500 nanometers) allow lenses to be used. “The electromagnetic wave interacts with the molecular energy levels on a nanometric scale, which enables us to observe the constituents directly,” explains Grégoire Mercier. This is how the composition of the Earth’s atmosphere can be observed, for example.

But observations are not purely limited to the visible part of the electromagnetic spectrum. The aim is to go beyond the human visual system with observations in the thermal infrared range (micrometric wavelengths) and in the microwave range (centimetric or decimetric wavelengths). “When we use wavelengths that are no longer nanometric, but centimetric, the wave/matter interaction with these electromagnetic waves is completely different,” explains Grégoire Mercier.

These interactions are characteristic of radar observations. This time, it is a question of active remote sensing because a wave is emitted toward the surface by the sensor before it receives the response. “For these wavelengths (from 1 centimeter to 1 meter), everything happens as though we were blind and touching the surface with a hand the size of the wavelength. If the surface is flat, we won’t see anything because we won’t feel anything. The texture of an element provides information.” In other words, radar observation of the sea’s surface reveals ripples corresponding to capillary waves. If we look at a lake, on the other hand, nothing can be seen. This helps scientists identify what they are observing.

 

Image processing and applications for large-scale disasters

Grégoire Mercier has made improving sensing methods part of his daily work. “My research is based on operational methods that aim to detect changes with a high level of reliability,” he explains. More specifically, the researcher works on image analysis for time-related applications, and has collaborated with the CNES on the creation of tools used during major disasters.

Initiated in 2000 by the CNES and ESA, the international charter on “Space and Major Disasters” currently unites 16 space agencies from all over the world. The charter can be activated following a major natural or industrial disaster, depending on the level of severity. “There is always one space agency on watch. When the charter is activated, it must do everything possible to update the map of the affected area,” explains Grégoire Mercier. To obtain this post-disaster map, the space agency requisitions any available satellite and uses it to map the zone before civil protection teams are deployed. The objective must generally be achieved within three hours.

“Rapid mapping does not allow you to choose the most suitable sensor or the best perspective. The observation then has to be compared to the one corresponding to the previous situation, which can be found in databases. The problem is that the images will probably not have been produced by the same sensor and will not have the same spatial resolution, so the idea is to implement tools that will facilitate comparison of the two images and the management of heterogeneous data. That’s where we come in,” Grégoire Mercier continues.
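As a rough illustration of the kind of pre-processing this involves, the sketch below resamples a coarser post-disaster image onto the grid of an archive image and computes a simple normalized difference map. It is a toy example under stated assumptions (synthetic arrays, bilinear resampling, a 0.5 change threshold), not the comparison operators actually developed by the team.

```python
import numpy as np
from scipy.ndimage import zoom

def change_map(before: np.ndarray, after_lowres: np.ndarray) -> np.ndarray:
    """Resample 'after_lowres' to the grid of 'before', then compare the two images."""
    factors = (before.shape[0] / after_lowres.shape[0],
               before.shape[1] / after_lowres.shape[1])
    after = zoom(after_lowres.astype(float), factors, order=1)  # bilinear resampling

    # Normalize each image to [0, 1] so sensors with different dynamics can be compared.
    def normalize(img):
        return (img - img.min()) / (img.max() - img.min() + 1e-9)

    return np.abs(normalize(before.astype(float)) - normalize(after))

# Synthetic data: a 100x100 archive image and a 50x50 post-disaster acquisition.
rng = np.random.default_rng(0)
before = rng.random((100, 100))
after_lowres = rng.random((50, 50))

changes = change_map(before, after_lowres)
print("Fraction of pixels flagged as changed:", float((changes > 0.5).mean()))
```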

Also see the video on I’MTech: Communicating in emergencies and natural disasters

Having exceeded the initial aims of the SPOT satellite (Satellite Pour l’Observation de la Terre), remote sensing has sufficiently proven its worth to become a vital tool for the observation of territories. The task is now to establish operational image processing methodologies, as proposed by the ANR PHOENIX project.

 

[divider style=”normal” top=”20″ bottom=”20″]

Remote sensing for monitoring the natural evolution of a landscape: the ANR PHOENIX project

Grégoire Mercier is involved in the ANR PHOENIX project. Launched in 2015, the project notably aims to establish reliable remote sensing methodologies which will be used in characterizing the natural evolution of landscapes. In this way, it will be possible to analyze large-scale structures such as alpine glaciers and the Amazonian rainforest at different periods to determine the impact of various types of changes on their evolution. The use of satellite data for monitoring the environment will allow analysis of its current state and forecasting of its future state. Find out more

[divider style=”normal” top=”20″ bottom=”20″]

 


TeraLab, a big data platform with a European vision

TeraLab, an IMT platform aimed at accelerating big data projects by uniting researchers and companies, has held the “Silver i-Space” quality label since December 1st, 2016. This label, awarded by the Big Data Value Association, is a guarantee of the quality of the services the platform provides, both at the technical and legal levels. The label testifies to TeraLab’s relevance in the European big data innovation ecosystem and the platform’s ability to offer cutting-edge solutions. Anne-Sophie Taillandier, the platform’s director, tells us about the reasons for this success and TeraLab’s future projects.

 

What does the “Silver i-Space” label, awarded by the Big Data Value Association (BDVA) on December 1st, mean for you? 

Anne-Sophie Taillandier: This is an important award, because it is a Europe-wide reference. The BDVA is an authoritative body, because it ensures the smooth organization of the public-private partnership on big data established by the European Commission. This label therefore has an impact on our ability to work at the continental level. DG Connect, the branch of the Commission in charge of rolling out a digital single market in Europe, pays particular attention to this: for example, it prefers that H2020 research projects use pre-existing platforms. Therefore, this label provides better visibility within the international innovation ecosystem.

 

In addition to TeraLab, three other platforms have been awarded this label. Does this put you in competition with the other platforms at the European level?

AST: The i-Spaces are complementary, not competitive. With TeraLab, we insist on creating a neutral zone: it is a breath of fresh air in the life cycle of a project, so that people—researchers and companies— can test things peacefully. The complementarity that exists between the platforms enables us to combine our energies. For example, one of the other recipients of the Silver i-Space label in December was SDIL, which is based in Germany and is more focused on industrial solutions. The people who contact the SDIL have already made an industrial choice. The stakeholders who contact TeraLab have not yet made a choice, and want to explore the potential value of their data.

 

How do you explain this recognition by the BDVA?

AST: TeraLab has always sought to be cutting-edge. We offer a platform that is equipped with the latest tools. Since the fourth quarter of 2016, for example, the platform has been equipped with GPUs: processors initially designed for computing graphics, but now also used for deep learning applications. The computing power they provide cuts the training time of the algorithms by a factor of fifteen. We can therefore make much more powerful machines available to researchers working in the area of deep learning. Until now, the infrastructure did not allow for this. Generally speaking, if we feel that a project needs specific equipment, we look at whether we can introduce it, and whether it makes sense to do that. This constant updating is also one of the tasks set as part of the Investments for the Future program (PIA) by the Directorate-General for Enterprise.
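For readers curious about what this looks like in practice, here is a minimal, generic sketch (not TeraLab-specific) of running one deep learning training step on a GPU with PyTorch when one is available; the model and data are synthetic placeholders.

```python
import torch
import torch.nn as nn

# Use the GPU when available; this device switch is where the speed-up comes from.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(256, 128, device=device)        # synthetic batch of features
targets = torch.randint(0, 10, (256,), device=device)  # synthetic labels

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"Ran one training step on: {device}")
```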

 

Is it primarily the platform’s technical excellence that has been recognized?

AST: That’s not all: TeraLab is an infrastructure, but also an ecosystem and a set of services. We assist our partners so that they can make their data available. We also have quality requirements regarding legal aspects. For example, we are working towards obtaining authorization from the Shared Healthcare Information Systems Agency (ASIP) to allow us to store personal health data. From a technical perspective, we have all we need to store this data and work with it. But we need to meet the legal and administrative requirements in order to do this, such as meeting the ISO 27001 standard. We must therefore provide guarantees of the way we receive data from companies, establish contracts, etc.

 

Have the upstream aspects, prior to data processing, also been the focus of innovations?

AST: Yes, because we must constantly ensure consent regarding the data. Our close relationship with IMT researchers is an asset in accomplishing this. Therefore, we have projects on data watermarking, cryptographic segmentation, and the blockchain. Data analysis and extracting value from this data can only be carried out once the entire upstream process is completed. The choice of tools for addressing privacy issues is therefore essential, and we must constantly ensure that we remain cutting-edge in terms of security as well.

 

How are these quality criteria then reflected in the business aspect?

AST: Today, TeraLab is involved in projects that are related to many different themes. We talked about the blockchain, but I could also mention the industry of the future, energy, tourism, health care, insurance, open source issues, interoperability… And, more importantly, since 2016, TeraLab has achieved financial equilibrium: the revenue from projects offsets the operating costs. This is far from trivial, since it means that the platform is sustainable. TeraLab will therefore continue to exist after 2018, the completion date for the PIA that initiated the platform.

 

What are TeraLab’s main objectives for the coming year?

AST: First of all, to strengthen this equilibrium, and continue to ensure TeraLab’s sustainability. To accomplish this, we are currently working on big data projects at the European level to respond to calls for proposals from the European Commission via the H2020 program. Then, once we have obtained the authorization from the ASIP for storing personal health data, we plan to launch health research projects, especially in the area of “P4” medicine: personalized, preventive, predictive, and participatory.

 


Cybersecurity: new challenges to face up to

Computer hacking and sensitive data theft are on the rise. Companies are now highly connected and must adapt their security policies to strengthen the protection of their information assets. Hervé Debar, a researcher at Télécom SudParis and expert in cybersecurity, studies the different types of cyber attacks in order to optimize their detection.

 

The era when IT students developed viruses and then enjoyed their “success” based on the number of computers infected across the world is long gone… Today, the lure of money is the primary motivation for hackers and internet fraudsters, and their targets include companies in the French CAC 40 and Operators of Vital Importance (OIV), i.e. those operating in sectors of vital national importance (transport, energy, telecoms, etc.). SMEs and SMIs are also affected by the rise in cyber attacks: they work as subcontractors for client institutions and are therefore targeted by hackers who want to obtain confidential files (R&D programs, lists of suppliers, etc.) or directly infect the main client’s Information System (IS).

 

Highly connected companies

Two factors explain this evolution. The first is the increase in digital data managed and exchanged by companies. Information is at the core of their business activities. Secondly, organizations have become highly connected with fixed and mobile devices, peripheral devices (printers, cameras etc.) connected to networks, sensors in production lines, automated systems and the Internet of Things (IoT). “The result is that the economic value of connection is greater than the economic value of protection. We must therefore do the best we can to isolate what has to be isolated but still allow companies to benefit from a certain level of connectivity” Hervé Debar stresses.

In terms of computer security, companies have to manage several problems, including “the acceptability of security by users. If the recommended solutions are too complicated, they won’t use them and will find a way to get around them. Security must support use, not hinder it,” the expert from Télécom SudParis explains.

 

Complex regulations

To face up to this challenge, companies must be able to manage two major limitations. The first concerns the technical aspect. Services such as BYOD (Bring Your Own Device) or corporate Appstores (portals allowing employees to download selected applications to improve productivity) are being rolled out at a quicker rate than security measures.

The second limitation concerns the human dimension. Regulations on digital technology are very complicated, especially due to the Military Programming Law and obligations imposed by the CNIL (French National Commission on Data Protection and Civil Liberties), and will become even more so in May 2018 with the introduction of the new European regulation on data protection. All companies will have to report personal data violations, in contrast to the law of 1978 currently in force, which only concerns suppliers of communications services.

These legal constraints require companies to bring in experts who can administer and roll out IT security measures on a daily basis and keep staff members informed through training.

 

[box type=”shadow” align=”” class=”” width=””]

Attacks by computer zombies

DDoS (Distributed Denial of Service) attacks are a company’s worst nightmare. They use a network of thousands of computers or connected devices (often compromised) to interrupt the targeted service or services. There was a major attack of this kind in France between 18 and 21 September 2016, when the servers of OVH, a French web host, were inundated with millions of requests peaking at as high as one terabit per second. A botnet of around 150,000 IP cameras infected by cyber criminals was behind the attack. Hervé Debar has been studying this type of attack for a number of years along with 6cure – a Normandy-based start-up specializing in traffic cleansing – and in the framework of the NECOMA research project (Nippon-European Cyberdefense-Oriented Multilayer threat Analysis), one of the six FP7 projects financed by the European Commission under the coordinated call between Europe and Japan. His team’s work consists in studying the possibilities offered by the mechanisms and functions of the network itself for detecting large-scale DDoS attacks that could saturate a single local defense system. The idea is to identify the attacking flow according to its provenance or technical characteristics in order to differentiate it from the legitimate flow, with the aim of throttling bad traffic to leave more room for “good” traffic.[/box]
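The sketch below gives a highly simplified picture of that idea: it counts requests per source over a short time window and flags sources whose rate stands out from the rest of the traffic. The window length and threshold are arbitrary assumptions, not parameters from the NECOMA work.

```python
from collections import Counter

def flag_suspect_sources(events, window_seconds=10, rate_threshold=100):
    """events: iterable of (timestamp, source_ip).
    Returns the sources whose request rate over the most recent window
    exceeds 'rate_threshold' requests per second."""
    if not events:
        return set()
    latest = max(ts for ts, _ in events)
    recent = [ip for ts, ip in events if latest - ts <= window_seconds]
    counts = Counter(recent)
    max_requests = rate_threshold * window_seconds
    return {ip for ip, n in counts.items() if n > max_requests}

# Synthetic traffic: one very chatty source among ordinary clients.
events = [(t, "203.0.113.7") for t in range(0, 10) for _ in range(150)]
events += [(t, f"198.51.100.{i}") for t in range(0, 10) for i in range(20)]

print(flag_suspect_sources(events))  # expected: {'203.0.113.7'}
```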

 

Detecting cyber attacks

It is crucial to detect these attacks as early as possible in order to combat them. Identifying cyber attacks is one of IMT’s principal research topics. “We mainly work on the detection of and protection against distributed denial of service attacks (see insert) and those which more closely target environments on the network side”, explains Hervé Debar. The process has seen several developments since the first work carried out at the start of the 1980s by the American military. It was initially limited to rolling out a few probes. The few alerts raised were dealt with “manually”. Then the number of sensors increased and alerts became more frequent. To manage them efficiently, companies implemented SIEMs (Security Information and Event Management).

“Today, we need to automate part of the reaction so that operators can concentrate on the attacks that are more difficult to deal with. We are heading toward more autonomous and more reactive systems which can protect themselves against attacks. But it remains a complicated matter. Artificial intelligence (my PhD subject) is one possible way to improve our response to a certain number of threats,” explains Hervé Debar. The other option is training staff members to react better to attacks. We talk of “cyber ranges”, which allow realistic simulation of cyber attacks and evaluation of cyberdefense tactics. These centers are designed to help OIV operators in particular make the right decisions in response to an attack’s impact.

The professionalization of cybercriminals and the increase in vulnerability, which concerns both IT networks and industry, require greater awareness among all actors, both public and private.

 


A very involved practitioner

Hervé Debar is not a theorist, but a keenly involved expert with 25 years of experience in the world of cybersecurity and R&D. His work includes more than 100 scientific articles on the subject, the coordination of three European projects (WOMBAT, NECOMA and PANOPTESEC) and participation in a number of French and European programs on the subject. His long career in the private sector partially accounts for this considerable activity. An engineer by training, he obtained his PhD before joining Dassault AT (Automation and Telecoms) to work on European research projects. He then left for Switzerland to join IBM where he developed the IDMEF standard and a product called Tivoli Risk Manager, one of the first security information and event management products on the market. Upon returning to France, he joined Orange Labs in Caen and became an emeritus expert and leader of research activities on security. He then brought his considerable experience in the private sector to Télécom SudParis, where he has been in charge of the “Networks and Security” department since 2009.

 

FEMTO Engineering: a new component of the TSN Carnot institute

In July 2016 the Ministry for Education and Research renewed the Télécom & Société numérique Carnot institute accreditation and, at the same time, the institute presented a new component: the FEMTO Engineering Center for Technological Development. We interviewed the Director of the FEMTO-ST laboratory, Laurent Larger, who told us a little more about the identity of this new component of the TSN Carnot institute and its domains of expertise, which could be of interest to companies for their partnership research projects.

 

What is the difference between FEMTO-ST and FEMTO Engineering?

Laurent Larger: FEMTO Engineering was created in 2013 by FEMTO-ST and is associated with the FC’Innov foundation (Franche-Comté Innovation). FEMTO Engineering and the FEMTO-ST laboratory are strategically close: researchers in the two structures share the same building, their research overlaps, and the experts at FEMTO Engineering can draw on advice from those in the laboratory. FEMTO Engineering is devoted to promoting and transferring the technology developed at FEMTO-ST. The advantage of having a structure like FEMTO Engineering is that it is a more attractive partner for industry and SMEs.

 

What are FEMTO Engineering’s fields of work?

LL: In principle, all the research topics in which FEMTO-ST has expertise. FEMTO-ST has seven main departments: automation and micro-robotics; IT; energy; applied mechanics; micro-nano-science and systems; optics; time and frequency analysis (metrology of time, high-end radiofrequency and microwave sources). In practice, we have identified a certain number of topics which meet the requirements of a Carnot institute in terms of promotion and technology transfer or development.


Laurent Larger, Director of FEMTO-ST institute

Let’s take energy for example. FEMTO-ST has internationally-acknowledged expertise in the field of fuel cell systems. FEMTO Engineering rolls out this expertise in the context of renewable energy using hydrogen. We are also currently working with a vehicle manufacturer on Stirling engines using the technique of heat recovery. FEMTO Engineering is also involved in the biomedical field. Work is currently being carried out on creating biochips allowing for the identification of living species. This allows us, for example, to propose new solutions for proteomic analysis on gold and silica chips using SPR (surface plasmon resonance).

 

What are FEMTO Engineering’s strengths?

LL: FEMTO-ST’s strengths relate to its technological expertise in high-tech components and systems. As an example, let’s take two technologies related to FEMTO-ST’s proprietary expertise.

Femtosecond laser machining is an activity based on expertise in the formation of a femtosecond laser beam. It allows for nano-machining with accurate control in terms of both time and space. In other words, it allows us to make nano holes in materials (glass, silicon, diamond etc.) with unprecedented speeds of execution and extremely high aspect ratios. This method can be used for ultra-fast cutting or for making nano holes to structure thin films. Last but not least, it can be used in surface texturing, which allows us to change the adhesive properties of a surface, to make it hydrophilic or hydrophobic, for example.

Another example of FEMTO Engineering’s strengths is its expertise in the field of time and frequency metrology. We have designed a cryogenic sapphire oscillator at the international state of the art, which has set new records for stability. These oscillators are used to obtain reference frequencies for calibrating observation systems and have already been used by the ESA (European Space Agency). This expertise also has applications in the field of radiofrequencies and microwaves.

More broadly speaking, our very close links with FEMTO-ST enable FEMTO Engineering to rely upon a vast range of experimental knowledge and high-tech facilities structured around platforms and notably including MIMENTO, our micro-nano-manufacturing plant (a member of the French CNRS RENATECH network).

 


Manufacturing a resin mask. This mask will be used to create patterns on substrate materials, for example for making metallic electrodes or for engraving. Photo credits: CC Ludovic Godard – UFC

 

What was the reason for wanting to join the TSN Carnot institute, what are your expectations and how will you benefit from it?

LL: We were already familiar with the Carnot institute environment because we belonged to it between 2006 and 2011. We wanted to return in order to benefit from new opportunities in terms of promotion and transfer. In this sense, the deciding factor with the TSN Carnot institute was its solidity. Out of all the Carnot institutes, it is very well positioned in terms of the level of activity. This standing appealed to us and we plan on contributing to it!

In terms of expertise, this is the most complementary Carnot institute in relation to our technology offering. Thanks to this complementarity, we can offer new opportunities in terms of promotion and industrial relations and partnerships. Another important thing to mention is our location. We are on the border with Switzerland, with which we have a lot of international interaction, and in 2015 this generated more than 40% of our turnover from abroad. This trend strengthened in 2016, with that turnover doubling.

 

Will FEMTO Engineering evolve after joining the TSN Carnot institute?

LL: For the moment, the workforce at FEMTO Engineering stands at 14 people, compared with 800 in the laboratory. This ratio provides significant room for growth, particularly in relation to the pool of expertise within FEMTO-ST. We benefit from this interaction with the laboratory, which offers us plenty of other topics that could be rolled out at FEMTO Engineering. The people with this expertise are mainly former PhD students and engineers from the laboratory. The concept is to offer members of industry the best possible interface to facilitate their innovation projects. Joining the TSN Carnot institute is an opportunity for us to grow and develop new activities with the business world.

 

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


OISPG: Promoting open innovation in Europe

On January 1st, 2017, Pierre Simay was appointed as the new OISPG Rapporteur. This group of experts from the European Commission supports and promotes open innovation practices, particularly in the context of the Horizon 2020 program.

 

“Today’s companies can no longer innovate alone. They exist in innovation ecosystems in which the collaborative model is prioritized,” explains Pierre Simay, Coordinator for International Relations at IMT. Open innovation is a way of viewing research and innovation strategy as being open to external contributions through collaboration with third parties.

The Horizon 2020 framework program pools all the European Union funding for research and innovation. The program has a budget of nearly €80 billion over a 7-year period (2014-2020). Each year, calls for proposals are published to finance research and innovation projects (individual and collaborative). The European Commission services in charge of Horizon 2020 have established external advisory groups to advise them in preparing these calls. Since 2010, IMT has been actively involved in the expert group on open innovation, the OISPG (Open Innovation Strategy and Policy Group). Pierre Simay, the recently appointed OISPG Rapporteur, presents this group and the role played by IMT within it.

 

What is the OISPG?

Pierre Simay: OISPG is a DG CONNECT expert group, the European Commission’s Directorate General for Information and Communication Technology. The open innovation phenomenon has increased over the past few years, with the appearance of more collaborative and open models. These models are based, for example, on user participation in research projects and the development of living labs in Europe (EnoLL network). I should also mention the new research and innovation ecosystems that have emerged around platforms and infrastructures. This is the case for the European “Fiware” initiative which, by making copyright-free software building block platforms available to developers and SMEs, seeks to facilitate the creation and roll-out of the internet applications of the future in what are referred to as the vertical markets (healthcare, energy, transportation, etc.).

Open innovation refers to several concepts and practices – joint laboratories, collaborative projects, crowdsourcing, user innovation, entrepreneurship, hackathons, technological innovation platforms, and Fablabs which are still relatively new and require increasingly cross-sectoral collaborative efforts. Take farms of the future, for example, with precision agriculture that requires cooperation between farms and companies in the ICT sector (robotics, drones, satellite imagery, sensors, big data…) for the deployment and integration of agronomic information systems. OISPG was created in response to these kinds of challenges.

Our mission focuses on two main areas. The first is to advise the major decision-makers of the European Commission on open innovation matters. The second is to encourage major private and public European stakeholders to adopt open innovation, particularly through the broad dissemination of the practical examples and best practices featured in the OISPG reports and publications. To accomplish its mission, OISPG has been organized around a panel of 20 European experts from the industry (INTEL, Atos Origin, CGI, Nokia, Mastercard…), the academic world (Amsterdam University of Applied Sciences, ESADE, IMT…), and the institutional sector (DG CONNECT, the European Committee of the Regions, Enoll, ELIG…).

 

What does your role within this group involve?

PS: My role is to promote the group’s work and maintain links with the European Commission experts who question us about the current issues related to the Horizon 2020 program and who seek an external perspective on open innovation and its practices. Examples include policy that is being established in the area of digital innovation hubs, and reflections on blockchain technology and the collaborative issues it involves. OISPG must also propose initiatives to improve the definition of collaborative approaches and the assessment criteria used by the Commission in financing Horizon 2020 projects. In Europe, we still suffer from cumbersome and rigid administrative procedures, which are not always compatible with the nature of innovation and its current demands: speed and flexibility.

My role also includes supporting DG CONNECT in organizing its annual conference on open innovation (OI 2.0). This year, it will be held from June 13 to 14 in Cluj-Napoca, Romania. During the conference, political decision-makers, professionals, theorists and practitioners will be able to exchange and work together on the role and impacts of open innovation.

 

What issues/opportunities exist for IMT as a member of this group?

PS: IMT is actively involved in open innovation, with major programs such as those of the Fondation Télécom (FIRST program), our Carnot institutes and our experimentation platforms (for example, the TeraLab for Big Data). Our participation in OISPG positions us at the heart of European collaborative innovation issues, enables us to meet with political decision-makers and numerous European research and innovation stakeholders to create partnerships and projects. This also allows us to promote our expertise internationally.

 


Data centers: Taking up the energy challenge

Increasingly present throughout the world, data centers consume significant amounts of energy. Researchers at IMT Atlantique have conducted a study that combines renewable energy and electricity grids to power these infrastructures. To reduce energy consumption, scientists are also looking at the network and the anticipation and organization of tasks on the various servers.

 

Accessing this website via a search engine generates four times more CO2 than reaching it by typing the website address or using a bookmark, according to the ADEME.[1] While it is easy to forget, because of its lack of physical presence, that digital technology emits greenhouse gases, it is important to remember that data centers emit as much carbon dioxide as civil aviation (2% of global emissions) due to their electricity consumption. This observation is far from trivial, considering the significant increase in the number of data centers around the world. Furthermore, with the arrival of cloud computing, data centers are consuming increasing amounts of energy, leading to ecological and economic problems.

With each new data center that opens, reducing the environmental impact is therefore a key consideration. With the aim of reducing energy consumption, the EPOC collaborative project (Energy Proportional and Opportunistic Computing systems) combines alternative energies and the EDF grid to power single-site infrastructures containing around fifty servers. This purely theoretical project, launched in 2013 and financed by the Labex CominLabs, brings together researchers in computer science and telecommunications from IMT Atlantique. For this Green IT project, the researchers’ skills in computer optimization, virtualization, and software adaptation are combined with expertise in very high-speed optical networks. Let’s take a look at the theory at the heart of this project, while awaiting its implementation.

 

Technical optimization

In the EPOC theory, data centers are connected to the EDF grid while still being powered by renewable energy. This renewable energy provides electricity in a discontinuous manner, unlike the electricity provided by the traditional network. For example, in the case of a photovoltaic panel, there is a surge in electricity when the sun is at its peak, whereas there is no production at night. “Currently, the most expensive aspect in a renewable energy electric system is the batteries used to store the energy that is not consumed. We would like to do away with the storage aspect and try to consume the electricity that is produced directly,” explains Jean-Marc Menaud, a researcher specialized in Green IT at IMT Atlantique and the EPOC coordinator. This is all the more reason to make the best use of this intermittent supply of energy while powering the data centers in a consistent manner over time.

To achieve these goals, the researchers are improving network communication in particular. They have opted for an installation that is entirely connected via fiber optic technology. This reduces energy consumption during transfers between the servers. The transmission of information via a fiber optic system or, in other words, via light signals, consumes less electricity than an ADSL connection. ADSL, for example, uses cables containing copper and equipment that constantly consumes electricity.

 

Organizing tasks according to the electrical load

Once they are running and connected to a network, data centers are able to host two types of applications. Some, like search engines, require a constant online connection. Others must be performed before a deadline. The researchers are therefore able to coordinate the applications being used based on the energy received. When green energy is available, they can organize the tasks the way they like. Jean-Marc Menaud gives us an example: “At the end of every month, the accounting services must create the pay stubs in PDF format. These files must be available by the 30th of each month, but can be produced any time before this date. They can therefore be created when a high level of green electricity is available.”
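To illustrate the scheduling idea (purely as a sketch, not the EPOC algorithms themselves), the snippet below places deferrable jobs into the hours with the highest forecast green power before their deadlines; the forecast values and job list are made up, and jobs are allowed to run in non-contiguous hours as a simplification.

```python
def schedule_deferrable_jobs(jobs, green_forecast):
    """jobs: list of (name, duration_hours, deadline_hour).
    green_forecast: forecast green power per hour (arbitrary units).
    Greedily run each job in the greenest still-free hours before its deadline."""
    free = set(range(len(green_forecast)))
    plan = {}
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):
        candidates = sorted((h for h in free if h < deadline),
                            key=lambda h: green_forecast[h], reverse=True)
        chosen = candidates[:duration]   # hours need not be contiguous (simplification)
        free -= set(chosen)
        plan[name] = sorted(chosen)
    return plan

green_forecast = [0, 0, 1, 3, 8, 9, 7, 4, 2, 0]   # e.g. solar output over 10 hours
jobs = [("payroll_pdfs", 2, 10), ("backup", 1, 6)]
print(schedule_deferrable_jobs(jobs, green_forecast))
```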

In addition to the number of tasks in progress, it is also possible to play on the applications that are always online. The goal of a data center is to constantly ensure a certain level of quality of service, or SLA (Service Level Agreement). This is a question of software elasticity, meaning the ability to adapt an application’s operation according to the available energy. Take, for example, a website used to evaluate a trip from one point to another. Its SLA consists in providing a route within a given time frame. If the electrical load is low, it will only meet these simple requirements. If, however, the green electricity rate is high, the website can provide alternative routes, hence improving the service provided.
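A minimal way to picture this software elasticity (an illustrative sketch with made-up thresholds, not the project's actual mechanism) is a route-planning service that returns more or fewer alternatives depending on how much green power is currently available:

```python
def plan_routes(origin, destination, green_power_ratio):
    """Return more alternatives when plenty of green power is available.
    green_power_ratio: share of the data center's load currently covered
    by renewable production, between 0.0 and 1.0 (illustrative metric)."""
    base_route = f"{origin} -> {destination} (fastest route)"
    if green_power_ratio < 0.3:          # low green power: meet the bare SLA
        return [base_route]
    if green_power_ratio < 0.7:          # medium: one alternative
        return [base_route, f"{origin} -> {destination} (scenic alternative)"]
    return [base_route,                  # high: richer service
            f"{origin} -> {destination} (scenic alternative)",
            f"{origin} -> {destination} (toll-free alternative)"]

print(plan_routes("Nantes", "Brest", green_power_ratio=0.9))
```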

 

Regulating the workload

Reducing energy consumption also involves reducing the number of servers that are running. Before turning a server off, the applications running on it must be transferred to another server. To do this, the researchers use the virtual machine principle. The researchers have two possibilities for relieving the workload on a server: they either suspend the calculation, or they perform a migration using virtual machines. This process of scheduling tasks on the servers is a complex problem. It is, above all, a question of placement and distribution.

Jean-Marc Menaud explains: “This principle of placement is similar to the principle involved in filling a backpack. Imagine you are leaving for a trek with a 60-liter backpack. You can choose from a wide variety of food items to take with you. Each food item has a calorific value, a volume and a weight. Your goal is to gather a maximum amount of calories within the limitations of a backpack with a set volume, while minimizing the final weight. The solution is easy when there are only 5 food items. But if you have 10,000 items, the problem becomes more complex, because it is impossible to test all the possibilities. Here we have a similar situation. A server is a backpack that can contain a certain quantity of virtual machines. We must maximize the service provided (the calories) and minimize the energy (the weight).”
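The analogy maps directly onto the classic 0/1 knapsack problem. Here is a small, self-contained dynamic-programming sketch (with illustrative values, not EPOC's actual placement engine) that chooses which virtual machines to keep on a server of limited capacity while maximizing the service value they provide:

```python
def knapsack(items, capacity):
    """items: list of (name, weight, value). Returns (best_value, chosen_names).
    Classic 0/1 knapsack solved by dynamic programming."""
    n = len(items)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (_, w, v) in enumerate(items, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if w <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - w] + v)
    # Backtrack to recover the chosen items.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, w, _ = items[i - 1]
            chosen.append(name)
            c -= w
    return best[n][capacity], chosen

# A server with 16 GB of RAM and candidate VMs: (name, memory in GB, service value).
vms = [("web-frontend", 4, 30), ("batch-report", 8, 25),
       ("cache", 2, 18), ("analytics", 10, 40)]
print(knapsack(vms, capacity=16))
```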

 

SeDuCe: a full-scale test data center

The last area the EPOC researchers are studying is anticipation. Predicting an application’s energy needs, combined with the electricity production forecast, is the key to responsible consumption. This is an aspect that will be directly tested in 2017, with the implementation of the CPER [2] SeDuCe (Sustainable Data Center) project, the logical follow-up to three years of theory. It will enable a test single-site data center to be established, powered by photovoltaic panels. This complete infrastructure will enable the practical analysis of the theories being developed through EPOC. “This type of site is rare in France. They are only beginning to emerge at the industrial level. With photovoltaic panels that are increasingly affordable, we will be able to test the hypotheses of this advanced research directly. The site should be operational by the summer of 2017″, Jean-Marc Menaud concludes.

 

[1] ADEME: The French Environment and Energy Management Agency (Agence De l’Environnement et de la Maîtrise de l’Energie)

[2] CPER: French State-Regional Plan Contract (Contrat Plan Etat-Région)

 


Connected devices enter the piezoelectric generation

Piezoelectric polymers may be key in the manufacturing of future generations of connected devices. Thanks to their capacity to generate electricity under mechanical stress, they could greatly reduce the energy consumption of intelligent sensors and make them autonomous. But in order to achieve this, researchers must be able to make these polymers compatible with classic production techniques in the plastics processing industry. All the possibilities are being considered, from 3D printing to bio-based materials.

 

Certain materials are able to generate an electric charge on their surface when deformed. This property, called piezoelectricity, has proven to be of particular interest in the field of connected devices. “Wearables”, intelligent devices worn on the body such as exercise trackers, are regularly subject to torsion, flexion and compression. Using piezoelectric materials in their design would reduce the need for batteries, reducing charging frequency and consequently increasing their autonomy. Piezoelectricity, although limited to ceramics since its discovery at the end of the 19th century, is gaining ground in the polymer sector thanks to growing demand for flexible or transparent connected devices.

Nevertheless, these new plastics “will not compete with ceramics in their fields of application” warns Cédric Samuel, Materials Researcher at IMT Lille Douai (formerly Mines Douai and Télécom Lille). The coefficients that quantify the electricity produced by the piezoelectric effect are 15 times smaller for polymers than for ceramics: “30 picocoulombs per newton for the most efficient polymers, compared with 500 for ceramics” the scientist explained. But connected devices do not need high coefficients, since they only consume a small amount of energy. On the other hand, they require materials that are inexpensive to manufacture, a specification that would be met by piezoelectric polymers if researchers could make them compatible with classic production techniques in the plastics processing industry.
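As a quick back-of-the-envelope illustration of what those coefficients mean (the applied force is an arbitrary assumption), the charge generated by the direct piezoelectric effect scales linearly with the force, Q = d33 * F:

```python
# Charge generated by the direct piezoelectric effect: Q = d33 * F
d33_polymer = 30e-12   # C/N, order of magnitude quoted for the best polymers
d33_ceramic = 500e-12  # C/N, order of magnitude quoted for ceramics
force = 10.0           # newtons, arbitrary example (e.g. pressure on a small sensor)

q_polymer = d33_polymer * force
q_ceramic = d33_ceramic * force
print(f"Polymer: {q_polymer:.2e} C, Ceramic: {q_ceramic:.2e} C "
      f"(ratio ~{q_ceramic / q_polymer:.0f}x)")
```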

The researchers’ challenge – and it is a considerable one – lies in the processing and shaping of such materials. PVDF, which is currently the most efficient piezoelectric polymer, is far from easy to process. “Only a single type of PVDF crystal — the beta form — has piezoelectric properties,” Cédric Samuel explains. To obtain this form, PVDF must be deformed by more than 200% by stretching, at temperatures between 90 and 100°C. “This requires numerous processing and post-processing stages, which complicates the process and increases production cost” the researcher continued. Alternative options must be found in order to obtain a large-scale and inexpensive processing and shaping solution for piezoelectric PVDF.

PVDF crystals, a piezoelectric polymer with high potential

Researchers are exploring various other possibilities. Working with the University of Mons (Belgium) through a co-supervised PhD thesis, IMT Lille Douai is concentrating more particularly on polymer blends combining PVDF with another plastic: PMMA. This provides two advantages. Not only is PMMA less expensive than PVDF, but the blend allows a piezoelectric form of PVDF to be obtained directly through extrusion, letting scientists skip several processing stages. “The downside is that it leads to a lower piezoelectric coefficient,” Cédric Samuel points out, before adding, “but then again, applications for piezoelectric polymers do not necessarily need huge coefficients.”

 

Piezoelectric polymers through 3D printing

Although polymer blends are an option worth studying to improve processing of piezoelectric PVDF, they are not the only possible solution. Through the Piezofab project, which involves the two Carnot institutes of the IMT (M.I.N.E.S Carnot institute and Télécom & Société numérique Carnot institute) alongside IMT Atlantique (formerly Mines Nantes and Télécom Bretagne) and IMT Lille Douai, researchers are aiming to create sensors and electric generators from piezoelectric polymers through 3D printing. “We seriously believe we can succeed, because we have sufficient background on polymer-based additive manufacturing thanks notably to the expertise of Jérémie Soulestin on the subject,” declares Cédric Samuel confidently.

Researchers at IMT Lille Douai will endeavor to test the feasibility of the process. To do so, they will work on a modified form of PVDF supplied by their partner PiezoTech, a company which is part of the Arkema chemicals group. This PVDF has the specificity of crystallizing directly in the piezoelectric form when manufactured using 3D printing. Although the cost of the modified polymer is greater than that of its standard form, the manufacturing process could allow a serious reduction in the quantities used.

This inter-Carnot project will lead researchers to study the relevance of piezoelectric polymers for connected devices. IMT Atlantique’s task will be to incorporate piezoelectric polymers into radio transmitters and characterize their properties during use. “One of their greatest strengths is the integration of systems for specific applications, such as monitoring individual exercise” the researcher explained, referring to work carried out by Christian Person.

 

Piezoelectric materials can also be bio-based!

In the two options mentioned above, currently being studied by Cédric Samuel and his colleagues, the common factor is PVDF. However, “PVDF is an engineering polymer, which remains expensive compared to commodity polymers,” he underlines. “Ideally, we would like to be able to use the commodity polymers of the plastics industry, and preferably bio-based ones if possible,” he continues. To achieve this, IMT Lille Douai is directing a cross-border European project called Bioharv, which brings together academic partners in France and Belgium. The Universities of Mons, Lille and Valenciennes, as well as Centexbel, a scientific center specialized in the textiles industry, are working alongside the graduate school.

 

Making prototypes using piezoelectric textile fibers.

 

The researchers are most interested in two bio-based polymers, or bioplastics: Polyamide 11 and Polylactic Acid (PLA). The first has proven piezoelectric properties, although a lot weaker than those of PVDF. For the latter, it is a question of proving whether it can in fact generate electric charges. “Certain scientific articles lead us to suppose that Polylactic acid is a promising option, but there has not yet been a clear demonstration of its piezoelectricity” Cédric Samuel explained. In order to do so, the scientists must obtain PLA in its semi-crystalline form. “It’s a stumbling block, as PLA is currently not easy to crystallize” the researcher went on.

The Bioharv project is organized in several stages, gradually developing increasingly effective generations of piezoelectric polymers. It reflects a dual regional research dynamic focusing on both new textiles and the use of natural resources for designing the materials of tomorrow. The stakes are high because the petrochemical industry will not always be able to meet an increasing demand for polymers. Since PLA is produced using agricultural resources, connected devices in the future may be able to be made using corn or potatoes, rather than oil.

 


SMART: radioactive expertise at the service of the environment

Nuclear power stations are not the only sources of radioactivity in France: hospitals and the construction industry, for example, are others. Given the potential dangers to health and the environment, such sites have to be monitored continuously. SMART, a team of experts in radioactivity from SUBATECH, is a vital third-party actor in this domain. The team brings objectivity and quality to field measurements and laboratory analyses. It deals with regulations on radioactivity and thereby contributes to monitoring the environment.

 

SMART. No, this isn’t an article about the latest model of the famous subcompact car, but about the Service for the Measurement and Analysis of Radioactivity and Trace elements. This team of experts (technicians and engineers), which is part of SUBATECH,[1] is not bound by the usual constraints of a research laboratory. This freedom gives it broad scope for responding to very specific questions. “We can answer any question that concerns the measurement of radioactivity,” explains Isabelle Deniau, Senior Technical Engineer at SMART. So, if you want to determine the true nature of a diamond, or find out the origin of a wooden floor, SMART’s doors are wide open and its advice is readily available.

But the range of offerings proposed by these radioactivity experts goes further than answering such unusual questions. Any organization using radioactive sources, such as Engie, AREVA, the CEA, hospitals or the construction industry, can benefit from SMART’s services. These direct or indirect actors in the industry of nuclear energy are subject to strict regulations in terms of tests and waste in order to protect citizens’ health and the environment.

 

Radioactivity tests for monitoring the environment

When we talk of radioactivity, the image of a nuclear power plant often comes to mind. This is hardly surprising in France, given that nuclear power is the leading source of energy, accounting for 77% of all power generation (according to the Electricity Transmission Network, RTE). Nuclear power stations are tested regularly because of the risks they pose. The SMART laboratory works directly in the field to analyze nuclear waste and carry out impact measurements, notably for the ASN (the French Nuclear Safety Authority). “Broadly speaking, the laboratory contributes to monitoring the environment in France,” explains Isabelle Deniau.

“We conduct analyses of grass, moss, leaves… anything that may absorb radioactive compounds through exchange with the atmosphere. We also take water samples”, the engineer added. These samples provide information on potential pollution from radioactive isotopes released into the atmosphere in the form of waste liquids or gases. Specialists call them bioindicators. These biological compounds indicate the presence or absence of contamination.

Samples are never taken at random on the ground. Analyses are conducted upstream and downstream of nuclear power plants and then compared to evaluate the impact of the effluent releases. The choice of samples depends on the radioactive isotopes being searched for. The experts must take account of the accumulation capacity of each bioindicator. “Since we entered the era of nuclear power it has become difficult to differentiate between natural radioactive isotopes and those connected with human activity, because some of these compounds are already present in the environment”, Isabelle Deniau explains.

Up-to-date knowledge of the behavior of radioactive elements in the areas studied is vital, since the compounds differ depending on the environment in which they have accumulated. Cesium, for example, is a typical compound of nuclear activity which is deposited among sediments in watercourses. It can be transmitted to algae and then consumed by fish and eventually accumulates in their muscles. “In these cases, it is no longer the physical half-life which interests us, but the biological half-life”.
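To make that distinction concrete, here is a small illustrative calculation (the numerical values are placeholders, not measurements from SMART): the effective half-life of an isotope in an organism combines its physical half-life with the biological half-life at which the organism eliminates it, via 1/T_eff = 1/T_phys + 1/T_bio.

```python
def effective_half_life(t_physical, t_biological):
    """Combine physical decay and biological elimination: 1/Teff = 1/Tphys + 1/Tbio."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

def remaining_fraction(t, half_life):
    """Fraction of the initial activity remaining after time t."""
    return 0.5 ** (t / half_life)

# Placeholder values in days: a long physical half-life combined with a much
# shorter biological half-life means the organism clears the isotope far sooner
# than physical decay alone would suggest.
t_phys = 30 * 365.0   # ~30-year physical half-life, in days
t_bio = 100.0         # assumed biological half-life, in days
t_eff = effective_half_life(t_phys, t_bio)

print(f"Effective half-life: {t_eff:.1f} days")
print(f"Remaining after one year: {remaining_fraction(365, t_eff):.1%}")
```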

 

Secure and monitored demolition

These measurements are effective for nuclear power stations in operation, but such sites only have a limited life-span. They were designed to last for between 25 and 40 years, although this period has been extended over the past few years thanks to advances in knowledge and techniques which ensure the safety of these infrastructures. But a time will come when these power stations will have to be demolished, and measurements are then taken by the team at SMART. “Regarding the problem of demolition, our aim might be to determine the level of radioactivity of demolition waste which will later be sent to the ANDRA (the French national agency for radioactive waste management), or to check that no radioactivity remains on the site,” says Isabelle Deniau.

The rehabilitation approach for each of these potentially dangerous sites is taken very seriously at every stage of the process. Before demolition, the experts evaluate the radioactivity of the waste which will be produced. After the demolition, they measure the level of residual radioactivity. “We can also act as third-party experts on demolition sites such as Fort de Vaujours. This constitutes an additional control”.

 

Reputed and award-winning practices

“We are accredited for sampling and analyses by COFRAC, the organization that accredits laboratories and certifies their measurements. This provides a guarantee that the results follow a standardized procedure and that the sampling and measurement techniques have been validated. This means that if we repeat a measurement, our result will always be the same. We undergo regular audits for this accreditation,” explains Isabelle Deniau.

Samples for radioactivity analyses are standardized by vital measurement constraints. In the case of water samples, for example, scientists must take samples from the middle of the watercourse and not from zones where the water is stagnant. Certain precautions must also be taken, such as rinsing the vials with sample water, or filling them right to the brim to prevent gas exchange. These precautions are all necessary for reliable and representative results.

“COFRAC certification is reassuring for our clients. It certifies the reliability and quality of our work”, Isabelle Deniau adds. It is important to have an independent actor like SMART. The fact that it is not associated with the organizations producing radioactivity means it can carry out safety measurements without bias. The service adheres to strict regulations that guarantee objective results, and therefore plays an essential role on a societal level.

 

[1] SUBATECH is a research laboratory jointly operated by IMT Atlantique, the Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) of the CNRS, and the Université de Nantes.


What is the truth behind the myth of the entrepreneur?

A much more complex and less glamorous truth lies behind the storytelling of the visionary entrepreneur, the insider with the revolutionary idea of the century. Thomas Houy, a researcher at Télécom ParisTech, breaks down the stereotypes behind this false image of the exceptional strategist. Following his presentation at the IMT symposium on businesses, the economy and society in November 2016, he revisits the reality of the entrepreneur, who is above all closely attentive to demand and market data.

 

Even after his death, Steve Jobs embodies the figure of the entrepreneur. Often presented, or represented, as an innovative genius, he symbolizes the power of the visionary, the providential figure able to provide solutions to the problems of a world that only he can understand. Far from being unique to the former CEO of Apple, this rose-tinted perspective seems to apply to all successful entrepreneurs, who supposedly build long-term strategies on ideas that are as sudden as they are disruptive. The only problem with this belief is that it is wrong.

Thomas Houy, Researcher and Head of the Student Entrepreneurship in a Digital World Chair at Télécom ParisTech, studies the characteristics of contemporary entrepreneurship, a driver of the digital transition. He believes that multiple clichés need to be broken down, first and foremost the belief that it is a question of being visionary, that a person almost needs the gift of ubiquity to be a successful entrepreneur. While he does not consider that success comes down to chance, he simply points out that “entrepreneurship is almost completely obscure. Nobody can really predict its success. In the face of so many uncertainties, the most practical solution is to iterate as much as possible with the market.”

This ability to iterate has improved thanks to digital technology. 3D printing, for example, allows a product to be prototyped very quickly and tested among users, before returning just as quickly to the prototype phase if the solution is not suitable. Even better, data collected from users of online services provides a gauge of the success or failure of a platform. In short, “it is now much easier to test a market and understand users”, Thomas Houy observes.

It is this capacity for rapid exchange between consumers and businesses that explains success, which is built on the basis of repeated failures, rather than a long-term strategy or plan. This is one of the advantages start-ups have over big groups in digital technology. “Big businesses function with business plans and projections over several years; they are less able to let themselves be guided by a sort of serendipity, in which the organization accepts that it does not know where it will end up”, the Researcher explains. Start-ups, on the other hand, are able to do this thanks to their short history and close relationship with their first clients.

 

Entrepreneur, Thomas Houy, Télécom ParisTech

In May 2015, Bloomberg illustrated an article about Elon Musk and his SpaceX business using this visual from The Red Dress. A symbol of the mythologization of an entrepreneur raised to a position of pioneer in the conquest of Mars. Credits: Bloomberg / The Red Dress.

 

“Start-ups focus their efforts on finding what is known as the “product-market fit”, the stage where the developed solution meets the demand of the market. Start-ups can even use tricks such as building fake products to test the market’s appetite for their solution,” Thomas Houy explains. He uses the example of Dropbox to illustrate this point. This start-up, which offers a system for storing and sharing files via the cloud, found its users before it even had a marketable product. “The founder of Dropbox created a fake video over the space of a weekend to present his product, without mentioning that it didn’t exist yet,” the Researcher told us. “He was immediately inundated with emails. He had found his market without even having a product.”

This fast roll-out of solutions in the new digital economy makes business plans obsolete. The acceleration of the rate of digital transformation also prevents any form of prediction over the medium or long term. When Tinder entered the dating applications market, all the existing actors were wiped off the board within the space of a few months, nullifying any 5-year projections. “Business models have replaced business plans in the digital economy today,” Thomas Houy adds, “because nobody can predict what will happen in 6 months’ time.”

 

Being the first is not enough to be successful

Innovations can break through quickly, but this is no reason to believe that an innovation is an idea that strikes like lightning in the mind of an exceptional being. In fact, “all ideas already exist”, the Researcher believes. “If an idea doesn’t already exist, it’s probably because it’s a bad one. This is also what investors say”, he continues. Twitter was in no way the first micro-blogging website; it arrived ten years after the concept was first invented. Having the idea first is not a sign of success. What counts is the quality of execution. Twitter’s strength lies in the fact that it pays constant attention to users’ perceptions and adapts to their practices.

Does that mean a good entrepreneur is a good copier, able to take inspiration from others’ successes to develop a better solution? “People who copy successes often copy the wrong things”, Thomas Houy warns. The creator of a social network may be tempted to copy Facebook and will try to isolate what characterizes it, such as its agile methodology based on frequent updates. “The problem is that among all the people who have used agile methodologies, thousands have got it wrong, so that is not what is behind the success of Facebook”, the Researcher explained. “Good entrepreneurs take inspiration from others’ mistakes just as much as their successes,” he added.

It is partly this need to take a step back that explains why good businesses in a given field are not necessarily set up by those already in its inner circles. “Statistically speaking, entrepreneurs with knowledge of the field are more successful, but this is not the case among disruptive enterprises”, the Researcher observed. This paradox is due to the fact that once the mentality of a sector or market has been adopted, it is difficult to break away and change perspective. He illustrates this with an example he is very familiar with: teaching. “When teaching staff create MOOCs, they generally do so from an academic point of view,” he explained, “but we are much less effective at keeping the attention of an audience behind a screen than someone from the audio-visual or media sector who is familiar with the attention economy.”

The reality of entrepreneurship is in fact often a long way from the widespread vision of it. The storytelling surrounding the concept does not help paint an accurate portrait of an entrepreneur. Unfortunately, “it is a lot simpler and more attractive to describe how an entrepreneur has the gift of ubiquity, rather than to say that they are a humble person who does not make forecasts on the strength of their ideas but constantly observes data for guidance,” comments Thomas Houy.

 

[box type=”shadow” align=”” class=”” width=””]

Transmitting good entrepreneurial practices to students

One of the common beliefs about the qualities of an effective entrepreneur is that you have to scale up an innovation quickly. Thomas Houy is regularly confronted with this erroneous idea in the projects he leads with his students through the Student Entrepreneurship in a Digital World Chair, financed by LVMH. “My students often want to solve the problem of boredom in public transport,” he explained, “several times, they have come up with the idea of an ephemeral social network allowing travelers to contact each other on the bus or metro. They all have the same thought: ‘it’s a really good idea because 10 million people use the metro every year in France.’ Except that there’s no such thing as ‘the people on the metro’, and there are just as many different needs as there are individuals. In the morning, there may be a student on the metro going home from a night out, next to an executive practicing his project presentation in his head. Trying to scale up quickly means trying to satisfy everybody, and trying to satisfy everybody means satisfying nobody. Good entrepreneurs target niches, they learn to perfectly satisfy their niche and only scale up by addressing new categories one at a time.”[/box]

Algo, Turbocodes, Claude Berrou, IMT Atlantique

On computer science: Turbo in the algo

Serge Abiteboul, Ecole Normale Supérieure Paris-Saclay, and Christine Froidevaux, Université Paris Sud – Université Paris-Saclay

A new “Interview on Computer Science”. Serge Abiteboul and Christine Froidevaux interview Claude Berrou, computer engineer and electronics engineer, and a member of the French Academy of Sciences. Claude Berrou is a professor at IMT Atlantique. He is best known for his work on turbo codes, which has been used extensively in mobile telephony. His current research focus is on informational neuroscience. This article is published in collaboration with the blog Binaire.

 

Claude Berrou, Informatique, IMT Atlantique, Turbocodes

Claude Berrou. Binaire. Author provided

Binaire: You started out as an electronics engineer, how did you get into computer science?

Claude Berrou: I am a science rambler. After my initial training at a graduate school that today is called Phelma, I studied a little bit of everything: electronics, signal processing, circuit architecture. Then I got into computer science… by chance, through correction codes and information theory.

Here’s a question we love to ask here at the Binaire blog, what is your definition of computer science?

CB: I have an aphorism: computer science is to the sciences what natural language is to intelligence. Before computer science, there were equations, formulas and theorems. Computer science allowed sequences of operations, processes, and procedures to be developed to process complex problems. This makes it almost synonymous with language, and it is very similar to natural language, which also requires structure. Just like when we have a common language, computer science offers languages that everyone can understand.

You worked with correction codes. Can you tell us what they are used for?

CB: When we transmit information, we want to retrieve the full message that was sent, even when there are many users and limited bandwidth. If the message is binary, noise and interference on the line cause some of the transmitted 0s to be received as 1s, and some of the 1s to become 0s. The greater the noise is compared to the signal, the more frequently these errors occur. The signal-to-noise ratio can be decreased by poor weather conditions, for example, or by disturbances caused by other communications taking place at the same time. With all these errors, the quality becomes very poor. To prevent this, we encode the transmitted information by adding redundancy. The challenge is to retrieve the message reliably without adding too much redundancy, that is, without making the message too big. We have a similar problem in mass storage: bits can flip, sometimes due to wear on the disk. We also introduce redundancy into these systems to be able to retrieve the information.
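To make the idea of redundancy concrete, here is a minimal sketch in Python. It is an illustration only, not one of the coding schemes discussed in this interview: a rate-1/3 repetition code sends every bit three times, and a majority vote at the receiver corrects any single flipped copy.

```python
# Toy illustration of "adding redundancy": a rate-1/3 repetition code.
# Each bit is transmitted three times; a majority vote at the receiver
# corrects any single flipped copy within a triple.

def encode(bits):
    return [b for b in bits for _ in range(3)]          # 0 -> 000, 1 -> 111

def decode(received):
    return [int(sum(received[i:i + 3]) >= 2)            # majority vote per triple
            for i in range(0, len(received), 3)]

msg = [1, 0, 1, 1]
channel = encode(msg)
channel[4] ^= 1                                         # noise flips one bit
print(decode(channel) == msg)                           # True: the error is corrected
```

Real systems use far more efficient constructions, such as the convolutional and turbo codes discussed below, but the principle of trading extra bits for resilience to errors is the same.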

Talk to us about your wonderful invention, turbo codes.

CB: Turbo codes were born thanks to the Titanic, when transmissions were needed to view the wreck (work by Alain Glavieux). I played around with ways of reducing the effect of noise in the transmission and dealing with the errors, and I thought of introducing the principle of negative feedback into the decoding process, a classic concept in electronics.

For me, the interdisciplinary aspect is fundamental; innovation is often found at the interface of different disciplines. You take an idea that has been proven to work in one area of science, and you try to adapt it to an entirely different context. The original idea behind the turbo codes was to import an electronics technique into computer science.

When we want to create a high-gain amplifier, we put two or three of them in series. But this creates unstable behavior. To stabilize the arrangement, we implement a negative feedback principle: we send a fraction of the amplifier’s output back to its input with a “–” sign, which reduces unwanted variations.
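This stabilizing effect can be summarized by the standard closed-loop relation, where $A$ is the open-loop gain and $\beta$ is the fraction of the output fed back with a minus sign:

$$A_{\text{cl}} = \frac{A}{1 + A\beta} \approx \frac{1}{\beta} \quad \text{when } A\beta \gg 1,$$

so the overall gain depends essentially on the feedback network rather than on the large, fluctuating gain of the amplifiers themselves.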

I started with a known algorithm: the Viterbi algorithm. It makes it possible to correct (if there is not too much noise) the errors that occur during transmission through a noisy channel, and can therefore be considered a signal-to-noise ratio amplifier. The Viterbi decoder exploits the algebraic law used to design the redundancy of the encoded message by means of a trellis (the deterministic equivalent of a Markov chain), thereby delivering the most probable original message. I therefore put two Viterbi algorithms in series and tried to integrate the negative feedback concept into the decoding process. It was a difficult task, and I was not a coding expert.
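As an illustration of the trellis idea, here is a hard-decision Viterbi decoder in Python for a rate-1/2 convolutional code. This is only a didactic sketch: the generator polynomials (7 and 5 in octal) are standard textbook choices, not the specific parameters of the work described here. The decoder searches the trellis for the input sequence whose re-encoded output is closest, in Hamming distance, to what was received.

```python
# Didactic sketch: hard-decision Viterbi decoding of the classic
# rate-1/2 convolutional code with generators 7 and 5 (octal).

G1, G2 = 0b111, 0b101   # generator polynomials, constraint length 3
N_STATES = 4            # state = content of the 2-bit shift register

def parity(x):
    return bin(x).count("1") % 2

def encode(bits):
    """Each input bit produces two coded bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & G1), parity(reg & G2)]
        state = reg >> 1                      # shift register advances
    return out

def viterbi_decode(received, n_bits):
    """Return the input sequence at minimum Hamming distance from the received stream."""
    INF = float("inf")
    metric = [0] + [INF] * (N_STATES - 1)     # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):                  # hypothesise the input bit
                reg = (b << 2) | state
                expect = [parity(reg & G1), parity(reg & G2)]
                dist = sum(e != x for e, x in zip(expect, r))
                nxt = reg >> 1
                cand = metric[state] + dist
                if cand < new_metric[nxt]:    # keep the best path into each state
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                     # channel noise flips one bit
print(viterbi_decode(coded, len(msg)) == msg)     # True: the error is corrected
```

The probabilistic (soft-output) variant described next replaces these hard 0/1 choices with likelihood values, which is what allows two such decoders to exchange information.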

One problem was that the Viterbi algorithm makes binary choices: the bit was either flipped, or it wasn’t. Along with a colleague, Patrick Adde, we adapted it so that it would produce probabilistic decisions, which significantly improves the subsequent performance of the decoder.

How does it work?

CB: Like I mentioned, to protect a message, we add redundancy. The turbo code performs the coding in two dimensions. A good analogy is the grid of a crossword puzzle, with vertical and horizontal dimensions. If the definitions were perfect, only one dimension would be enough. We could rebuild the grid, for example, with only horizontal definitions. But since we do not always know what the definitions refer to, and since there can be ambiguities (due to noise, deletions, etc.), we also provide vertical definitions.

The decoding process is a little like what someone does when working on a crossword puzzle. The decoder works along a line (using the horizontal definitions), then moves on to the vertical dimension. Like the crossword fan, the decoder requires several passes to reconstruct the message.
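A toy version of this two-dimensional idea can be written in a few lines of Python. It is a simplified illustration, not the actual turbo code construction: the grid size and the single-error assumption are chosen purely for clarity. Data bits are laid out in a grid, one parity bit is added per row and per column, and a flipped bit is located at the intersection of the failing row check and the failing column check.

```python
# Toy "crossword" product code: one parity bit per row and per column.
# A single flipped bit sits at the crossing of the failing row check
# and the failing column check, so it can be located and corrected.
import numpy as np

def encode(block):                                # block: k x k array of 0/1
    row_p = block.sum(axis=1) % 2                 # one parity bit per row
    col_p = block.sum(axis=0) % 2                 # one parity bit per column
    return block, row_p, col_p

def decode(block, row_p, col_p):
    bad_rows = np.where(block.sum(axis=1) % 2 != row_p)[0]
    bad_cols = np.where(block.sum(axis=0) % 2 != col_p)[0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        block[bad_rows[0], bad_cols[0]] ^= 1      # flip the bit at the crossing
    return block

rng = np.random.default_rng(0)
msg = rng.integers(0, 2, size=(4, 4))
block, row_p, col_p = encode(msg.copy())
block[2, 1] ^= 1                                  # channel noise flips one bit
print(np.array_equal(decode(block, row_p, col_p), msg))   # True
```

Turbo codes go much further than this toy: the two dimensions use powerful convolutional codes, an interleaver scrambles the bit order between them, and the two decoders exchange probabilistic information over several iterations, the “passes” described above.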

Taken together, these ingredients are what make turbo codes so effective.

We believe you. Billions of objects use this technology!

CB: Yes. All media data on 3G and 4G are protected by turbo codes.

Shannon, Claude Berrou

Claude Shannon. Binaire/Wikipédia. Author provided

This brings us to another Claude: Claude Shannon and information theory?

CB: Yes, with this algorithm we clearly enter the realm of information theory. In fact, I recently helped organize the symposium at IHP celebrating the centenary of Claude Shannon’s birth, a fascinating event.

Shannon demonstrated that any ideal transmission (or storage) system should rely on two fundamental operations. First, to reduce the message size, it is compressed to remove as much unnecessary redundancy as possible. Next, to protect against errors, intelligent redundancy is added.

Shannon established the limits of correction codes back in 1948! Turbo codes approach Shannon’s theoretical limit to within a few tenths of a decibel!
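For reference, the limit in question is the channel capacity of Shannon’s 1948 theorem: for a channel of bandwidth $B$ and signal-to-noise ratio $S/N$, reliable communication is possible at any rate below

$$C = B \log_2\!\left(1 + \frac{S}{N}\right),$$

and no code can do better. Expressed as energy per transmitted bit, this translates into the ultimate limit $E_b/N_0 > \ln 2 \approx -1.59\ \mathrm{dB}$; it is this kind of benchmark that turbo codes approach to within a few tenths of a decibel.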

And now you have moved on to neuroscience…

CB: My current research is related to informational neuroscience. You recently interviewed Olivier Faugeras, who talked to you about computational neuroscience, a fairly different approach.

Cortex. Nicolas Rougier. Author provided

My starting point is still information, but this time in the brain. The human cerebral cortex can be compared to a graph, with billions of nodes and thousands of billions of edges. There are specific modules, and between the modules are lines of communication. I am convinced that the mental information, carried by the cortex, is binary.

Conventional theories hypothesize that information is stored by the synaptic weights, the weights on the edges of the graph. I propose a different hypothesis. In my opinion, there is too much noise in the brain; it is too fragile, inconsistent, and unstable; pieces of information cannot be carried by weights, but rather by assemblies of nodes. These nodes form a clique, in the geometric sense of the word, meaning they are all connected two by two. This becomes digital information.

Is this where we will see coding and redundancy? To prevent information from getting lost in the brain, do redundancies also exist?

CB: Yes. For the traditional, analog school of thought, information is carried by the synapses. In this case, redundancy could only be achieved using repetitions: several edges would carry the same information.

According to our approach, information is encoded in the connections of a group of nodes. Redundancy is naturally present in this type of coding. Take a clique made up of 10 nodes in a graph: you have 45 connections in the clique, a large number compared to the number of nodes. I base this on Hebbian theory (1949): when neuron A sends spikes and neuron B systematically activates, the connection between A and B is reinforced if it exists, and forms if it does not. Because the clique is redundant, it will resonate, and a modified connection will be reinforced: using Hebbian theory, we obtain reconstruction in the event of deterioration. We have built an entire theory on this basis.
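The figure of 45 is simply the number of distinct pairs among 10 nodes:

$$\binom{10}{2} = \frac{10 \times 9}{2} = 45.$$

More generally, a clique of $n$ nodes has $n(n-1)/2$ edges, so the number of connections grows quadratically with the number of nodes, which is where this natural redundancy comes from.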

You lost us. A clique carries a piece of information. And the fact that the clique features so much redundancy ensures the information will be lasting?

CB: Yes. And furthermore, the clique can be the building block of an associative memory: I can retrieve the complete information from only part of its content. This is due to the cliques’ highly redundant structure.

What does your work involve?

CB: I have set up a multidisciplinary team made up of neuropsychologists, neurolinguists, computer scientists, etc. We are trying to design a demonstrator, a machine based on the model of the brain as we see it, on an informational level. In a traditional computer, the memory is on one side and the processor on the other. In our machine, and in the brain, everything is interlinked.

Based on the theory we are developing (not yet fully published), mental information relies on little pieces of knowledge stored in cliques. The cliques are chosen randomly, but once that choice has been made, they become permanent. This varies from one person to another: the same cliques do not carry the same information in different individuals. I would like to develop artificial intelligence using this machine model.

How do you see artificial intelligence?

CB: There are, in fact, two types of artificial intelligence. First, there is the kind concerned with the senses, with vision and speech recognition, for example; we are starting to be able to do this using deep learning. Then there is the kind that allows us to imagine and create, and to answer new questions. For now, we are not able to do this. In my opinion, the only way to make progress towards this strong AI is to model it on the human cerebral cortex.

I am passionate about this subject. I would like to see it advance and continue my research for a long time to come.

 

Serge Abiteboul, Research Director at INRIA, member of the French Academy of Sciences, Affiliate Professor, Ecole Normale Supérieure Paris-Saclay and Christine Froidevaux, Computer Science Professor, Université Paris Sud – Université Paris-Saclay

The original version of this article was published in French on The Conversation France.