
On computer science: Turbo in the algo

Serge Abiteboul, École Normale Supérieure Paris-Saclay, and Christine Froidevaux, Université Paris Sud – Université Paris-Saclay

A new “Interview on Computer Science”. Serge Abiteboul and Christine Froidevaux interview Claude Berrou, computer and electronics engineer and a member of the French Academy of Sciences. Claude Berrou is a professor at IMT Atlantique. He is best known for his work on turbo codes, which have been used extensively in mobile telephony. His current research focus is on informational neuroscience. This article is published in collaboration with the blog Binaire.

 


Claude Berrou. Binaire. Author provided

Binaire: You started out as an electronics engineer, how did you get into computer science?

Claude Berrou: I am a science rambler. After my initial training at a graduate school that today is called Phelma, I studied a little bit of everything: electronics, signal processing, circuit architecture. Then I got into computer science… by chance, through correction codes and information theory.

Here’s a question we love to ask here at the Binaire blog, what is your definition of computer science?

CB: I have an aphorism: computer science is to the sciences what natural language is to intelligence. Before computer science, there were equations, formulas and theorems. Computer science allowed sequences of operations, processes, and procedures to be developed to process complex problems. This makes it almost synonymous with language, and it is very similar to natural language, which also requires structure. Just like when we have a common language, computer science offers languages that everyone can understand.

You worked with correction codes. Can you tell us what they are used for?

CB: When we transmit information, we want to retrieve the full message that was sent, even when there are many users and limited bandwidth. If the message is binary, noise and interference disturbing the line mean that some of the transmitted 0s will be received as 1s, and some of the 1s will become 0s. The greater the noise relative to the signal, the more frequently these errors occur. The signal-to-noise ratio can be decreased by poor weather conditions, for example, or by disturbances caused by other communications taking place at the same time. With all these errors, the quality becomes very poor. To prevent this, we encode the transmitted information by adding redundancy. The challenge is to be able to retrieve the message reasonably well without adding too much redundancy, that is, without making the message too big. We have a similar problem in mass storage: bits can flip, sometimes due to wear on the disk. We also introduce redundancy into these systems to be able to retrieve the information.
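To make the idea concrete, here is a minimal sketch (in Python, and not one of the codes discussed in this interview) of the simplest possible redundancy scheme, a repetition code: each bit is sent three times and the receiver takes a majority vote.

```python
# A minimal sketch of redundancy at work: a 3x repetition code.
# This toy scheme is far less efficient than the codes discussed here,
# but it shows how added redundancy lets the receiver fix flipped bits.

def encode(bits):
    return [b for b in bits for _ in range(3)]        # send each bit three times

def decode(received):
    # Majority vote over each group of three received bits.
    return [int(sum(received[i:i + 3]) >= 2) for i in range(0, len(received), 3)]

message = [1, 0, 1, 1, 0]
sent = encode(message)
sent[4] ^= 1                      # noise flips one transmitted bit
print(decode(sent) == message)    # True: the single error is corrected
```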

Talk to us about your wonderful invention, turbo codes.

CB: Turbo codes were born thanks to the Titanic, when a transmission system was needed to send back images of the wreck (work by Alain Glavieux). I played around with ways of reducing the effect of noise in the transmission and of dealing with the errors, and I thought of introducing the principle of negative feedback into the decoding process, a classic concept in electronics.

For me, the interdisciplinary aspect is fundamental; innovation is often found at the interface of different disciplines. You take an idea that has been proven to work in one area of science, and you try to adapt it to an entirely different context. The original idea behind the turbo codes was to import an electronics technique into computer science.

When we want to create a high-gain amplifier, we put two or three of them in series. But this creates unstable behaviour. To stabilize the arrangement, we apply the negative feedback principle: a fraction of the amplifier’s output is sent back to its input with a “–” sign, which reduces unwanted variations.

I started with a known algorithm: the Viterbi algorithm. It makes it possible to correct (if there is not too much noise) the errors that occur during transmission through a noisy channel, and can therefore be considered a signal-to-noise ratio amplifier. The Viterbi decoder exploits the algebraic law used to design the redundancy of the encoded message by means of a trellis (the deterministic equivalent of a Markov chain), thereby delivering the most probable original message. So I put two Viterbi algorithms in series. I then tried to integrate the negative feedback concept into the decoding process. It was a difficult task, and I was not a coding expert.
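As a rough illustration of this kind of trellis search, below is a toy, hard-decision Viterbi decoder for a small rate-1/2 convolutional code (generator polynomials 7 and 5 in octal). It is a simplified sketch, not the decoder actually used in turbo codes, which relies on the soft-decision variant mentioned next.

```python
# A toy, hard-decision Viterbi decoder for a rate-1/2 convolutional code
# (constraint length 3, generator polynomials 7 and 5 in octal).
# It keeps, for every trellis state, the surviving path with the smallest
# Hamming distance to the received sequence. Illustration only.

G = [0b111, 0b101]   # generator polynomials

def branch_output(state, bit):
    reg = (bit << 2) | state                         # 3-bit shift register
    out = [bin(reg & g).count("1") % 2 for g in G]   # two coded bits per input bit
    return out, reg >> 1                             # coded bits, next state

def encode(bits):
    state, coded = 0, []
    for b in bits:
        out, state = branch_output(state, b)
        coded.extend(out)
    return coded

def viterbi_decode(received, n_bits):
    # survivors[state] = (accumulated Hamming distance, decoded bits so far)
    survivors = {0: (0, [])}
    for t in range(n_bits):
        nxt = {}
        for state, (metric, bits) in survivors.items():
            for b in (0, 1):
                out, new_state = branch_output(state, b)
                dist = sum(o != r for o, r in zip(out, received[2 * t:2 * t + 2]))
                cand = (metric + dist, bits + [b])
                if new_state not in nxt or cand[0] < nxt[new_state][0]:
                    nxt[new_state] = cand
        survivors = nxt
    return min(survivors.values())[1]   # survivor with the smallest distance

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = encode(message)
received[3] ^= 1                        # one bit flipped by channel noise
print(viterbi_decode(received, len(message)) == message)   # True: the error is corrected
```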

One problem was that the Viterbi algorithm makes binary choices: the bit was either switched, or it wasn’t. Along with a colleague, Patrick Adde, we adapted it so that it would produce probabilistic decisions, which significantly improves the subsequent performance of the decoder.

How does it work?

CB: Like I mentioned, to protect a message, we add redundancy. The turbo code performs the coding in two dimensions. A good analogy is the grid of a crossword puzzle, with vertical and horizontal dimensions. If the definitions were perfect, only one dimension would be enough. We could rebuild the grid, for example, with only horizontal definitions. But since we do not always know what the definitions refer to, and since there can be ambiguities (due to noise, deletions, etc.), we also provide vertical definitions.

The decoding process is a little like what someone does when working on a crossword puzzle. The decoder works on one dimension (using the horizontal definitions), then moves on to the vertical dimension. Like the crossword fan, the decoder requires several passes to reconstruct the message.
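A toy illustration of this two-dimensional idea (far simpler than a real turbo code) is a small parity grid: data bits are arranged in rows and columns, each with its own parity check, and crossing the two dimensions pinpoints a flipped bit.

```python
# A toy illustration of the "crossword" idea: data bits are arranged in a grid,
# with a parity bit per row (horizontal "definitions") and per column (vertical
# ones). A single flipped bit violates exactly one row check and one column
# check, which together locate it. Illustration only, not an actual turbo code.

import numpy as np

data = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0]])

row_parity = data.sum(axis=1) % 2
col_parity = data.sum(axis=0) % 2

received = data.copy()
received[1, 2] ^= 1                     # channel noise flips one bit

bad_rows = np.flatnonzero(received.sum(axis=1) % 2 != row_parity)
bad_cols = np.flatnonzero(received.sum(axis=0) % 2 != col_parity)
if bad_rows.size == 1 and bad_cols.size == 1:
    received[bad_rows[0], bad_cols[0]] ^= 1   # cross the two dimensions to correct it

print(np.array_equal(received, data))   # True
```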

With all of these elements combined, turbo codes are effective.

We believe you. Billions of objects use this technology!

CB: Yes. All media data on 3G and 4G are protected by turbo codes.


Claude Shannon. Binaire/Wikipédia. Author provided

This brings us to another Claude: Claude Shannon and information theory?

CB: Yes, with this algorithm we clearly enter the realm of information theory. In fact, I recently helped organize the symposium at IHP celebrating the centenary of Claude Shannon’s birth, a fascinating event.

Shannon demonstrated that all ideal transmission (or storage) should be accomplished using two fundamental operations. First, to reduce the message size, it is compressed to remove the maximum amount of unnecessary redundancy. Next, to protect against errors, intelligent redundancy is added.

Shannon established the limits of correction codes back in 1948! Turbo codes come within a few tenths of a decibel of Shannon’s theoretical limit!
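For reference, the limit in question is the channel capacity established by Shannon in 1948: for an idealized channel of bandwidth $B$ disturbed by Gaussian noise, with signal-to-noise ratio $S/N$, reliable transmission is possible at any rate below

$$C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second},$$

and no code, however clever, can do better. Turbo codes operate within a few tenths of a decibel of this bound.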

And now you have moved on to neuroscience…

CB: My current research is related to informational neuroscience. You recently interviewed Olivier Faugeras, who talked to you about computational neuroscience, a fairly different approach.

Cortex. Nicolas Rougier. Author provided

My starting point is still information, but this time in the brain. The human cerebral cortex can be compared to a graph, with billions of nodes and thousands of billions of edges. There are specific modules, and between the modules are lines of communication. I am convinced that the mental information, carried by the cortex, is binary.

Conventional theories hypothesize that information is stored in the synaptic weights, the weights on the edges of the graph. I propose a different hypothesis. In my opinion, there is too much noise in the brain; it is too fragile, inconsistent and unstable; pieces of information cannot be carried by weights, but rather by assemblies of nodes. These nodes form a clique, in the geometric sense of the word, meaning they are all connected to one another, two by two. This becomes digital information.

Is this where we will see coding and redundancy? To prevent information from getting lost in the brain, do redundancies also exist?

CB: Yes. For the traditional, analog school of thought, information is carried by the synapses. In this case, redundancy could only be achieved using repetitions: several edges would carry the same information.

According to our approach, information is encoded in the connections of a grouping of nodes. Redundancy is naturally present in this type of coding. Take a clique made up of 10 nodes on a graph. You have 45 connections in the clique. This is a large number of connections compared to the number of nodes. I base this on the Hebbian theory (1949): when neuron A sends spikes and neuron B activates systematically, the connection between A and B will be reinforced if it exists, and if it doesn’t exist it will form. Because the clique is redundant, it will resonate, and a modified connection will be reinforced: using Hebbian theory we obtain a reconstruction in the event of deterioration. We have established an entire theory based on this.

You lost us. A clique carries a piece of information. And the fact that the clique features so much redundancy ensures the information will be lasting?

CB: Yes. And furthermore, the clique can be the building block for an associative memory. I will be able to find the complete information based on certain content values. And this is due to the cliques’ highly redundant structure.

What does your work involve?

CB: I have set up a multidisciplinary team made up of neuropsychologists, neurolinguists, computer scientists, etc. We are trying to design a demonstrator, a machine based on the model of the brain as we see it, on an informational level. In a traditional computer, the memory is on one side and the processor on the other. In our machine, and in the brain, everything is interlinked.

Based on the theory we are developing (not yet fully published), mental information relies on little pieces of knowledge that are stored in the cliques. The cliques are chosen randomly. But once it has been done, they become permanent. This varies from one person to another; the same cliques do not carry the same information in different individuals. I would like to develop artificial intelligence using this machine model.

How do you see artificial intelligence?

CB: There are, in fact, two types of artificial intelligence. First, there is the kind concerned with the senses, with vision and speech recognition, for example. We are starting to be able to do this using deep learning. And then, there is the type that allows us to imagine and create, and know how to answer new questions. For now, we are not able to do this. In my opinion, the only way to make progress in this strong AI is to base it on the human cerebral cortex.

I am passionate about this subject. I would like to see it advance and continue my research for a long time to come.

 

Serge Abiteboul, Research Director at INRIA, member of the French Academy of Sciences, Affiliate Professor, Ecole Normale Supérieure Paris-Saclay and Christine Froidevaux, Computer Science Professor, Université Paris Sud – Université Paris-Saclay

The original version of this article was published in French on The Conversation France.


Scientific description of industrial experience

At Mines Albi, Elise Vareilles works on “product configuration”, which entails understanding industrial constraints and considering them scientifically using IT. This multidisciplinary work is based on experts’ experience which must be recorded before the people in question retire.

 

When you order a car and specify the color, engine type, extras and delivery deadline, you are generally unaware of the complexity of the IT tool that makes these choices possible. Each choice leads to constraints that have to be taken into account at each stage: ordering a more powerful engine, for example, entails having larger wheels. The software also communicates with the consumer in order to guide their choices. So, when the client wants a short deadline, only certain options are available, and these are the ones that must be proposed in the computer interface. This discipline is called product or service configuration, and it is what Elise Vareilles and her colleagues at Mines Albi are working on.
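As a minimal sketch of the kind of constraint filtering such a configurator performs (the option names and rules below are invented, purely for illustration):

```python
# A minimal, hypothetical sketch of constraint-based configuration:
# each rule removes options that are incompatible with choices already made.
# Option names and rules are invented for illustration only.

domains = {
    "engine": {"90hp", "130hp", "180hp"},
    "wheels": {"16in", "17in", "18in"},
    "delivery": {"2weeks", "2months"},
}

rules = [
    # A more powerful engine requires larger wheels.
    ("engine", "180hp", "wheels", {"17in", "18in"}),
    # A short deadline rules out the most powerful engine.
    ("delivery", "2weeks", "engine", {"90hp", "130hp"}),
]

def choose(option, value):
    """Record a customer choice and propagate the configuration rules."""
    domains[option] = {value}
    for chosen_opt, chosen_val, target, allowed in rules:
        if domains[chosen_opt] == {chosen_val}:
            domains[target] &= allowed     # filter out incompatible options

choose("delivery", "2weeks")
print(domains["engine"])   # the 180hp engine is no longer offered
```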

 

Understanding expertise

“We start with businesses’ knowledge of their products and then formalize it. In other words, we determine what constraints there are,” the researcher explained. “Next, we develop an IT tool to take this knowledge into account and deduce rules from it, in order to offer consumers something that corresponds to their needs in terms of price, options and timing.”

This multidisciplinary work is at the border between industrial engineering and artificial intelligence. “This multidisciplinarity is what sets us apart,” confirms Elise Vareilles, who has acquired specialist knowledge in very different industrial fields. Paradoxically, this research, which may seem purely technical, also draws heavily on the human sciences. Besides the technological aspects, the researchers are interested in professionals’ expertise. “An employee with 30 years of experience knows what he has to do to optimize a process,” explains Elise Vareilles, “but this means that it is a disaster when he retires. We formalize knowledge that can only be acquired with experience.”

 

The importance of knowledge

However, this knowledge is extremely difficult to formalize because experts find it difficult to explain their processes, which feel so natural to them. It is like a meal cooked by a great chef: even if we have the exact recipe and quality ingredients, we won’t get the same result because a chef’s skill is only learnt with years of practice. In the same way, when we learn to ride a bike no-one can really explain how to keep balance; it is only through practice that we become skilled enough to ride without help from our parents.

 



Product configuration at the service of energy efficiency

If we want to meet the French target of thermally renovating 500,000 residences per year, the process needs to be industrialized. This is why the French Environment and Energy Management Agency (ADEME) launched the “Minimum carbon footprint and positive-energy buildings and blocks” call for expressions of interest in 2012 for the external insulation of a building containing 110 social housing units in Saint-Paul-lès-Dax (Landes). This project, called Criba, aims to “develop an industrialized technical solution for renovating multi-unit housing”. Elise Vareilles and her team developed a software program to help architects design and draw the building after renovation. “We developed an algorithm proposing different renovation solutions,” Elise Vareilles explained. “To do so, we based our work on photos of the different sides of the building taken by drones, which allowed us to build a 3D digital model that we enhanced with various architectural data. Lastly, we took different constraints into account, such as the local urban plan, the operator’s restrictions (in particular their budget) or architectural requirements.” The project was launched in 2013, will be completed in January 2017 and will cost €8 million.

 

Product or service configuration can be applied to a number of processes, such as choosing a complex product like a car, or optimizing industrial processes with a large number of constraints, as Elise Vareilles did with a heat treatment process for car gears. It also applies to helicopter maintenance, airplane design and the external renovation of buildings (see inset). Even medicine can benefit from it. “We have configured the treatment program for pregnant women at the University Hospital Centre in Toulouse,” the researcher explained. “If we see that a patient is diabetic, for example, the whole program is automatically adjusted to include specific appointments, such as regular blood tests.”

 

Knowledge is capital

This research is very multidisciplinary. Of course, it requires skills in IT and artificial intelligence, but the researchers must also acquire knowledge in the subjects involved, such as heat treatment, engineering and aeronautics. Last but not least, interviewing experts almost comes under sociology. The researcher feels strongly about this point: “Experts’ knowledge is vital capital that must be preserved and passed on before it is too late. Once the experts leave, there is nothing I can do.”

 


Élise Vareilles promotes science among young girls

After a DEA (the ancestor of the Research Master’s 2) in IT at Paul Sabatier University in Toulouse, Elise Vareilles had the opportunity to do a PhD on a European project about a heat treatment process. It was perfect for her, given that she prefers working on concrete projects rather than theoretical IT. “It’s motivating to know what it’s used for!” Thanks to this project she acquired knowledge in industrial engineering which helped her join Mines Albi in 2005. It is a choice she does not regret: “The work changes every day, and we meet lots of people. We write code, but do other things as well.” Elise Vareilles is also very committed to promoting science among young girls in the Elles Bougent and Women in Aerospace associations. “It’s important that girls don’t limit themselves,” she highlighted. “I go into high schools and am shocked by the beliefs held by some of them, who say they are not good enough to continue studying!”

 

 


Video, the chameleon-like medium of scientific culture

In recent years, a real passion for science videos has emerged, especially with the rise of YouTube. Video is a comprehensive and complex tool that adapts to different audiences. This explains why video holds such a prominent place as a vehicle of scientific culture at CCSTI La Rotonde (Mines Saint-Étienne). It also explains why I’MTech now shares audiovisual content.

 

Whether it’s a light, funny commercial or a serious and thoughtful documentary, video is an effective and appealing tool for popularizing science. It can bring the viewer something real, and is accessible for all ages. This chameleon-like medium can stand alone, or complement other forms of media. Video is central to the technical and industrial scientific culture at La Rotonde, Mines Saint-Étienne, and has been a tool of choice for the 18 years of the organization’s existence. Its director, Guillaume Desbrosse, talks to us about this medium’s presence in the scientific culture.

 

How important is video at La Rotonde?

Guillaume Desbrosse: Since our organization was established, we have been developing video as a tool. It is part of our DNA. This is fairly unusual, since video skills are not easily developed. The organization quickly began partnering with professional and amateur videographers to produce scientific culture resources on video. We use it fairly regularly. We also use video in the exhibitions we organize. It is an extremely interesting medium for reaching an audience that is very accustomed to screens and animated images.

 

In recent years, we have seen a rise in amateur videographers, how are they contributing to scientific culture today?

GD: This love of videos has exploded over the past years thanks to YouTube. Amateur videographers are now beginning to produce content they intend to broadcast on a wide range of subjects, and scientific culture has not been left behind, although there are varying levels of quality. At La Rotonde, we have followed this closely; we considered it significant since other players outside the institutional realm were beginning to disseminate the sciences. Their success, as seen in the number of subscribers, comments, and number of views, made us think. How can we encourage these projects – which play an important role in today’s scientific culture – and support them?

This is why, along with Mines Saint-Étienne, we decided to renovate a building that became studio Papaï (Plug And Play Audio and Image) based on a project by engineering students. Beyond the in-house use of this studio (engineering students, teachers, researchers…), it is also open to amateur videographers. We offer to host them for a residency so that they can produce resources on scientific culture. Currently, we are supporting the YouTube channel “Balade Mentale”, which addresses a wide variety of themes, ranging, for example, from the sense of smell to the study of motion. This channel is run by a dynamic group of young people, and is gaining popularity, with a little over 30,000 subscribers. We have decided to support this channel for one year. We also support their joint projects with Florence Porcel, who popularizes the science of the universe. Following the success of this joint effort, we decided to organize calls for projects to make this opportunity available to other videographers.

 

Does this enthusiasm for online science videos reinforce the idea that we do not need a lot of resources to do something that is successful?

GD: The ideas are what drive the projects. One issue is scientific credibility. When an audience comes to view an exhibition at La Rotonde, they know that it is an official structure, and that the content has been approved by a scientific committee. It’s important to have this rigorous aspect. Of course, the more we see scientific culture online, the happier we are, but it must also be done in accordance with professional standards, and with good quality content. This is the case for many CSTI channels. Then, there is the question of the legal status of these creators, which is a key issue. What is the appropriate economic model for these creators? What is the most suitable positioning in the professional audiovisual landscape? I am against using their creations and skills without fair compensation. We are working on a model that would provide them with quality assistance and support them decently!

 

In general, does video allow you to expand your target audience?

GD: What is interesting is that there is not one medium that is better than all the others, since they each target a specific audience. We must be active on all fronts, and the video tool offers many advantages in this area. However, there are also constraints involved: this medium is very time-consuming in terms of editing, filming, lighting… If you want to give your audience something to look at, you need a lot of images when you produce a report. In general, though, video has the advantage of not being exclusive; it is often fun and can offer a variety of formats.

 

Is the ability to offer several video formats an advantage?

GD: We have produced long videos, with 30-minute reports, and short video clips that only last one minute. We like to play around with the different types of media and, for me, video is an exciting and very interesting medium for this reason. It works a little like a test laboratory. When they were starting out, Balade Mentale used a format that took a long time to produce, and required a lot of work. They then moved onto the “facing the camera” format, which enabled them to become faster and more efficient. They improved by using more humor and giving their videos more “punch”.

So, it really depends on who we are targeting. I know the current trend is to use short formats. People like watching something fast. It has to catch their interest, and if it’s not funny 4 seconds in, they turn it off. But we need to give people the possibility of longer formats, in which we take the time we need, and which provide a different relationship between the videographer and the audience. We should not avoid these formats by giving in to the trend towards quick consumption. Humankind is rich and diverse, and we also need to respect that fact.


In Nantes, the smart city becomes a reality with mySMARTlife

Alongside European smart city champions like Barcelona, Copenhagen and Stockholm, France boasts a few gems of its own. One such city is Nantes, a participant in the European H2020 research project mySMARTlife since December 1st, 2016. Thanks to this project, the capital of the Pays de la Loire region plans to put its scientific heritage, represented by IMT Atlantique, to good use as it continues its transformation into a smart city.

 

When searching for proof that major French cities are truly transitioning to become smart cities, look no further than Nantes. For several years now, the city has been engaged in a transformational process, turning the concept of the city of the future into a reality. This is a role and ambition the city has taken to the European level, with Johanna Rolland, the Mayor of Nantes and President of Nantes Métropole, serving on the Executive Committee of Eurocities. This network of cities — also chaired by the Mayor of Nantes from 2014 to 2016 — advocates with European authorities for the interests of major metropolitan areas, and includes some big names among smart cities: Barcelona, Stockholm and Milan, etc. In short, at a time when few European cities can claim to be undertaking tangible measures towards becoming smart cities, Nantes can boast of being a pioneer in this area.

On December 1st, 2016, this characteristic was further enhanced with the official launch of the H2020 mySMARTlife research project. As proof of the position that Nantes holds in the European ecosystem of cities of the future, the city is now working alongside Hamburg and Helsinki as pilot cities for the project. At the local level in Nantes, MySmartLife is aimed at modernizing several of the city’s major areas of governance, particularly in terms of energy and transport. More specifically, one of the objectives is to “have a platform for Nantes Métropole, and its associated stakeholders,[1] to enable new services to be developed and monitored and to provide decision-support,” explains Bruno Lacarrière, researcher at IMT Atlantique. The institution is participating in this H2020 project, and offers dual expertise: in both energy efficiency related to heating networks and in offering decision-support. This expertise is provided by the Department of Energy Systems and Environment (member of the UMR CNRS 6144 GEPEA) for Nantes, and by the Department of Logics in Uses, Social Science and Management (member of the UMR CNRS 6285 LAB-STICC) for Brest.

 

Optimizing the city’s energy efficiency

The researchers from the IMT Atlantique Department of Energy Systems and Environment will specifically provide their knowledge in energy efficiency and system analysis, applied to heating networks. “Our skills in the field allow us to model these systems with an integrated approach that goes beyond thermal-hydraulic studies, for example,” explains Bruno Lacarrière. “We do not only model pipes, but an entire set of connected technological objects,” he continues. The researchers take into account the variety of systems that can provide heat sources for the network (boilers, cogeneration units, geothermal energy, recovering surplus industrial heat…), and the diversity of the consumers connected to the network. All of the heating network components are therefore integrated into the researchers’ models. This approach, which is complex because it is based on a comprehensive view of the network, makes it possible to better assess the areas for improvement in optimizing energy efficiency, and to better predict the consequences, for example, of renovating a building.

The researchers will greatly benefit from their proximity to the industrial partners in this project. To develop their models, they need field data such as heat output measurements from various points in the network. “This data is difficult to obtain, because in this case the system is connected to several hundred buildings,” Bruno Lacarrière points out. Furthermore, this information is not public. “Being able to work with stakeholders on the ground, such as Erena (Engie subsidiary and the network operator in Nantes), is therefore a real advantage for us, provided, of course, that the necessary confidentiality clauses are established,” the researcher adds.

 

No smart cities without decision support 

At the same time, the role of the Department of Logics in Uses, Social Science and Management is to develop decision-support tools, an important aspect in many of the smart city’s activities. This is true for mobility and transport, as Bruno Lacarrière points out: “In the context of the boom in electric vehicles, one application of decision-support is providing users with the nearest locations of available charging stations in real time.” Decision-support can also be used by public authorities to determine the best location for charging stations based on the configuration of the infrastructures and electrical distribution. “This is where having a platform becomes truly valuable: the information is centralized and made available to several stakeholders,” the researcher explains.

While the two types of expertise provided by IMT Atlantique are different in terms of research, they are very much complementary. Decision-support can, for example, use information obtained via the heating network models to propose new buildings to be connected to the network, or to study the deployment of new production sources. On the other hand, the results from decision-support based on several criteria (often related to various stakeholders) help to define new modeling scenarios for the networks. The researchers in energy efficiency and those in decision-support therefore complement each other through the platform, and provide support to the different stakeholders in the decisions they must make.

 

Ensuring the transformations are here to stay

While the mySMARTlife project will last five years, all the project’s actions — including rolling out the platform — must be completed within the first three years. The last two years will be dedicated to assessing the various actions, studying the impacts and making revisions if necessary. “For example, the first three years could be spent implementing an optimized energy management system, and the two follow-up years would provide feedback on the actual optimization. It is necessary to have sufficient hindsight, spanning several heating seasons,” explains Bruno Lacarrière.

The platform’s specific features must still be determined, and this will be the partners’ first mission. Because although it will initially be a demo platform, it is intended to remain after the project has ended. Therefore, planning must be done ahead of time to determine what form it will take, specifically so that industrial partners, as well as public authorities and final users, can make the most of it. Through this H2020 project, the European Commission is therefore planning to develop concrete actions that are made to last.

 

From a focus on Nantes to an international perspective

The work will initially focus on the Île de Nantes, located at the heart of the city on the Loire river. However, because certain heating and transportation networks are not confined to this area alone, the project will extend from the outset to other areas of the city. For example, the energy used by the Île de Nantes area is partially produced outside the district’s boundaries; the geographic area used for the models must therefore be expanded. Several actions involving other zones in the metropolitan area are already planned.

Furthermore, the mySMARTlife project should not be seen solely as an effort to modernize a few areas of Nantes and the other two pilot cities. Brussels’ desire to ensure the sustainability of the actions over time is also related to its stated intention to ensure the scaling-up of the results from mySMARTlife. The key challenge is to produce knowledge and results that can be transferred to other urban areas, in France and abroad. This explains the advantage of entrusting the H2020 project management to Helsinki and Hamburg, in addition to Nantes.

By working together with the partners from these other two cities, the researchers will be able to validate their models by applying them to other major metropolitan areas. They will also attempt to test the validity of their work in smaller cities, since the project also includes the cities of Bydgoszcz (Poland), Rijeka (Croatia), Varna (Bulgaria) and Palencia (Spain). “The project is therefore aimed at demonstrating the implemented technology’s capacity to mass produce the actions used to develop a smart city,” the researcher points out. A key challenge in transforming cities is to make the transition to a smart city available not only to major metropolitan areas that are technologically advanced in this area, but also to smaller cities.

 

[1] At the local level in Nantes, Nantes Métropole will be supported by nine different partners: IMT Atlantique, Nantes Métropole Habitat, la Semitan, Armines, Atlanpole, Cerema, Engie and Enedis.

 

 

 

 


Particulate matter pollution peaks: detection and prevention

By Véronique Riffault, Professor of Atmospheric Sciences, IMT Lille Douai – Institut Mines-Télécom
This article was originally published in French in The Conversation France.


 

This winter, France and a large part of Europe were struck by episodes of particulate matter pollution. These microscopic particles are known as PM2.5 and PM10 when they measure less than 2.5 or 10 micrometers (µm) in diameter respectively.

They are proven to be harmful to human health because they enter our respiratory system, and the smallest can even enter our blood flow. According to the European Environment Agency, air pollution is the cause of 467,000 premature deaths annually in Europe.

These particles can come from natural sources (sea salt, volcanic eruptions, forest fires, etc.) or human activities (transport, heating, industry, etc.).

 

What is a pollution peak?

Pollution peaks occur when regulatory warning thresholds, as defined in 2008 by the European Union and transposed to French law in late 2010, are exceeded.

Under these regulations, the first level of severity (known as the “public information and warning threshold”) is reached for PM10 particles when the concentration in the air reaches 50 µg per cubic meter (m³); the alert threshold is reached at 80 µg/m³.

There is no warning threshold for PM2.5, only a maximum annual average of 25 µg/m³.
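As a hedged illustration, the PM10 thresholds described above can be turned into a tiny classification function (values are daily averages in µg/m³):

```python
# A minimal sketch of how the PM10 regulatory thresholds described above
# translate into warning levels (values in µg/m³, daily average).

def pm10_warning_level(concentration_ug_m3: float) -> str:
    """Classify a PM10 daily-average concentration against the EU/French thresholds."""
    if concentration_ug_m3 >= 80:
        return "alert threshold exceeded"
    if concentration_ug_m3 >= 50:
        return "public information and warning threshold exceeded"
    return "below regulatory thresholds"

print(pm10_warning_level(63))   # public information and warning threshold exceeded
```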

However, these regulations have serious limitations. The “mass” concentration thresholds, which indicate the total mass of particles in the air and are used to assess the danger of particulate matter pollution, are higher than the levels recommended by the WHO, which are set for PM10 at 20 µg/m³ on average per year and 50 µg/m³ on average per day in order to account for both chronic and short-term exposure.

In addition, the only parameter taken into account in European and French regulations concerns mass concentration. The concentration in terms of number (i.e. the number of particles per m³ of air), and the chemical composition are not taken into account for the triggering of warnings.

Lastly, there are no regulations for very small particulate matter (less than 1 µm), which is mainly produced by human activity, even though it is potentially the most harmful.

 

Comparison of the size of microscopic particles with a hair and grain of sand. US-EPA

 

How are they detected?

In France, the Ministry for the Environment has delegated the task of monitoring air quality and regulated pollutants across the country to certified associations united under Fédération Atmo France. They are supported in this task by the Central Laboratory for the Monitoring of Air Quality.

These associations put in place automatic measurements for the concentration of pollutants, as well as other monitoring measures to allow a better understanding of the phenomena observed, such as the chemical composition of particles, or weather conditions.

These measurements can be combined with approaches for modeling particle concentrations, thanks in particular to Prevair, the French forecasting platform. Calculating the history of air masses can also reveal the origin of the particles, and it is therefore now possible to describe in relative detail the phenomena behind increases in concentrations.

 

Explanation of a real case

The graph below, produced from observations by our research department and measurements by Atmo Hauts-de-France, illustrates an example of pollution peaks that affected the local area in January 2017.

During this period, anticyclonic weather conditions contributed to the stagnation of air masses above pollutant-emitting areas. In addition, cooler temperatures led to an increase in emissions (notably linked to domestic wood heating) and the formation of “secondary” particles which formed after chemical reactions in the atmosphere.

Data V. Riffault/SAGE (Cappa and Climibio projects), CC BY-NC-ND

 

The graphs show changes in mass concentrations of PM10 and PM2.5 over a period of several days at the Lille Fives monitoring station, as well as changes in several chemical species measured in PM1 4 km away on the University of Lille campus.

We can see that almost all the particles fell within the PM2.5 fraction, which rules out natural phenomena such as dust being blown in from deserts, since such particles mainly fall within the range of 2.5 to 10 µm. Furthermore, the particles in question are generally smaller than 1 µm.

The pollution episode began on the evening of Friday January 21 and continued throughout the weekend, in spite of a lower level of road traffic. This can be explained by an increase in wood burning (as suggested by the m/z 60 tracer, a fragment of levoglucosan, a molecule emitted by the pyrolysis of the cellulose found in wood).

Wood burning and other forms of combustion (such as traffic or certain industries) also emit nitrogen dioxide (NO2) as a gas, which can turn into nitric acid (HNO3) through a reaction with hydroxyl radicals (•OH) in the atmosphere.

At sufficiently low temperatures, HNO3 combines with ammonia (NH3) produced by farming activity to form solid ammonium nitrate (NH4NO3). The resulting particles are known as “secondary particles”.
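In equation form, the two steps just described are:

$$\mathrm{NO_2} + {}^{\bullet}\mathrm{OH} \longrightarrow \mathrm{HNO_3}$$

$$\mathrm{HNO_3} + \mathrm{NH_3} \longrightarrow \mathrm{NH_4NO_3}\ \text{(solid)}$$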

A slight decrease in concentrations of particulate matter was observed at the end of the weekend, with more favorable weather conditions for the dispersion and elimination of pollutants.

In this episode, the very low concentrations of sulfates rule out an impact from coal power stations in Germany and Eastern Europe. It is therefore definitely a question of local and regional pollution linked to human activity and which accumulated as a result of unfavorable weather conditions.

 

How can this be avoided?

Since we cannot control the weather conditions, levers of action are primarily based on reducing pollutant emissions.

For example, reducing the formation of secondary particles will entail limiting NO2 emissions linked to road traffic through road space rationing measures; for NH3 emissions, action must be taken regarding farming practices (spreading and rearing methods).

Concerning emissions from wood heating, replacing older devices with cleaner ones will enable better burning and fewer particulate matter emissions; this could be accompanied by an investment in housing insulation.

But these measures should not make us forget populations’ chronic exposure to concentrations of particulate matter which exceed the recommended WHO thresholds. This type of pollution is insidious and is damaging to health in the medium and long term, notably with the development of cardio-vascular and respiratory diseases and lung cancer.

 

 


CES: once the show is over, what do start-ups get out of it?

Several weeks after the CES, what remains of this key event for digital innovation? In addition to offering participants a stage for presenting their products, the event provides a place for intense networking and exchanges with future users. For start-ups, the CES accelerates their path towards leaving the incubator and provides a major boost in developing their brand.

 

Let’s take a quick trip back in time to January 9, 2017. The Consumer Electronics Show, better known as the CES, has just opened its doors in Las Vegas, triggering an avalanche of technology amid a flurry of media attention. Over the course of this 4-day event, the start-up and digital technology ecosystems buzz with activity. On January 12, the CES came to a close. One week later, the return to normal can seem quite abrupt after a show that monopolized the attention of the media and technology stakeholders during its short run. So, was it just a fleeting annual event? Do start-ups (at least those who do not live in the nearby “valley”) merely head home after a short-lived fling?

Of course not! Despite the event’s ephemeral nature, start-ups come away with both medium- and long-term benefits. For Sevenhugs, 2017 was its third consecutive year participating in the event. The start-up, incubated at Télécom ParisTech, has presented two products at CES since 2015: first hugOne, a product for monitoring and optimizing sleep, followed by the Smart Remote, a multipurpose remote control. Announcing new products at the event means, first of all, increasing press coverage, and therefore visibility, in a very competitive ecosystem. But it also means benefiting from back-to-back meetings with business partners.

“During CES, we had meetings with distributors, retailers and potential partners every 30 minutes,” explains Louise Plaquevent, Marketing Director at Sevenhugs. “With so many of these different professionals in the same place, it is possible to establish a lot of contacts that will be helpful throughout the year as we look for partners in Europe and the United States,” she adds. Therefore, CES also represents a springboard for entering the American and global market, which would be less accessible without this gathering.

 

Presenting a product to get critical feedback

Louise Plaquevent also points out that participating at CES exposes the products to the public, resulting in “comments and opinions from potential customers, which helps us improve the products themselves.” The Smart Remote was therefore presented to the public twice in Las Vegas: first in 2016, then again in 2017 as an updated version.

Michel Fiocchi, Director of Entrepreneurship at Mines Saint-Étienne, also shares this view. His role is to provide technological support to the school’s start-ups, founded by students and researchers. “For two of our start-ups — Swap and Air Space Drone — their participation at CES allowed them to refocus their products on clearly identified markets. Through conversations with potential users, they were able to make changes to include other uses and refine their technology,” he explains.

The event in Las Vegas provides a boost for the young entrepreneurs’ projects. Their development is accelerated through the contacts they establish and the opportunity to expose their products to users. For Michel Fiocchi, there is no doubt that participating at CES helps start-ups on their way to leaving the incubator: “There is a very clear difference in the dynamics of start-ups that have participated and those that haven’t,” he stresses.

Finally, participating at this major digital show offers benefits that are difficult to calculate, but may be just as valuable. Louise Plaquevent reminds us, in conclusion, that despite the event’s short duration, it is an intense experience for all the companies that make the trip. She points out that “CES allows us to get to know each other, and unites the teams.” This aspect is particularly important for these smaller companies with fewer employees.

 


The new IMT–French Académie des Sciences Prize will recognize the excellence of researchers in digital technology, energy and the environment

On March 28, IMT and the French Académie des Sciences signed an agreement establishing a new prize aimed at rewarding exceptional scientific contributions in Europe. The first call for applications has now been launched. The deadline for submitting applications is set for May 23, 2017.

 

The fields

The prize will reward European researchers in three fields:

  • The sciences and technologies of the digital transformation in industry;
  • The sciences and technologies of the energy transition;
  • Environmental engineering.

 

Two scientists honored

The IMT–French Académie des Sciences prize comprises two awards:

  • a Grand Prix, awarded to a scientist who has made an exceptional contribution to the fields mentioned above through a body of particularly remarkable work;
  • a Young Scientist Prize, awarded to a scientist under 40 years of age on January 1st of the year the prize is awarded, who has contributed to these same fields through a major innovation.

These prizes will be presented jointly by IMT, with the support of Fondation Mines-Télécom, and the French Académie des Sciences. They will be endowed with prize money of the following amounts:

  • Grand Prix: €30,000;
  • Young Scientist Prize: €15,000.

Each prize will be awarded, without any requirements regarding nationality, to a scientist working in France, or in Europe, in close collaboration with French teams.

 

The application must include the following:

1) the form provided by the French Académie des Sciences;
2) a letter of support providing a personal opinion of the nominee;
3) a brief résumé;
4) the nominee’s main scientific results;
5) a list of the main publications.

 

The Official Awards Ceremony

The formal award ceremony will take place under the Dome of the Academy on October 10, 2017. It will be complemented by a session in mid-October during which the recipients will present their work to the Academy.

According to Sébastien Candel, President of the French Académie des Sciences: “The creation of these two prizes works towards fulfilling several of the Academy’s missions. One of these missions is to encourage the scientific community, by rewarding not only established researchers but also the most promising young researchers. This encouragement is all the more significant due to the prize being awarded as part of a partnership that unites the French Académie des Sciences and another institution, as in this specific case with IMT. These prizes also strengthen the international dimension of French science and our Academy, because they do not have any requirements regarding nationality, but are open to a scientist working in France, or in Europe, with the condition of a close collaboration with French teams.”

According to Christian Roux, Executive Vice President for Research and Innovation at IMT: “By creating these two scientific prizes, awarded jointly by the French Académie des Sciences, IMT is seeking to honor talent, promote partner-based research with companies, and encourage the emergence of innovations and breakthrough approaches. It is also an opportunity for the Institut to gain more visibility and attractiveness in the national and international landscapes of higher education and research.”


e.l.m leblanc and IMT sign a framework agreement on the industry of the future

Both partners are strongly involved in issues related to the industry of the future, and have decided to pool their efforts and formalize their collaboration.

A lasting partnership based on three components: research, R&D and foresight.

1.    Develop cutting-edge research in the following fields:

  • digital technologies,
  • acoustics,
  • Big Data and Machine Learning,
  • Industry of the Future,

2.    Develop innovative joint R&D projects on key industrial challenges such as automation, Big Data, the Internet of Things, security, as well as the production line, logistics and transport, human-machine cooperation and intelligent agents.

3.    Engage in future-oriented reflection and research on humankind’s place in the digital and industrial transitions, and create and develop new paradigms (business models, new organizational forms, new products and services) for the industry of the future.

 

The partners’ perspectives

For IMT teams, the cooperation with e.l.m. leblanc represents an outstanding full-scale, operational testing ground for production. As a company that is very advanced in its own industrial transformation, e.l.m. leblanc is the ideal partner for testing IMT’s research against the practical issues that arise from the changes currently underway in the industry of the future.

In the short term, researchers will work with e.l.m. leblanc teams on issues related to the metallurgy of stainless steel, to big data (maintenance and predictive manufacturing), “factory service” prediction, industrial IoT and the Smart Factory (analysis and modeling of logistic flows, changes to the workstation and the production line).

According to Christian Picory-Donné, Director of Partnerships and Transfers at IMT, “these projects will lay the foundations for many different forms of collaboration that will of course bring together R&D, but also initial and lifelong learning, foresight and think-tanks. This will provide many cross-fertilization opportunities with this major industrial partner.”

According to e.l.m. leblanc, the upheavals taking place in the world due to new technologies will only continue and increase in the coming decades. As an industrial company, e.l.m. leblanc is facing an unprecedented challenge. The company is concerned about its present and future ability to understand these constant waves of changes – specifically changes in business models and production methods.

According to Philippe Laforge, Chief Executive of e.l.m. leblanc: “Our partnership with a leading academic player like IMT is essential in this dynamic and uncertain context. IMT provides us with its expertise, not only in terms of the latest developments, but also in research applied to the challenges of our profession. This collaboration, resulting from our regional and national foundations, is at the heart of e.l.m.’s development strategy in France”.


What is machine learning?

Machine learning is an area of artificial intelligence at the interface between mathematics and computer science. It is aimed at teaching machines to complete certain tasks, often predictive, based on large amounts of data. It underpins text, image and voice recognition technologies, and is also used to develop search engines and recommender systems for online retail sites. More broadly speaking, machine learning refers to a corpus of theories and statistical learning methods, which encompasses deep learning. Stephan Clémençon, a researcher at Télécom ParisTech and Big Data specialist, explains the realities hidden behind these terms.

 

What is machine learning or automatic learning?

Stephan Clémençon: Machine learning involves teaching machines to make effective decisions within a predefined framework, using algorithms fueled by examples (learning data). The learning program enables the machine to develop a decision-making system that generalizes what it has “learned” from these examples. The theoretical basis for this approach states that if my algorithm selects, from a catalogue of decision-making rules that is “not overly complex”, rules that worked well on the sample data, then those rules will continue to work well on future data. This is the capacity to generalize rules that have been learned statistically.

 

Is machine learning supported by Big Data?

SC: Absolutely. The statistical principle of machine learning relies on the representativeness of the examples used for learning. The more examples, and hence learning data, are available, the better the chances of achieving optimal rules. With the arrival of Big Data, we have reached the statistician’s “frequentist heaven”. However, such massive data also poses problems for computation and execution times. To access such massive information, it must be distributed over a network of machines. We now need to understand how to reach a compromise between the quantity of examples presented to the machine and the computation time. Certain infrastructures are quickly overwhelmed by the sheer volume of the massive amounts of data (text, signals, images and videos) made available by modern technology.

 

What exactly does a machine learning problem look like?

SC: Actually, there are several types of problems. Some are called “supervised” problems, because the variable that must be predicted is observed on a statistical sample. One major example of supervised learning from the early days of machine learning was enabling a machine to recognize handwriting. To accomplish this, the machine must be provided with a database of many “pixelated” images, each labelled as an “e”, an “a”, etc. The computer was thus trained to recognize the letter written on a tablet. Observing the handwritten form of a character several times improves the machine’s capacity to recognize it in the future.
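As a minimal sketch of supervised learning (using the small handwritten-digit dataset shipped with scikit-learn; any labelled image set would do), a model is fitted on labelled examples and then evaluated on examples it has never seen:

```python
# A minimal sketch of supervised learning on handwritten digits.
# The model learns from labelled examples and is then evaluated on unseen ones.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)                     # 8x8 pixel images and their labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                              # "learning" from labelled examples
print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```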

Other problems are unsupervised, which means that no labels are available for the observations. This is the case, for example, in S-Monitoring, which is used in predictive maintenance. The machine must learn what is abnormal in order to be able to issue an alert. In a way, the rarity of an event replaces the label. This problem is much more difficult because the result cannot be immediately verified; a later assessment is required, and false alarms can be very costly.
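A hedged sketch of this unsupervised setting, here using an isolation forest on simulated sensor readings: the model learns what “normal” looks like without any labels and flags observations that deviate from it.

```python
# A hedged sketch of anomaly detection for predictive maintenance: the model
# learns what "normal" sensor readings look like without labels, then flags
# observations that deviate from that norm. Sensor values are simulated.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=50.0, scale=2.0, size=(1000, 1))   # routine operation

detector = IsolationForest(random_state=0).fit(normal_readings)

new_readings = np.array([[50.5], [49.0], [72.0]])    # the last value is far from normal
print(detector.predict(new_readings))                # expected: [ 1  1 -1], where -1 = anomaly/alert
```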

Other problems require resolving a dilemma between exploring the possibilities and exploiting past data. This is referred to as reinforcement learning, and it is the case for personalized recommendations. In retargeting, for example, banner ads are programmed to propose links related to your areas of interest, so that you will click on them. However, if you are never shown any links related to classical literature, on the pretext that you have no search history on the subject, it will be impossible to determine whether this type of content would actually interest you. In other words, the algorithm also needs to explore the possibilities, rather than exploiting past data alone.
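A toy epsilon-greedy strategy illustrates the dilemma: mostly show the banner that has worked best so far, but occasionally try another one at random (the topics and click rates below are invented for the example).

```python
# A toy epsilon-greedy bandit illustrating exploration vs. exploitation.
# Topics and click probabilities are invented, purely for illustration.

import random

true_click_rate = {"sport": 0.05, "tech": 0.12, "classical_literature": 0.20}
clicks = {t: 0 for t in true_click_rate}
shows = {t: 0 for t in true_click_rate}

def estimated_rate(topic):
    # Optimistic value for banners never shown, so each gets explored at least once.
    return clicks[topic] / shows[topic] if shows[topic] else 1.0

def pick_banner(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(true_click_rate))    # explore: try something at random
    return max(true_click_rate, key=estimated_rate)    # exploit: best banner so far

for _ in range(10_000):
    topic = pick_banner()
    shows[topic] += 1
    clicks[topic] += random.random() < true_click_rate[topic]

print(max(shows, key=shows.get))   # usually "classical_literature" once it has been explored
```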

 

To resolve these problems, machine learning relies on different types of models, such as artificial neural networks; what does this involve?

SC: Neural networks are a technique based on a relatively old general principle, dating back to the late 1950s, and inspired by the way biological neurons operate. It starts with a piece of information – the equivalent of a stimulation in biology – that reaches the neuron. Whether the stimulation is above or below the activation threshold determines whether the transmitted information triggers a decision or action. The problem is that a single layer of neurons may produce a representation that is too simple to interpret the original input information.

By superimposing layers of neurons, potentially with a varying number of neurons in each layer, new explanatory variables are created, combinations resulting from the output of the previous layer. The calculations continue layer by layer until a complex function has been obtained representing the final model. While these networks can be very predictive for certain problems, it is very difficult to interpret the rules using the neural networks model; it is a black box.
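A minimal sketch of this layered computation, with random weights and no training loop, just to show how each layer builds new variables from the previous layer’s outputs:

```python
# A minimal sketch of the layered computation described above: each layer
# applies weights to the previous layer's output, then a threshold-like
# activation decides what gets passed on. Weights are random, for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    W = rng.normal(size=(x.size, n_out))        # connection weights
    b = rng.normal(size=n_out)                  # activation thresholds (biases)
    return np.maximum(0.0, x @ W + b)           # ReLU: pass on only what exceeds the threshold

x = rng.normal(size=8)          # input "stimulation"
h1 = layer(x, 5)                # first layer builds a new representation
h2 = layer(h1, 3)               # second layer combines the previous layer's outputs
print(h2)                       # the final, more abstract representation
```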

 

We hear a lot about deep learning lately, but what is it exactly?

SC: Deep learning is a deep network of neurons, meaning it is composed of many superimposed layers. Today, this method can be implemented by using modern technology that enables massive calculations to be performed, which in turn allow very complex networks to adapt appropriately to the data. This technique, in which many engineers in the fields of science and technology are very experienced, is currently enjoying undeniable success in the area of computer vision. Deep learning is well suited to the field of biometrics and voice recognition, for example, but it shows mixed performances in handling problems in which the available input information does not fully determine the output variable, as is the case in the fields of biology and finance.

 

If deep learning is the present form of machine learning, what is its future?

SC: In my opinion, research in machine learning will focus specifically on situations in which the decision-making system interacts with the environment that produces the data, as is the case in reinforcement learning. This means that we will learn along a trajectory, rather than from a collection of time-invariant examples thought to definitively represent the entire variability of a given phenomenon. However, more and more studies are being carried out on dynamic phenomena with complex interactions, such as the dissemination of information on social networks. These aspects are often ignored by current machine learning techniques, and today are left to modeling approaches based on human expertise.