
Climate change as seen from space

René Garello, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he French National Centre for Space Research (CNES) recently presented two projects for monitoring greenhouse gas emissions (CO2 and methane) using satellite sensors. The satellites, which are to be launched after 2020, will supplement measurements carried out in situ.

On a global scale, this is not the first such program to measure climate change from space: the European Sentinel satellites have been measuring a number of parameters since Sentinel-1A was launched on April 3, 2014 under the aegis of the European Space Agency. These satellites belong to the Copernicus Programme, Europe's contribution to the Global Earth Observation System of Systems (GEOSS).

Since Sentinel-1A, its successors 1B, 2A, 2B and 3A have been launched successfully, each equipped with sensors serving different functions. The first two carry a radar imaging system for so-called "all-weather" data acquisition: radar wavelengths are unaffected by cloud cover and work equally well by day or night. The next two carry infrared optical observation systems used to monitor the temperature of ocean surfaces. Sentinel-3A also carries four sensors measuring radiometry, temperature, altimetry and surface topography (over both ocean and land).

The launch of these satellites builds on the numerous space missions already in place at European and global level. The data they record and transmit give researchers access to many parameters, taking the planet's "pulse". Some of these data concern the ocean (waves, wind, currents, temperature, etc.) and reveal the evolution of large water masses. The ocean acts as the engine of the climate, and even small variations are directly linked to changes in the atmosphere, whose consequences can sometimes be dramatic (hurricanes). Data collected over continental surfaces concern variations in humidity and soil cover, whose consequences can also be significant (drought, deforestation, loss of biodiversity, etc.).

[Incredible image from the eye of #hurricane #Jose taken on Saturday by the satellite #Sentinel2 Pic @anttilip]

Masses of data to process

Data collected by satellites are processed at several levels, from research laboratories to more operational uses, not to mention the formatting work carried out by the European Space Agency.

The scientific community is focusing increasingly on "essential variables" (physical, biological, chemical, etc.), as defined by groups working on climate change (in particular the Global Climate Observing System, GCOS, in the 1990s). The aim is to define a measurement, or group of measurements (the variable), that contributes critically to characterizing the climate.

A considerable number of these variables are precise enough to be turned into indicators for verifying whether or not the UN's Sustainable Development Goals have been achieved.


The Boreal AJS 3 drone is used to take measurements at a very low altitude above the sea

 

These "essential variables" can be identified after data processing, by combining satellite data with data obtained from a multitude of other sensors, whether located on land, under the sea or in the air. Technical progress (such as images with high spatial or temporal resolution) gives us access to increasingly precise measurements.

The Sentinel program operates in multiple fields of application, including: environmental protection, urban management, spatial planning on a regional and local level, agriculture, forestry, fishing, healthcare, transport, sustainable development, civil protection and even tourism. Amongst all these concerns, climate change features at the center of the program’s attention.

The effort made by Europe has been considerable, representing an investment of over €4 billion between 2014 and 2020. However, the project also has very significant economic potential, particularly in terms of innovation and job creation: economic gains in the region of €30 billion are expected by 2030.

How can we navigate these oceans of data?

Researchers, as well as key players in the socio-economic world, are constantly seeking more precise and comprehensive observations. However, with spatial observation coverage growing over the years, the mass of data obtained is becoming quite overwhelming.

While a smartphone holds a few gigabytes of memory, satellite observation generates petabytes of data to be stored; soon we may even be talking in exabytes, that is, billions of gigabytes. We therefore need to develop methods for navigating these oceans of data, while keeping in mind that the information in question only represents a fraction of what is out there. Even with masses of data available, the number of essential variables remains relatively small.

Identifying phenomena on the Earth’s surface

The most recent developments aim to pinpoint the best possible methods for identifying phenomena from signals and images representing a particular area of the Earth. On the ocean surface, these phenomena include waves and currents; on land, they include the characterization of forests, wetlands, coastal or flood-prone areas, and urban expansion. All this information can help us predict extreme events (hurricanes), manage post-disaster situations (earthquakes, tsunamis) or monitor biodiversity.

The next stage consists in making processing more automatic, by developing algorithms that allow computers to find the relevant variables in as many databases as possible. Higher-level information and intrinsic parameters, such as physical models, human behavior and social networks, should then be added to this.

This multidisciplinary approach constitutes an original trend that should allow us to qualify the notion of “climate change” more concretely, going beyond just measurements to be able to respond to the main people concerned – that is, all of us!

[divider style=”normal” top=”20″ bottom=”20″]

René Garello, Professor in Signal and Image Processing, “Image and Information Processing” department, IMT Atlantique – Institut Mines-Télécom

The original version of this article was published on The Conversation.


Julien Bras: nature is his playground

Cellulose is one of the most abundant molecules in nature. At the nanoscale, its properties allow it to be used for promising applications in several fields. Julien Bras, a chemist at Grenoble INP, is working to further develop the use of this biomaterial. On November 21st he received the IMT-Académie des Sciences Young Scientist Prize at the official awards ceremony held in the Cupola of the Institut de France.

 

Why develop the use of biomass?

Julien Bras: When I was around 20, I realized that oil was a resource that would not last forever, and that we would need to find new solutions. At that time, society was becoming aware of the problems of pollution in cities, especially due to plastics, as well as the dangers of global warming. So I thought we should propose something that would allow us to use the considerable renewable resources that nature has to offer. I therefore attended a chemistry engineering school focused on developing the use of agro-resources, and then did a thesis with Ahlstrom on biomaterials.

What type of biomaterials do you work with?

JB: I work with just about all renewable materials, but especially with cellulose, which is a superstar in the world of natural materials. Nature produces hundreds of billions of tons of this polymer each year. For thousands of years, it has been used to make clothing, paper, etc. It is very well known and offers numerous possibilities. Although I work with all biomaterials, I am specialized in cellulose, and specifically its nanoscale properties.

What makes cellulose so interesting at the nanoscale?

JB: There are two major uses for cellulose at this scale. We can make cellulose nanocrystals, which have very interesting mechanical properties: they are much stronger than glass fibers and can be used, for example, to reinforce plastics. We can also design nanofibers, which are longer and more flexible than the crystals and tangle easily. This makes it possible to create very light, transparent systems covering a large surface area. In one gram of nanofibers, the surface area available for exchange can reach two hundred square meters.

In which industry sectors do we find these forms of nanocellulose? 

JB: For now, few sectors really use them on a large scale. But it’s a material that is growing quickly. We do find nanocellulose in a few niche applications, such as composites, cosmetics, paper and packaging. Within my team, we are leading projects with a wide variety of sectors, to make car fenders, moisturizer, paint, and even bandages for the medical sector. This shows how interested manufacturers are in these biomaterials.

Speaking of applications, you helped create a start-up that uses cellulose

JB: Between 2009 and 2012, we participated in the European project Sunpap, whose goal was to scale up the production of cellulose nanoparticles. The thesis conducted as part of this project led us to file two patents, for cellulose powders and functionalized nanocellulose. We then embarked on the adventure of creating a start-up called Inofib. As one of the first companies in this field, the start-up contributed significantly to the industrial development of these biomaterials. Today, the company focuses on specific functionalizations and applications for cellulose nanofibers. It is not seeking to compete with the major players in this field, who have since begun working on nanocellulose with European support; rather, it seeks to differentiate itself through its expertise and the new functions it offers.

Can nanocellulose be used to design smart materials?  

JB: When I began my research, I was working separately on smart materials and nanocellulose. In particular, I worked with a manufacturer to develop conductive and transparent inks for high-quality materials, which led to the creation of another start-up: Poly-Ink. As things continued to progress, I decided to combine the two areas I was working on. Since 2013, I have been working on designing nanocellulose-based inks, which make it possible to create flexible, transparent and conductive layers to replace, for example, layers that are on the screens of mobile devices.

In the coming years, what areas of nanocellulose will you be focusing on?

JB: I would like to continue in this area of expertise by further advancing the solutions so that they can be produced. One of my current goals is to design them using green engineering processes, which limit the use of toxic solvents and are compatible with an environmental approach. Then I would like to increase their functions so that they can be used in more fields and with improved performance. I really want to show the interest of developing nanocellulose. I need to keep an open mind, so I can find new applications.

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Julien Bras

Julien Bras, 39, has been an associate research professor at Grenoble INP- Pagora since 2006, as well as deputy director of LGP2 (Paper Process Engineering Lab). He was previously an engineer in a company in the paper industry in France, Italy and Finland. For over 15 years, Julien Bras has been focusing his research on developing a new generation of high-performance cellulosic biomaterials and developing the use of these agro-resources.

The industrial dimension of his research is not limited to these collaborations: it also extends to 9 registered patents and, in particular, to the two spin-offs Julien Bras helped found. One specializes in producing conductive and transparent inks for the electronics industry (Poly-Ink), and the other in producing nanocellulose for the paper, composite and chemical industries (Inofib).

[divider style=”normal” top=”20″ bottom=”20″]


Pierre Rouchon: research in control

Pierre Rouchon, a researcher at Mines ParisTech, is interested in the control of systems. Whether it be electromechanical systems, industrial facilities or quantum particles, he works to observe their behavior and optimize their performance. In the course of his work, he had the opportunity to work with the research group led by Serge Haroche, winner of the 2012 Nobel Prize in Physics. On November 21st, Pierre Rouchon was awarded the Grand Prix IMT-Académie des Sciences at an official ceremony held in the Cupola of the Institut de France.

 

From the beginning, you have focused your research on control theory. What is it?

Pierre Rouchon: My specialty is automation: how to optimize the control of dynamic systems. The easiest way to explain this is through an example. I worked on a problem that is well known in mobile robotics: how to parallel park a car hauling several trailers. If you have ever driven a car with a trailer in reverse, you know that you intuitively take the trajectory of the back of the trailer as the reference point. This is what we call a “flat output”; together, the car and trailer form a “flat system” for which simple algorithms exist for planning and tracking the trajectories. For this type of example, my research showed the value of controlling the trajectory of the last trailer, and developing efficient feedback algorithms based on that trajectory. This requires modelling — or, as we used to say, expression through equations — for the system and its movements.
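For readers who want to see what "flatness" looks like in equations, here is a minimal textbook sketch for a simple kinematic car (wheelbase L, heading θ, speed v, steering angle φ), not the multi-trailer system discussed in the interview. Taking the rear-axle position z = (x, y) as the flat output, every state and input can be recovered from z and a finite number of its derivatives:

\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \frac{v}{L}\tan\varphi

v = \pm\sqrt{\dot{x}^2 + \dot{y}^2}, \qquad \theta = \operatorname{atan2}(\dot{y},\dot{x}), \qquad \tan\varphi = L\,\frac{\dot{x}\,\ddot{y} - \dot{y}\,\ddot{x}}{(\dot{x}^2 + \dot{y}^2)^{3/2}}

Planning a trajectory then amounts to choosing a smooth curve t ↦ z(t); the speed and steering commands follow by differentiation, which is why so few computations are needed.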

What does this type of research achieve?

PR: It reduces the calculations that need to be made. A crane is another example of a flat system. By taking the trajectory of the load carried by the crane as the flat output, rather than the crane's arm or hoisting winch, far fewer calculations are required. This leads to more efficient software that assists operators in steering the crane, speeding up their handling operations.

This seems very different from your current work in physics!

PR: What I’m interested in is the concept of feedback. When you measure and observe a classical system, you do so without disturbing it. You can therefore make a correction in real time using a feedback loop: this is the practical value of feedback, which makes systems easier to run and resistant to the disturbances they face. But in quantum systems, you disturb the system just by measuring it, and you therefore have an initial feedback due to the measurement. Moreover, the controller itself can be another quantum system. In quantum systems, the concept of feedback is therefore much more complex. I began studying this with one of my former students, Mazyar Mirrahimi, in the early 2000s. In fact, in 2017 he received the Prix Inria-Académie des Sciences Young Researcher Prize, and we still work together.

What feedback issue did you both work on in the beginning?

PR: When we started, we were taking Serge Haroche's classes at the Collège de France. In 2008, we started working with his team on the experiment he was conducting. He was trying to manipulate and control photons trapped between two mirrors, and had developed very subtle "non-destructive" measurements for counting the photons without destroying them. He earned a Nobel Prize in 2012 for this work. Along with Nina Amini, who was working on her thesis under our joint supervision, Mazyar and I first worked on the feedback loop that, in 2011, made it possible to stabilize the number of photons around a setpoint of a few units (a small whole number).

Are you still interested in quantum feedback today?

PR: Yes, we are seeking to develop systematic mathematical methods for designing feedback loops with a hybrid structure: the first part of the controller is conventional, whereas the second part is itself a quantum system. To design these methods, we rely on superconducting quantum circuits. These are electronic circuits that behave quantum-mechanically at very low temperatures, and they are currently the object of much study. They are controlled and measured by radio-frequency waves in the gigahertz range, which propagate along coaxial cables. We are currently working with experimenters to develop a quantum logic bit (logical qubit), one of the basic components of the famous quantum computer that everyone is working towards!

Is it important for you to have practical and experimental applications for your research?

PR:  Yes. It is important for me to have direct access to the problem I’m studying, to the objective reality shared by the largest possible audience. Working on concrete issues, with a real experiment or a real industrial process enables me to go beyond simulations and better understand the underlying mathematical methods. But it is a daunting task: in general, nothing goes according to plan. When I was working on my thesis, I worked with an oil refinery on controlling the quality of distillation columns. I arrived at the site with a floppy disk containing a Fortran code for a control algorithm tested through laboratory simulations. The engineers and operators on-site said, “Ok, let’s try it, at worst we’ll pour into the cavern”. The cavern was used to store the non-compliant portion of the product, to be reprocessed later. At first, the tests didn’t work, and it was awful and devastating for a theoretician. But when the feedback algorithm finally started working, what a joy and relief!

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Pierre Rouchon


Pierre Rouchon, 57, is a professor at Mines ParisTech and the director of the school's Mathematics and Systems Department. He is a recognized specialist in control theory and has made major scientific contributions to the three major themes of this discipline: flat systems in connection with trajectory planning, quantum systems, and invariant asymptotic observers.

His work has had, and continues to have, a significant impact on a fundamental level. It has been presented in 168 publications, cited 12,000 times, and been the subject of 9 patents. It has been further reinforced by industrial collaborations that have produced concrete and original solutions. Examples include the control of electric motors for Schneider Electric, the development of cryogenic air distillation for Air Liquide, and the regulation of diesel engines to reduce fine-particle emissions with IFP and PSA.

[divider style=”normal” top=”20″ bottom=”20″]

 


Sébastien Bigo: setting high-speed records

Driven by his desire to take the performance of fiber optics to the next level, Sébastien Bigo has revolutionized the world of telecommunications. His work at Nokia Bell Labs has set nearly 30 world records for the bandwidth and reach of optical communications, including the first transmission at a rate of 10 terabits per second. The coherent optical networks he helped develop are now used every day for transmitting digital data. On November 21st he received the Grand Prix IMT-Académie des sciences for the entirety of his work at the official awards ceremony held at the Cupola of the Institut de France.

 

How did you start working on optical communications?

Sébastien Bigo: Somewhat by mistake. When I was in preparatory classes, I was interested in electronics. On the day of my entrance exams, I forgot to hand in an extra page where I had written a few calculations. When I received my results, I was one point away from making the cutoff for admission to the electronics school I wanted to attend, and would have been admitted if I had handed in that page. However, my exam results allowed me to attend the graduate school SupOptique, which recruits students through the same entrance exam, based on a slightly different scale. It's funny actually: if I had handed in that page, I would be working on electronics!

But were you at least interested in optics?

SB: I had a fairly negative image of optical telecommunications. At the time, the work of optics engineers consisted in simply finding the right lens for injecting light into a fiber. I didn’t think that was very exciting… When I contacted Alcatel in search of a thesis topic, I asked them if they had anything more advanced. I was interested in optical signal processing: what light can do to itself. They just happened to have a topic on this subject.

And from there, how did you begin your work in telecommunications?

SB: Through my work in optical signal processing, I came to work on pulses that propagate without changing shape: solitons. Thanks to these waves, I was able to achieve the first all-optical regeneration of a signal, which allows an optical signal to be sent further without converting it into an electrical signal. This enabled me to make the first demonstration of an all-optical transatlantic transmission. Later, solitons were replaced by WDM (wavelength-division multiplexing) technologies, in which each color of light is produced by a different laser and carries its own data stream, yielding much higher rates. This is when I truly got started in the telecommunications profession, and I went on to set a series of 29 world records for transmission rates.

What do these records mean for you?  

SB: The competition to find the best rates is a global one. It’s always gratifying when we succeed before the others. This is what makes the game so interesting for me: knowing that I’m competing against people who are always trying to make things better by reinventing the rules every time. And winning the game has even greater merit since I don’t win every time. Pursuing records then leads to innovations. In the early 2000s, we developed the TeraLight fiber, which was a huge industrial success. This enabled us to continue to set remarkable records later.

Are some records more important than others?

SB: The first one was important: the first transmission over a transatlantic distance at a rate of 20 gigabits per second, using periodic optical regeneration. Then there are records that are symbolic, like when I reached a rate of 10 terabits per second. No one had done this before, even though we had given the secret away shortly before, when we reached 5 terabits per second. That time we finished our measurements at 7am on the first day of the conference where we were to announce the record; I almost missed my flight because of it. The competition is so intense that we submit the results at the very last minute.

Is this quest for increasingly higher rates what led you to develop coherent optical networks?   

SB: I began working on coherent optical networks in 2006, when we realized that we had reached the limit of what we knew how to do. The records had allowed us to independently fine-tune elements that no one had put together before. By adapting our previous findings on modulation, receivers, signal processing, propagation and polarization, we created an optical system that is truly a cut above the rest, and it has become the new industry standard. This led to a new record, with the product of the data rate and the propagation distance exceeding 100 petabits per second times kilometers [1 petabit = 1,000 terabits]: we transmitted 15.5 terabits per second over a distance of 7,200 kilometers, or roughly 112 petabit-kilometers per second. This is above all a perfect example of what a system is: a combination of elements that together are worth much more than the sum of each one separately.

What is your current outlook for the future?

SB: Today I am working on optical networks, which in a way are systems of systems. For a European network, I am focusing on what path to take in order for data transport to be as inexpensive and efficient as possible. I am convinced that this is the area in which major changes will occur in coming years. It is becoming difficult to increase the fibers’ capacity as we approach the Shannon limit. Therefore, to continue transmitting information, we need to think about how we can optimize the filling of communication channels. The goal is to transform the networks to introduce intelligence and make life easier for operators.

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Sébastien Bigo


Sébastien Bigo, 47, director of the IP and Optical Networks research group at Nokia Bell Labs, belongs to the great French school of optics applied to telecommunications. Through his numerous innovations, he has been and continues to be a global pioneer in high-speed fiber optic transmission.

The topics he has studied have been presented in 300 journal and conference publications, and he has filed 42 patents, representing an impressive number of contributions to the different aspects of the scientific field he has so profoundly shaped. These results have been cited over 8,000 times and have led to 29 experimental demonstrations, each constituting a world record for bandwidth or transmission distance.

Some of the resulting innovations have generated significant economic activity. Particular examples include the TeraLight fiber, which Sébastien Bigo helped develop and which has been rolled out over several million kilometers, and coherent networks, which are now used by billions of people every week. These are certainly two of France's most resounding successes in communication technology.

[divider style=”normal” top=”20″ bottom=”20″]

 

 

 


And the winners of the new IMT-Académie des Sciences Awards are…

At the start of 2017, IMT and the French Académie des Sciences created the Grand Prix Award and the Young Scientist Prize (with support from the Fondation Mines-Télécom) to reward exceptional European scientific contributions in the fields of digital technology, energy and the environment. These Prizes were awarded on Tuesday, November 21st at the official awards ceremony held in the Cupola of the Institut de France. We had the opportunity to interview the winners at the event.

 


Philippe Jamet, President of IMT; Pierre Rouchon and Sébastien Bigo, winners of the Grand Prix

 

On March 29th we announced the creation of an IMT-Académie des Sciences Prize in the following fields:

  • Sciences and technologies of the digital transformation in industry
  • Sciences and technologies of the energy transition
  • Environmental engineering

The Grand Prix (€30,000) honors a scientist who has made an outstanding contribution to one of these fields through a particularly remarkable body of work, while the Young Scientist Prize (€15,000) recognizes a scientist under 40 who has contributed to one of these fields through a major innovation.

Last June, the jury assessed the 20 applications submitted, all of a very high level, and made its selection: 13 submissions were in the running for the Grand Prix and 7 for the Young Scientist Prize. The three 2017 winners best reflect the intentions that inspired the creation of these awards.

 

The “Grand Prix IMT-Académie des Sciences” was awarded to two winners in optics and mathematics

For this first edition, the jury selected two winners for the Grand Prix IMT-Académie des Sciences: Sébastien Bigo of Nokia Bell Labs and Pierre Rouchon of Mines ParisTech.

– Sébastien Bigo, 47, director of the IP and Optical Networks research group at Nokia Bell Labs, belongs to the great French school of optics applied to telecommunications. Through his numerous innovations, he has been and continues to be a global pioneer in high-speed fiber optic transmission…

Read the interview with Sébastien Bigo on I’MTech

– Pierre Rouchon, 57, is a professor at Mines ParisTech and the director of the Mathematics and Systems research unit at the same school. He is a recognized specialist in control theory. He has made major scientific contributions to the three themes of this discipline: flat systems in connection with trajectory planning, quantum systems and invariant asymptotic observers…

Read the interview with Pierre Rouchon on I’MTech

 

The “IMT-Académie des Sciences Young Scientist Prize” awarded in the field of cellulosic biomaterials

– Julien Bras, 39, has been a lecturer and research supervisor at Grenoble INP – Pagora since 2006, as well as being the deputy director of the LGP2 Paper Process Engineering Laboratory, after having begun his professional career as an engineer in a company in the paper industry in Italy and Finland. For over 15 years Julien Bras has been focusing his research on developing new, highly innovative engineering procedures, with the aim of creating a new generation of high-performance cellulosic biomaterials and developing the use of these agro-resources…

Read the interview with Julien Bras on I’MTech

 

 


Invenis: machine learning for non-expert data users

Invenis went into incubation at Station F at the beginning of July, and has since been developing at full throttle. This start-up has managed to make a name for itself in the highly competitive sector of decision support solutions using data analysis. Its strength? Providing easy-to-use software aimed at non-expert users, which processes data using efficient machine learning algorithms.

 

In 2015, Pascal Chevrot and Benjamin Quétier, both working at the Ministry of Defense at the time, made an observation that prompted them to launch a business: the majority of companies were using outdated digital decision-support tools that were increasingly ill-suited to their needs. "On the one hand, traditional software was struggling to adapt to big data processing and artificial intelligence", Pascal Chevrot explains. "On the other hand, there were expert tools, but they were inaccessible to anyone without significant technical knowledge." Faced with this situation, the two colleagues founded Invenis in November 2015 and joined the ParisTech Entrepreneurs incubator. On July 3, 2017, less than two years later, they joined Station F, one of the biggest start-up campuses in the world, located in the 13th arrondissement of Paris.

The start-up's proposition is certainly appealing: it aims to fill the gap in decision-support tools with SaaS (Software as a Service) software. Its goal is to make the value of data available to the people who manipulate it every day to obtain information, but who are by no means experts. Invenis therefore targets professionals who know how to extract data and use it to obtain information, but who find themselves limited by the capabilities of their tools when they want to go further. Invenis' solution allows these professionals to carry out data processing using machine learning algorithms, simply.

Pascal Chevrot illustrates how simple it is to use with an example. He takes two data sets and uploads them to Invenis: one lists the number of sports facilities per activity and per department, the other the population of each city in France. The user then chooses the processing they wish to perform from a library of modules. For example, they could first decide to group the different kinds of sports facilities (football stadiums, boules pitches, swimming pools, etc.) by French region. In parallel, the software aggregates the number of inhabitants per commune to provide a population figure at the regional scale. Once these steps are complete, the user can run an automated segmentation, or "clustering", to classify regions into groups according to the density of sports facilities. In a few clicks, Invenis thus lets users see which regions have a high number of sports facilities relative to their population and which have few, and should therefore be invested in. Each processing step is performed simply by dragging the corresponding module into the interface and chaining modules together into a complete data-processing workflow.
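To give a concrete idea of the aggregate-then-cluster workflow described above, here is a minimal sketch in Python. The figures and column names are invented for illustration; this is not Invenis' code or data.

# Hypothetical sketch of the aggregate-then-cluster workflow described above.
# The figures are invented for illustration; they are not real French statistics.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Dataset 1: sports facilities per activity, already grouped by region
facilities = pd.DataFrame({
    "region": ["A", "B", "C", "D"],
    "stadiums": [120, 300, 80, 150],
    "pools": [40, 90, 25, 60],
})

# Dataset 2: population per city, aggregated to the regional scale
population = pd.DataFrame({
    "region": ["A", "A", "B", "C", "D", "D"],
    "city_population": [500_000, 250_000, 2_000_000, 300_000, 800_000, 400_000],
}).groupby("region", as_index=False)["city_population"].sum()

# Join the two sources and compute facility density per 100,000 inhabitants
merged = facilities.merge(population, on="region")
for col in ("stadiums", "pools"):
    merged[f"{col}_per_100k"] = merged[col] / merged["city_population"] * 100_000

# Automated segmentation ("clustering") of regions by facility density
features = StandardScaler().fit_transform(merged[["stadiums_per_100k", "pools_per_100k"]])
merged["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(merged[["region", "stadiums_per_100k", "pools_per_100k", "cluster"]])

In Invenis each of these steps corresponds to a module dragged into the interface; the sketch simply makes the underlying operations visible.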

The user-friendliness of the Invenis software lies in how simple these processing modules are to use. Every action has been designed to be easy for the user to understand. The algorithms come from the open-source Hadoop and Spark ecosystems, which are reference tools in the sector. "We then add our own algorithms on top of these existing ones, making them easier to manage", highlights Pascal Chevrot.

For example, the clustering algorithm they use ordinarily requires a certain number of factors to be defined. Invenis’ processing module automatically calculates these factors using its proprietary algorithms. It does, however, allow expert users to modify these if necessary.
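One common, generic way to choose such a factor automatically, here the number of clusters, is to scan a few candidate values and keep the one with the best silhouette score. The snippet below only illustrates that idea with synthetic data; it is not Invenis' proprietary method.

# Generic sketch: pick the number of clusters automatically via the silhouette score.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # stand-in data

def auto_n_clusters(X, candidates=range(2, 8)):
    scores = {}
    for k in candidates:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get)   # keep the best-scoring candidate

print("chosen number of clusters:", auto_n_clusters(X))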

In addition to its simplicity, the Invenis program has another advantage: close management of data access rights. "Few tools do this", affirms Pascal Chevrot, before explaining the benefits of this function: "For some businesses, such as telecommunication operators, it's important because they are accountable to the CNIL (the French data protection authority) for the confidentiality of their data, and soon this will also be the case across Europe, with the arrival of the GDPR. Not forgetting that more established businesses have implemented data governance around these questions."

 

Revealing the value of data

Another advantage of Invenis is that it offers different engagement formats. The start-up offers free trial periods to data users who are not satisfied with their current tools, along with the opportunity to talk to the technical team, who can demonstrate the tool's capabilities and even develop a proof of concept. The start-up also has a support and advice service for businesses that have identified a problem they would like to solve using their data. "We offer clients guaranteed results, helping them resolve their problem with the intention of ultimately making them independent", explains the co-founder.

It was in this second format that Invenis carried out its most iconic proof of concept, with CityTaps, another start-up from ParisTech Entrepreneurs, which offers prepaid water meters. Using the Invenis software allowed CityTaps to look at three questions. Firstly, how do users consume water depending on the day of the week, the size of the household, the season, etc.? Secondly, what is the optimal moment to warn users that they need to top up their meter, and how quickly do they do so after receiving an alert SMS? And finally, how can temperature changes in the meters due to the weather best be predicted? Invenis provided answers to these questions by applying its processing solutions to CityTaps' data.

The case of CityTaps shows the extent to which data management tools are crucial for companies. Machine learning and intelligent data processing are essential for generating value, yet these technologies can be difficult to access when technical knowledge is lacking. Enabling businesses to access this value by lowering the skills barrier is, as Pascal Chevrot concludes, Invenis' number one aim.


Ethics, an overlooked aspect of algorithms?

We now encounter algorithms at every moment of the day, and this exposure can be dangerous: algorithms have been shown to influence our political opinions, moods and choices. Far from being neutral, they carry their developers' value judgments, which are imposed on us, most of the time without our noticing. It is now necessary to examine the ethical aspects of algorithms and to find solutions for the biases they impose on their users.

 

[dropcap]W[/dropcap]hat exactly does Facebook do? Or Twitter? More generally, what do social media sites do? The overly-simplified but accurate answer is: they select the information which will be displayed on your wall in order to make you spend as much time as possible on the site. Behind this time-consuming “news feed” hides a selection of content, advertising or otherwise, optimized for each user through a great reliance on algorithms. Social networks use these algorithms to determine what will interest you the most. Without questioning the usefulness of these sites — this is most likely how you were directed to this article — the way in which they function does raise some serious ethical questions. To start with, are all users aware of algorithms’ influence on their perception of current events and on their opinions? And to take a step further, what impacts do algorithms have on our lives and decisions?

For Christine Balagué, a researcher at Télécom École de Management and member of CERNA (see text box at the end of the article), “personal data capturing is a well-known topic, but there is less awareness about the processing of this data by algorithms.” Although users are now more careful about what they share on social media, they have not necessarily considered how the service they use actually works. And this lack of awareness is not limited to Facebook or Twitter. Algorithms now permeate our lives and are used in all of the mobile applications and web services we use. All day long, from morning to night, we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, AirBnb, etc.

Are your trips determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for IMTech

 

"They control our lives," says Christine Balagué. "A growing number of articles published by researchers in various fields have underscored the power algorithms have over individuals." In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research, demonstrated how a search engine could influence election results. His study, carried out on over 4,000 individuals, showed that candidates' rankings in search results influenced at least 20% of undecided voters. In another striking example, a study carried out by Facebook in 2012 on 700,000 of its users showed that people who had previously been exposed to negative posts went on to post predominantly negative content, while those who had been exposed to positive posts posted essentially positive content. This shows that algorithms are capable of manipulating individuals' emotions without their realizing it or being informed of it. What role do our personal preferences play in a system of algorithms of which we are not even aware?

 

The opaque side of algorithms

One of the main ethical problems with algorithms stems from this lack of transparency. Two users who carry out the same query on a search engine such as Google will not have the same results. The explanation provided by the service is that responses are personalized to best meet the needs of each of these individuals. But the mechanisms for selecting results are opaque. Among the parameters taken into account to determine which sites will be displayed on the page, over a hundred have to do with the user performing the query. Under the guise of trade secret, the exact nature of these personal parameters and how they are taken into account by Google’s algorithms is unknown. It is therefore difficult to know how the company categorizes us, determines our areas of interest and predicts our behavior. And once this categorization has been carried out, is it even possible to escape it? How can we maintain control over the perception that the algorithm has created about us?

This lack of transparency prevents us from understanding the possible biases that can result from data processing. Nevertheless, these biases exist, and protecting ourselves from them is a major issue for society. A study by Grazia Cecere, an economist at Télécom École de Management, provides an example of how individuals are not treated equally by algorithms. Her work highlighted discrimination between men and women in a major social network's algorithms for associating interests. "In creating an ad for STEM (science, technology, engineering and mathematics), we noticed that the software preferred to distribute it to men, even though women show more interest in this subject," explains Grazia Cecere. Far from the myth of malicious artificial intelligence, this sort of bias is rooted in human actions. We must not forget that behind each line of code, there is a developer.

Algorithms are used first and foremost to propose services, which are most often commercial in nature. They are thus part of a company's strategy and reflect that strategy in order to meet its economic demands. "Data scientists working on a project seek to optimize their algorithms without necessarily thinking about the ethical issues involved in the choices made by these programs," points out Christine Balagué. In addition, humans have perceptions of the society to which they belong and integrate these perceptions, consciously or unconsciously, into the software they develop. The value judgments present in algorithms therefore quite often reflect the value judgments of their creators. In the example of Grazia Cecere's work, this provides a simple explanation for the bias discovered: "An algorithm learns what it is asked to learn and replicates stereotypes if they are not removed."


What biases are hiding in the digital tools we use every day? What value judgments passed down from algorithm developers do we encounter on a daily basis? Illustration: Diane Rottner for IMTech.

 

A perfect example of this phenomenon involves medical imaging. An algorithm used to classify a cell as sick or healthy must be configured by weighing the number of false positives against the number of false negatives. Developers must therefore decide to what extent it is tolerable for healthy individuals to receive positive tests in order to prevent sick individuals from receiving negative tests. Doctors prefer false positives to false negatives, while the scientists who develop the algorithms prefer false negatives to false positives, as scientific knowledge is cumulative. Depending on their own values, developers will favor one of these viewpoints.
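This trade-off can be made concrete with a small numerical sketch using synthetic classifier scores (not real medical data): moving the decision threshold simply shifts errors between false negatives and false positives, and someone has to choose where to put it.

# Synthetic illustration of the false-positive / false-negative trade-off.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "probability of being sick" scores produced by some classifier
healthy_scores = rng.beta(2, 5, size=1000)   # healthy cells tend to score low
sick_scores = rng.beta(5, 2, size=1000)      # sick cells tend to score high

for threshold in (0.3, 0.5, 0.7):
    false_positives = np.mean(healthy_scores >= threshold)  # healthy flagged as sick
    false_negatives = np.mean(sick_scores < threshold)      # sick cells missed
    print(f"threshold {threshold:.1f}: "
          f"FP rate {false_positives:.1%}, FN rate {false_negatives:.1%}")

Lowering the threshold reduces the false negatives the doctor worries about, but only at the cost of more false positives: exactly the value judgment the paragraph describes.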

 

Transparency? Of course, but that’s not all!

One proposal for combating these biases is to make algorithms more transparent. Since October 2016, the law for a digital republic, proposed by Axelle Lemaire, the former Secretary of State for Digital Affairs, requires transparency for all public algorithms. This law was responsible for making the higher education admission website (APB) code available to the public. Companies are also increasing their efforts for transparency. As of May 17, 2017, Twitter has allowed its users to see the areas of interest the site associates with them. But despite these good intentions, the level of transparency is far from sufficient for ensuring the ethical dimension. First of all, code understandability is often overlooked: algorithms are sometimes delivered in formats which make them difficult to read and understand, even for professionals. Furthermore, transparency can be artificial. In the case of Twitter, “no information is provided about how user interests are attributed,” observes Christine Balagué.

[Interests from Twitter
These are some of the interests matched to you based on your profile and activity.
You can adjust them if something doesn’t look right.]

Which of this user’s posts led to his being classified under “Action and Adventure,” a very broad category? How are “Scientific news” and “Business and finance” weighed in order to display content in the user’s Twitter feed?

 

To take a step further, the degree to which algorithms are transparent must be assessed. This is the aim of the TransAlgo project, another initiative launched by Axelle Lemaire and run by Inria. “It’s a platform for measuring transparency by looking at what data is used, what data is produced and how open the code is,” explains Christine Balagué, a member of TransAlgo’s scientific council. The platform is the first of its kind in Europe, making France a leading nation in transparency issues. Similarly, DataIA, a convergence institute for data science established on Plateau de Saclay for a period of ten years, is a one-of-a-kind interdisciplinary project involving research on algorithms in artificial intelligence, their transparency and ethical issues.

The project brings together multidisciplinary scientific teams in order to study the mechanisms used to develop algorithms. The humanities can contribute significantly to the analysis of the values and decisions hiding behind the development of codes. “It is now increasingly necessary to deconstruct the methods used to create algorithms, carry out reverse engineering, measure the potential biases and discriminations and make them more transparent,” explains Christine Balagué. “On a broader level, ethnographic research must be carried out on the developers by delving deeper into their intentions and studying the socio-technological aspects of developing algorithms.” As our lives increasingly revolve around digital services, it is crucial to identify the risks they pose for users.

Further reading: Artificial Intelligence: the complex question of ethics

[box type=”info” align=”” class=”” width=””]

A public commission dedicated to digital ethics

Since 2009, the Allistene association (Alliance of digital sciences and technologies) has brought together France’s leading players in digital technology research and innovation. In 2012, this alliance decided to create a commission to study ethics in digital sciences and technologies: CERNA. On the basis of multidisciplinary studies combining expertise and contributions from all digital players, both nationally and worldwide, CERNA raises questions about the ethical aspects of digital technology. In studying such wide-ranging topics as the environment, healthcare, robotics and nanotechnologies, it strives to increase technology developers’ awareness and understanding of ethical issues.[/box]

 

 

 


How is technology changing the management of human resources in companies?

In business, digital technology is revolutionizing more than just production and design. The human resources sector is also being greatly affected, whether through better talent spotting, optimized recruitment processes or getting employees more involved in the company's vision. This is illustrated through two start-ups incubated at ParisTech Entrepreneurs: KinTribe and Brainlinks.

 

Recruiters in large companies can sometimes store tens of thousands of profiles in their databases. However, it is often difficult to make use of such a substantial pool of information using conventional methods. "It's impossible to keep such a large file up to date, so the data often become obsolete very quickly", states Chloé Desault, a former in-company recruiter and co-founder of the start-up KinTribe. "Along with Louis Laroche, my co-founder, who was also formerly a recruiter, we aim to facilitate the use of these talent pools and improve the daily lives of recruiters", she adds. The software solution enables recruitment professionals to build a recruitment pool from professional social networks. With KinTribe, they can create a usable database and perform complex searches to find, among tens of thousands of available profiles, the best person to contact for their need. "This means they no longer have to waste time on people who do not correspond to the target in question", affirms the co-founder.

The software's algorithms then process the collected data to produce a rating for the relevant market. This rating indicates how receptive a person is likely to be to an external recruitment offer. "70% of people on LinkedIn aren't actively looking for a job, but would still consider a job offer if it was presented to them", Louis Laroche explains. To identify these people, and how likely they are to be interested, the algorithm relies on key factors identified by recruiters: age, field of work and time spent in the current job all influence how open someone is to a proposition.
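To make the idea tangible, here is a deliberately naive scoring sketch built only from the factors quoted above (age, field of work, time in the current job). The weights and rules are invented for illustration; this is in no way KinTribe's actual algorithm.

# Hypothetical "openness to offers" score; weights and rules are invented for illustration.
def market_rating(age: int, field: str, years_in_current_job: float) -> float:
    score = 0.0
    score += min(years_in_current_job, 6) / 6 * 0.5          # longer tenure: more likely to consider a move
    score += 0.3 if field in {"software", "data"} else 0.1   # fields in high demand are contacted more
    score += 0.2 if 25 <= age <= 40 else 0.1                 # purely illustrative age factor
    return round(min(score, 1.0), 2)

print(market_rating(age=31, field="software", years_in_current_job=4))  # prints 0.83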

One of the start-up's next goals is to add new sources of data into the mix, allowing users to search other networks for new talent. Multiplying the available data will also improve the market rating algorithms. "We want to provide recruiters with the best possible knowledge by aggregating the maximum amount of social data that we can", summarizes the KinTribe co-founder.

Finally, the two entrepreneurs are also interested in other topics within the field of recruitment. "As a start-up, we have to try to stay ahead of the curve and understand what the market will do next. We dedicated part of our summer to exploring the potential of a new co-optation product", Chloé Desault concludes.

 

From recruitment to employee involvement

In human resources, software tools represent more than just an opportunity for recruiting new talent. One of their aims is also to get employees involved in the company’s vision, and to listen to them in order to pinpoint their expectations. The start-up Brainlinks was created for this very reason. Today, it offers a mobile app called Toguna for businesses with over 150 people.

The concept is simple: with Toguna, general management or human resources departments can ask employees a question such as: “What is your view of the company of the future?” or “What new things would you like to see in the office?” The employees, who remain anonymous on the app, can then select the questions they are interested in and offer responses that will be made public. If a response made by a colleague is interesting, other employees can vote for it, thus creating a collective form of involvement based on questions about life at work.

In order to make Toguna appeal to the maximum number of people, Brainlinks has opted for a smart, professional design: “Contributions are formatted by the person writing them; they can add an image and choose the font, etc.”, explains Marc-Antoine Garrigue, the start-up’s co-founder. “There is an element of fun that allows each person to make their contributions their own”, he continues. According to Marc-Antoine Garrigue, this feature has helped them reach an average employee participation rate of 85%.

Once the votes have been cast and the propositions collected, managers can analyze the responses. When a response is chosen, it is highlighted on the app, providing transparency in the inclusion of employee ideas. A field of development for the app is to continue improving the dialogue between managers and employees. “We hope to go even further in constructing a collective history: employees make contributions on a daily basis and in return, the management can explain the decisions they have made after having consulted these”, outlines the co-founder. This is an opportunity that could really help businesses to see digital transformation as a vehicle for creativity and collective involvement.

 

Fine particles are dangerous, and not just during pollution peaks

Véronique Riffault, IMT Lille Douai – Institut Mines-Télécom and François Mathé, IMT Lille Douai – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he French Agency for Food, Environmental and Occupational Health and Safety (ANSES) released a new notice concerning air pollution yesterday. Having been asked about possible changes to the standards for ambient air quality, particularly concerning fine particles (PM10 and PM2.5), the agency highlighted the importance of pursuing long-term public policies that improve air quality. It recommends lowering the annual threshold value for PM2.5 to match the level recommended by the WHO, and introducing a daily threshold value for this pollutant. As the following data visualization shows, the problem extends throughout Europe.

Average concentrations of particulate matter with an aerodynamic diameter below 2.5 micrometers (known as "PM2.5", which together with PM10 makes up "fine particles") for the year 2012. Values calculated from measurements at fixed air quality monitoring stations, in micrograms per m³ of air. Data source: AirBase.

Hovering the mouse over a circle displays the level reached during peak periods; the size of each circle varies with the amount. The annual average is also provided, reflecting long-term exposure and its proven health impacts (particularly on the respiratory and cardiovascular systems). It should be noted that the annual target value for PM2.5 specified by European legislation is currently 25 µg/m³. This will drop to 20 µg/m³ in 2020, while the WHO currently recommends an annual threshold of 10 µg/m³.
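As a quick illustration of how these three thresholds compare, the small sketch below classifies a few invented annual station averages; the values are not AirBase data.

# Compare hypothetical annual mean PM2.5 values (µg/m³) with the thresholds cited above.
THRESHOLDS = [(10, "within WHO guideline"),
              (20, "within future EU value (2020)"),
              (25, "within current EU target value")]

def classify(annual_mean_pm25: float) -> str:
    for limit, label in THRESHOLDS:
        if annual_mean_pm25 <= limit:
            return label
    return "above current EU target value"

for value in (8.5, 14.0, 22.0, 31.0):   # invented station averages
    print(f"{value:5.1f} µg/m³ -> {classify(value)}")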

The data shown on this map correspond exclusively to so-called "background" sites, covering not only urban environments but also rural ones, away from the direct influence of nearby pollution sources (traffic or industry). AirBase collects data supplied by member states using measurement methods that may vary from site to site but must meet pollutant-specific data quality objectives (90% of annual PM2.5 data approved, with an uncertainty of ±25%). This perhaps explains why certain regions show little or no data (Belarus, Ukraine, Bosnia-Herzegovina and Greece), keeping in mind that a single station cannot be representative of air quality across an entire country (as is the case in Macedonia).

The PM2.5 shown here may be emitted directly into the atmosphere (primary particles) or formed by chemical reactions between gaseous pollutants in the atmosphere (secondary particles). Secondary formation of PM2.5 often drives pollution peaks at certain times of the year, when the sources of precursor pollutants are strongest and meteorological conditions allow them to accumulate. The sources connected to human activity are mainly combustion processes (such as vehicle engines and the burning of biomass and coal for residential heating) and agricultural activity.

The above map shows that the threshold suggested by the WHO is exceeded at the large majority of stations, particularly in Central Europe (Slovakia, southern Poland), largely because of residential heating, and in Northern Italy (the Po Valley), where topographical and meteorological conditions are unfavorable.

Currently, only 1.16% of stations are recording measurements that are still within the WHO recommendations for PM2.5 (shown in light green on the map). On top of this, 13.6% of stations have already reached the future European limits to be set in 2020 (shown in green and orange circles).

This illustrates that a large share of the European population is exposed to particle concentrations that are harmful to health, and that significant efforts remain to be made. In addition, while the mass concentration of particles is a good indicator of air quality, their chemical composition should not be forgotten, something which remains a challenge for health specialists and policymakers, especially in real time.

Véronique Riffault, Professor in Atmospheric Sciences, IMT Lille Douai – Institut Mines-Télécom and François Mathé, Professor-Researcher, President of the AFNOR X43D Normalization Commission “Ambient Atmospheres”, Head of Studies at LCSQA (Laboratoire Central de Surveillance de la Qualité de l’Air), IMT Lille Douai – Institut Mines-Télécom

 

The original version of this article was published in French in The Conversation France.

 

 


No autonomous cars without cybersecurity

Protecting cars from cyber-attacks is an increasingly important concern in developing smart vehicles. As these vehicles become more complex, the number of potential hacks and constraints on protection algorithms is growing. Following the example of the “Connected cars and cybersecurity” chair launched by Télécom ParisTech on October 5, research is being carried out to address this problem. Scientists intend to take on these challenges, which are crucial to the development of autonomous cars.

 

Connected cars already exist. From smartphones connected to the dashboard, to computer-aided maintenance operations, cars are packed with communicating embedded systems. And yet, they still seem a long way from the futuristic vehicles of our imagination. They do not (yet) all communicate with one another or with road infrastructure to warn of dangerous situations, for example. Cars are struggling to make the leap from "connected" to "intelligent". And without intelligence, they will never become autonomous. Guillaume Duc, a research professor in electronics at Télécom ParisTech who specializes in embedded systems, sums up one of the hurdles to this development: "Autonomous cars will not exist until we are able to guarantee that cyber-attacks will not put a smart vehicle, its passengers or its environment in danger."

Cybersecurity for connected cars is indeed crucial to their development. Whether rightly or wrongly, no authority will authorize the sale of increasingly intelligent vehicles without a guarantee that they will not run out of control on the roads. The topic is of such importance to the industry that researchers and manufacturers have teamed up to find solutions. A "Connected Cars and Cybersecurity" chair bringing together Télécom ParisTech, Fondation Mines-Télécom, Renault, Thalès, Nokia, Valéo and Wavestone was launched on October 5. According to Guillaume Duc, the specific features of connected cars make this a unique research topic.

"The security objectives are obviously the same as in many other systems," he says, pointing to the problems of information confidentiality or of certifying that information has really been sent by one sensor rather than another. "But cars have a growing number of components, sensors, actuators and communication interfaces, making them easier to hack," he goes on to say. The more devices there are in a car, the more communication points it has with the outside world. And it is precisely these entry points that are the most vulnerable. They are not necessarily the instruments that first come to mind, like the radio or 4G connectivity.

Certain tire pressure sensors use wireless communication to display a possible flat tire on the dashboard. But wireless communication means that without an authentication system to ensure that the received information has truly been sent by this sensor, anyone can pretend to be this sensor from outside the car. And if you think sending incorrect information about tire pressure seems insignificant, think again. “If the central computer expects a value of between 0 and 10 from the sensor and you send it a negative number, for example, you have no idea how it will react,” explains the researcher. This could crash the computer, potentially leading to more serious problems for the car’s controls.
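A minimal sketch of the two defenses implied here, authenticating that a frame really comes from the sensor and refusing out-of-range values, might look like the following. This is a generic illustration using a shared-key HMAC, not an automotive standard or any manufacturer's implementation.

# Generic sketch: authenticate a wireless sensor frame and validate its range.
import hmac, hashlib, struct

SHARED_KEY = b"per-sensor secret provisioned at the factory (illustrative)"
PRESSURE_RANGE_BAR = (0.0, 10.0)   # the "0 to 10" range mentioned by the researcher

def pack_reading(pressure_bar: float, counter: int) -> bytes:
    payload = struct.pack("!fI", pressure_bar, counter)   # counter helps thwart replay
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def accept_reading(frame: bytes):
    payload, tag = frame[:-8], frame[-8:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None                                   # forged or corrupted frame
    pressure, _counter = struct.unpack("!fI", payload)
    if not (PRESSURE_RANGE_BAR[0] <= pressure <= PRESSURE_RANGE_BAR[1]):
        return None                                   # out-of-range value, never trusted blindly
    return pressure

print(accept_reading(pack_reading(2.3, counter=1)))   # accepted, prints roughly 2.3
print(accept_reading(b"\x00" * 16))                   # rejected (bad tag), prints None

In a real car the key management, replay protection and failure handling would of course be far more involved; the point is simply that unauthenticated, unvalidated wireless inputs should never reach the central computer.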

 

Adapting cybersecurity mechanisms for cars

The stakes are high for research on how to protect each of these communicating elements. These components have only limited computing power, while algorithms that protect against attacks usually demand substantial computing resources. "One of the chair's aims is to successfully adapt algorithms so that they guarantee security while requiring fewer computing resources," says Guillaume Duc. This challenge goes hand in hand with another: limiting the latency of the car's execution of critical decisions. Adding algorithms to embedded systems increases the computing time before an action is carried out, and a car cannot afford to take longer to brake. Researchers therefore have their work cut out for them.

In order to address these challenges, they are looking to the avionics sector, which has been facing problems associated with the proliferation of sensors for years. But unlike planes, fleets of cars are not operated in an ultra-controlled environment. And in contrast to aircraft pilots, drivers are masters of their own cars and may handle them as they like. Cars are also serviced less regularly. It is therefore crucial to guarantee that cybersecurity tools installed in cars cannot be altered by their owners’ tinkering.

And since absolute security does not exist and “algorithms may eventually be broken, whether due to unforeseen vulnerabilities or improved attack techniques,” as the researcher explains, these algorithms must also be agile, meaning that they can be adapted, upgraded and improved without automakers having to recall an entire series of cars.

 

Resilience when faced with an attack

But if absolute security does not exist, where does this leave the 100% security guarantee against attacks, which is the critical factor in developing autonomous cars? In reality, researchers do not seek to protect against every possible attack on connected cars. Their goal is rather to ensure that even if an attack succeeds, it will not prevent the driver or the car itself from remaining safe. And of course, this must be possible without having to brake suddenly on the highway.

To reach these objectives, researchers are using their expertise to build resilience into embedded systems. The problem recalls that of critical infrastructures, such as nuclear power plants, which cannot simply shut down when under attack. In the case of cars, a malicious intrusion into the system must first be detected when it occurs. To do so, the vehicle's behavior is constantly compared with previously recorded behaviors considered normal; if an action is suspicious, it is flagged as such. In the event of a real attack, it is crucial to guarantee that the car's main functions (steering, brakes, etc.) will be maintained and isolated from the rest of the system.
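A toy version of that comparison step could flag any behavior that strays too far from previously recorded normal ranges. The signal values below are invented, and real in-vehicle intrusion detection is far more sophisticated; this only illustrates the principle.

# Toy anomaly flagging: compare live values with previously recorded "normal" behavior.
import statistics

normal_steering_rates = [0.1, 0.3, 0.2, 0.4, 0.25, 0.35, 0.15]   # recorded values, invented units
mean = statistics.mean(normal_steering_rates)
std = statistics.stdev(normal_steering_rates)

def is_suspicious(value: float, n_sigmas: float = 3.0) -> bool:
    # Flag anything more than n_sigmas standard deviations away from the recorded mean
    return abs(value - mean) > n_sigmas * std

for observed in (0.3, 0.9, 5.0):
    print(observed, "suspicious" if is_suspicious(observed) else "normal")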

Ensuring a car’s resilience from its design phase, resilience by design, is also the most important condition for cars to continually become more autonomous. Automakers can provide valuable insight for researchers in this area, by contributing to discussions about a solution’s technical acceptability or weighing in on economic issues. While it is clear that autonomous cars cannot exist without security, it is equally clear that they will not be rolled out if the cost of security makes them too expensive to find a market.

[box type=”info” align=”” class=”” width=””]

Cars: personal data on wheels!

The smarter the car, the more personal data it contains. Determining how to protect this data will also be a major research area for the "Connected cars and cybersecurity" chair. To address this question, it will work in collaboration with another Télécom ParisTech chair, dedicated to "Values and policies of personal information", which also brings together Télécom SudParis and Télécom École de Management. This collaboration will make it possible to explore the legal and social aspects of exchanging personal data between connected cars and their environment. [/box]