
24 words for understanding cybersecurity

Starting with Algorithm and ending with Virus, this list features terms like Phishing and Firewall… As the symposium entitled “Are we entering a new era of cybersecurity?” gets underway at IMT, here are 24 words to help you understand the concepts, technologies and systems used to protect people, equipment and organizations from cyberattacks. This glossary was compiled with the help of Hervé Debar, a researcher at Télécom SudParis, an expert in cybersecurity and co-organizer of the symposium.

 

Algorithm  A sequence of instructions intended to produce a result (output data) by means of a calculation applied to input data.

Critical infrastructures  Infrastructures for which a cyberattack could have very serious consequences for the services provided, even to the point of putting lives at risk.

Cryptography  The science of secrets. Cryptography provides algorithms that make data unreadable to anyone who does not hold the secret. It also makes it possible to sign digital documents.

Cyberattack  A sequence of actions that leads to a violation of the security policy. This violation often takes the form of a computer system or network malfunction (inability to connect, a service that is no longer available, or data being encrypted using ransomware). A cyberattack can also be invisible, yet lead to serious consequences, such as the theft of confidential information.

Cyber defense  A country’s means of attacking and defending its computer systems and networks.

Cyber range  A training platform for cyberattacks and defense.

Denial of Service Attack (see Distributed Denial of Service Attack)

Distributed Denial of Service Attack (DDoS attack)  An attack aimed at overloading a service provider’s resources (often related to the network), making the service inaccessible.

Electromagnetic injection An electromagnetic signal sent to disrupt the operation of an electronic component (processor, memory, chip card…).

Firewall A network component that filters traffic entering and leaving a site.

Flaw A (software) flaw is a programming error that allows a hacker to use a program for a purpose other than the one intended. The most prevalent example is SQL injection, in which hackers use a website’s interface to gain control of databases they could not normally access.

Google Project Zero  A Google project aimed at finding new vulnerabilities in software.

Hacking Computer data theft.

Intrusion Unauthorized connection to a system.

Krack (Key Reinstallation Attacks) Attacks against the WPA2 protocol that allow an attacker to force the reuse of an encryption key. This allows the attacker to collect a large number of packets, and therefore decrypt the network traffic more easily, without knowing the key.

Malicious software (see Malware)

Malware  A program used for a purpose that is inconsistent with the user’s expectations and violates the security policy. Malware often uses vulnerabilities to enter a system.

National Vulnerability Database A project of the National Institute of Standards and Technology (NIST) that identifies and analyzes software flaws.

Phishing A social engineering technique, in which an attacker convinces a victim to act without understanding the consequences. The technique often relies on emails with fraudulent content (e.g. CEO fraud scams).

Ransomware Malicious software (malware) aimed at extorting money from a victim, often by encrypting the data on their computer’s hard disk and demanding payment in exchange for the decryption key (these keys often do not work, so paying is pointless).

Resilience (by design) or cyber-resilience A system’s ability to keep functioning in the event of an attack, that is, to provide a service to its users under any conditions, albeit possibly at a reduced level.

Security Information and Event Management (SIEM)  A platform for uploading and processing alerts that allows operators to monitor their systems’ security and react in the event of an attack.

Trojan Horse  A backdoor installed on a system without the users’ and administrators’ knowledge, which allows a hacker to regularly and easily connect to the system without being seen.

Virus  Malicious software capable of entering a system and spreading to infect other systems.


What are the applications for spatial data?

Several terabytes: this is the phenomenal amount of data produced by the Sentinel satellites each day! How can these data flows be turned into concrete applications for those who manage territories? This is what spatial application experts focused on at the AppSpace Forum, an event organized by the CNES, GIS BreTel, Booster Morespace and Institut InSpace from October 17 to 19, 2017.

 

Copernicus, a program run by the European Space Agency and the European Union, has launched the Sentinel satellites – 1A, 1B, 2A, 2B and 3A, each equipped with different sensors for taking a variety of measurements. The goal is to provide European users, and more specifically researchers, with comprehensive, free observational data of the entire Earth: oceans, land, vegetation, coastal areas, radiometry, temperature, altimetry, etc. But can spatial data be used to develop concrete applications?

The question of how to apply data from the Copernicus program was at the forefront of the AppSpace event, co-organized by GIS BreTel. For the first time, this initiative brought together all the actors in the region of Brittany, as well as spatial application professionals from throughout France and Europe, to participate in round table discussions, themed workshops and an exhibition space for companies and laboratories. The organizers of AppSpace intend it to become a reference event taken up by other regions of France and Europe. The goal is to obtain a clear, overall vision of the regional, national and even European ecosystem of spatial applications.

 

Encourage end users to take ownership of the data

“The takeaway from this event is that, generally speaking, the world of research is quite good at taking ownership of spatial data,” explains Nicolas Bellec, operational director of GIS BreTel. “However, some field specialists, such as biologists or ecologists, sometimes have difficulty using these data in their research, and call on other labs specialized in the field of space.”

Beyond the world of research, data from the Copernicus program were designed above all to help territorial authorities and regional and State services meet their own needs. Yet these actors, the intended end users, make little use of these data. “But the resolutions of the new satellite sensors are increasingly well-adapted to their needs!” states Nicolas Bellec. “The scope of applications is also very broad: maritime safety and security, land use and regional planning, monitoring of vegetation and biodiversity, adaptation to climate change, etc. At the AppSpace forum, we tried to understand why.”

By bringing together the worlds of research, companies and end users, the AppSpace event highlighted the barriers to using spatial data and sought appropriate solutions. “What we found was that territorial managers lack training and information on these subjects. Researchers, companies and users want to create applications together, to better meet the needs of territorial managers,” Nicolas Bellec explains.

The other difficulty is that spatial data can rarely be the sole solution to a concrete problem. They often must be used with other data, and in particular, field data, to find their place in applications. There are several ongoing projects which manage to incorporate spatial data into existing processes of information acquisition in the territories.

The Sésame project, created by Lab-STICC* and the Obelix and Myriads teams from IRISA, and funded by the DGA and the ANR, combines spatial data with AIS data from ships to develop applications for the monitoring and surveillance of maritime traffic.

 

Develop technologies capable of handling data flows

The goal of the Sésame project is to develop technologies capable of detecting and documenting, in real time, unusual ship behavior: illegal entries into defined areas, suspicious deviations from trajectories, illegal fishing, etc. To achieve such a result, high-resolution photographs of the water’s surface produced by Sentinel satellites need to be cross-referenced with AIS (Automatic Identification System) data emitted by ships. Each ship emits an AIS signal, which includes information on the ship itself, its route and its position, updated roughly every minute. The challenge of the project is to process these extremely high data flows: on top of the terabytes of data from the Sentinel satellites, tens of millions of AIS messages are produced each day.
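
As a rough illustration of the rule-based end of such a processing chain, here is a minimal Python sketch that flags AIS reports against a forbidden zone and a speed limit; the field names, zone and thresholds are invented for this example and are not those of the Sésame project, whose detection relies on machine learning at far larger scale.

```python
from dataclasses import dataclass

@dataclass
class AisMessage:
    ship_id: str
    lat: float          # decimal degrees
    lon: float          # decimal degrees
    speed_knots: float

# Hypothetical restricted area (a simple bounding box) and speed threshold
ZONE = {"lat_min": 48.0, "lat_max": 48.5, "lon_min": -5.5, "lon_max": -5.0}
MAX_SPEED = 30.0

def flag_anomalies(messages):
    """Yield (message, reason) pairs for reports that look unusual."""
    for m in messages:
        in_zone = (ZONE["lat_min"] <= m.lat <= ZONE["lat_max"]
                   and ZONE["lon_min"] <= m.lon <= ZONE["lon_max"])
        if in_zone:
            yield m, "entered restricted zone"
        if m.speed_knots > MAX_SPEED:
            yield m, "implausible speed"

stream = [AisMessage("A123", 48.2, -5.2, 12.0),
          AisMessage("B456", 47.0, -4.0, 45.0)]
for msg, reason in flag_anomalies(stream):
    print(msg.ship_id, reason)
```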

“CLS, our industrial partner, is a solutions operator for monitoring maritime traffic using satellite data. The current data processing chains will need to be reviewed in order to cope with the scale of the flows currently being produced,” explains Ronan Fablet, professor and researcher at the Lab-STICC laboratory at IMT Atlantique and coordinator of the Sésame project. “The company is embarking on research and development processes to use Big Data and machine learning technology in monitoring maritime activity. The Sésame project is an integral part of this process.” With a consortium of teams specialized in Big Data, machine learning and remote sensing, the goal of Sésame is to manage these data flows by developing suitable hardware and software infrastructure, and to develop machine learning techniques for detecting ships and unusual behavior in satellite images.

These technological developments are intended to be used first of all by CLS, then made available to operators such as EMSA, the agency in charge of the surveillance of European maritime areas. “Overall, the end users targeted by the project are the institutions in charge of maritime surveillance for regions, states or groups of states,” specifies Ronan Fablet.

Finally, as well as offering solutions to concrete issues of maritime traffic surveillance, the technologies developed by the Sésame project will pave the way for the use of already existing databases, by associating them with other types of satellite imagery. With the development of adapted infrastructure and Big Data technology, the gigantic data flows produced by Sentinel satellites will also be channeled, processed and interpreted, to serve the development of many other applications designed for end users.

* Members of Lab-STICC: IMT Atlantique, UBO, UBS, CNRS, ENIB and ENSTA Bretagne


 

When Science Fiction Helps Popularize Science – An Interview with Roland Lehoucq

What is energy? What is power? Roland Lehoucq, an astrophysicist at CEA and professor at École Polytechnique and Sciences Po, uses science fiction to help explain scientific principles to the general public. Star Wars, Interstellar, The Martian… These well-known, popular movies can become springboards for making science accessible to everyone. During his conference on “Energy, Science and Fiction” on December 7th at IMT Mines Albi, Roland Lehoucq explained his approach to popularizing science.

 

What approach are you taking for this conference on “Energy, Science and Fiction”? How do you use science fiction to study science?

The goal of this conference is to use science fiction as a springboard for talking to the general public about science. I chose the cross-cutting theme of energy and used several science fiction books and movies to talk about the topic. This drives us to ask questions about the world we live in: what prevents us from doing the things we see in science fiction? This question serves as a starting point for looking at scientific facts: explaining what energy and power are, providing some of the properties and orders of magnitude, etc. In general, the fictional situations involve levels of energy and power that are so significant that, for now, they are beyond our reach.  Humanity does have a great deal of energy within its grasp, which is why it has been able to radically transform planet Earth. But will this abundance of energy last? Will we someday reach the levels we see in science fiction? I’m not so sure!

My approach is actually the same as that of science fiction. It dramatizes scientific and technical progress and is designed to make us think about the consequences of these developments. This can apply to energy, genetics, artificial intelligence, robots, etc. It involves questioning reality, but it has no qualms about distorting the facts to make a more appealing story. Works of fiction pay no attention to significant scientific facts, choosing to happily ignore certain physical laws, yet this is not truly a problem. It does not affect the works’ narrative quality, nor does it change the questions they raise!

Does this type of approach allow you to reach a wider audience? Do you see this at your speaking events?

I don’t know if I am reaching a wider audience, but I do see that those in the audience, both young and old, are delighted to talk about these subjects. I use some of the best-known films, although they are not necessarily the most interesting ones from a scientific point of view. While Star Wars does not feature a lot of high-level thinking, it is nevertheless full of content, including energy, which can be analyzed scientifically. For example, we can estimate the Jedis’ power in terms of watts and rank them. My approach is then to say: let’s imagine this really exists, let’s look at the information we can draw from the film and, in return, what we can learn about our world. Young people respond positively since I use things that are part of their culture. But it works well with other generations too!

What led you to share scientific culture using science fiction as the starting point?

I have loved science since I was 6 years old. I started reading science fiction when I was 13. Then I taught science as a group leader at astronomy camps from the age of 17 to 23. I have always enjoyed learning things and then talking about the aspects I find the most interesting, amazing and wonderful! It comes naturally to me!

Then, in the early 2000s, I decided I wanted to share my knowledge on a larger scale, through books and articles. I quickly got the idea of using fictional literature, comic strips and cinema as a way of sharing knowledge. Especially since no one was doing it then! If you want to talk about astrophysics, for example, you have people like Hubert Reeves, Michel Cassé, Marc Lachièze-Rey and Jean-Pierre Luminet making this knowledge accessible. I did not want to repeat what they were already doing so well. I wanted to break away and do something different, adapted to my tastes!

What advice would you give to researchers on improving how they share scientific culture?

Sharing scientific knowledge is not intuitive for researchers, because it essentially involves making the difficult choice of saying only what is most useful for the general public in a limited amount of time. Researchers often focus their life’s work, intelligence and efforts on a very limited topic. Of course, they will want to talk about this area of expertise. But to understand the reasons that led the researcher to work in this area, the audience first needs certain prerequisites. And if these prerequisites are not provided, or are incomplete, the audience cannot understand the interest of the subject and the issues being discussed. It is therefore necessary to take the time to explain what researchers see as general information. For one hour of a conference, this may mean spending forty-five minutes presenting the prerequisites and fifteen minutes explaining the field of research. This requires choosing to serve the field, to take a back seat and avoid the “specialist syndrome”, which involves talking only about what the specialist sees as important: their 10 or 15 years of research. This is a legitimate approach, but by doing this researchers risk losing their audience!

They must also try to make science “friendly”. Science is often seen as something complicated, which requires great effort to be understood. As is often the case, a lot of work is needed to understand the subtleties of these subjects. Our job therefore consists of facilitating access to these areas, and the methods chosen will depend on each individual’s interests. Finally, we must show the general public that science is not an accumulation of knowledge, but an intellectual process, a methodology. We can therefore study science as an educational exercise, using things that are not purely scientific, such as science fiction!

[box type=”shadow” align=”” class=”” width=””]

Roland Lehoucq

Associate Professor of Physics and former student of the ENS, Roland Lehoucq is an astrophysicist at the CEA center at Paris-Saclay, and teaches at École Polytechnique and the Institut d’Études Politiques de Paris. He has written numerous popular science books that use science fiction as their starting point, such as La SF sous les feux de la science and Faire de la science avec Star Wars. He recently wrote a book on the dark ideas of physics, Les idées noires de la physique, published by Les Belles Lettres, in collaboration with Vincent Bontems, a philosopher of science, and illustrated by Scott Pennors. Black holes, dark matter, dark energy… This book looks at all these subjects through the eyes of an astrophysicist and a philosopher.

 

[/box]


What is a smart grid?

The driving force behind the modernization of the electrical network, the smart grid is full of promise. It will mean savings for consumers and energy companies alike. In terms of the environment, it provides a solution for developing renewable energies. Hossam Afifi, a researcher in networks at Télécom SudParis, gives us a behind-the-scenes look at the smart grid.

 

What is the purpose of a smart grid?

Hossam Afifi: The idea behind a smart grid is to create savings by using a more intelligent electric network. The final objective is to avoid wasting energy, ensuring that each watt produced is used. We must first understand that today, the network is often run by electro-mechanical equipment that dates back to the 1960s. For the sake of simplicity, we will say it is controlled by engineers who use switches to remotely turn on or off the means of production and supply neighborhoods with energy. With the smart grid, all these tasks will be computerized. This is done in two steps. First, by introducing a measuring capacity using connected sensors and the Internet of Things. The control aspect is then added through machine learning to intelligently run the networks based on the data obtained via sensors, without any human intervention.

 

Can you give us some concrete examples of what the smart grid can do?

HA: One concrete example is the reduction of energy bills for cities, municipal authorities (and hence local taxes) and major infrastructures. A more noticeable example is the architectural projects for buildings that feature both offices and housing, which are aimed at evening out the amount of power consumed over the course of the day and limiting excessive peaks in energy consumption during high-demand hours. The smart grid rethinks the way cities are designed. For example, business areas are not at all advantageous for energy suppliers. They require a lot of energy over short periods of time, especially between 5pm and 7pm. This requires generators to be used to ensure the same quality of service during these peak hours. And having to turn them on and off represents costs. The ideal solution would be to even out the use of energy, making it easier to optimize the service provided. This is how the smart grid dovetails with smart city issues.

 

The smart grid is also presented as a solution to environmental problems. How are the two related?

HA: There is something very important we must understand: energy is difficult to store. This is one of the limits we face in the deployment of renewable energies, since solar, wind and marine energy sometimes produce electricity at times when we don’t need it. However, a network that can intelligently manage energy production and distribution is beneficial for renewable energies. For example, electric car batteries can be used to store the energy produced by renewable sources. During peaks in consumption, users can choose to disconnect from the conventional network, use the energy stored by their car in the garage, and receive financial compensation from their supplier. This is only possible with an intelligent network that can adapt supply in real time, based on large amounts of production and consumption data.
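
As a toy illustration of this kind of real-time decision, here is a simple control rule in Python; the prices and thresholds are invented, and a real smart grid would learn such policies from production and consumption data rather than hard-code them.

```python
# Naive battery dispatch rule for the vehicle-battery scenario above:
# discharge to the grid during price peaks, recharge off-peak.

PEAK_PRICE = 0.25       # €/kWh above which discharging pays off (assumed)
MIN_RESERVE_KWH = 10.0  # keep enough charge to drive the next day (assumed)

def grid_action(price_eur_per_kwh, battery_kwh):
    """Return what the controller asks the home battery to do."""
    if price_eur_per_kwh >= PEAK_PRICE and battery_kwh > MIN_RESERVE_KWH:
        return "discharge"   # sell stored (e.g. solar) energy during the peak
    if price_eur_per_kwh < PEAK_PRICE / 2:
        return "charge"      # store cheap off-peak energy
    return "idle"

print(grid_action(0.30, 25.0))  # -> discharge
print(grid_action(0.08, 25.0))  # -> charge
```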

 

How important is data in the deployment of smart grids?

HA: It is one of the most important aspects, of course. All of the network’s intelligence relies on data; it is what feeds the machine learning algorithms. This aspect alone requires support provided by research projects. We have submitted one proposal to the Saclay federation of municipalities, for example. We propose to establish data banks to collect data on production and consumption in that area. Open data is an important aspect of smart grid development.

 

What are the barriers to smart grid deployment?

HA: One of the biggest barriers is standardization. The smart grid concept came from the United States, where the objective is entirely different: the main concern there is to interconnect state networks, which up until now were independent, in order to prevent blackouts. In Europe, we drew on this concept to complement the deployment of renewable energies and energy savings. However, we also need to interconnect with other European states. And unlike the United States, we do not have the same network standards as our German and Italian neighbors. This means we have a lot of work to do at a European level to define common data formats and protocols. We are contributing to this work through our SEAS project, led by EDF.



Eikosim improves the dialogue between prototyping and simulation

Simulate. Prototype. Measure. Repeat. Developing an industrial part inevitably involves these steps. First comes the digital model. Then, its characteristics are assessed through simulation, after which the first version of the part is built. The part must then be subjected to mechanical stress to assess its resistance, and be closely observed from every angle. The test results are used to improve the model, which produces a new prototype… and so the cycle continues until a satisfactory version is produced. But Renaud Gras and Florent Mathieu want to reduce the repetitions involved in this cycle, which is why they created Eikosim, a startup that has been incubating at ParisTech Entrepreneurs for one year. It develops software specialized in helping engineers with these design stages.

So, what is the key to saving as much time as possible? Facilitating the comparison between digital tests and physical measurements. Eikosim meets this need by integrating the measurement results recorded during testing directly into the part’s digital model. Any deformation, cracking or change in mechanical properties is thus recorded in the digital version of the object. Engineers can then easily compare the changes measured during the tests with those predicted by simulation, and automatically correct the simulation so that it better reflects reality. What the startup offers is a breakthrough solution, since the traditional alternative involves storing the real measurements in a data table and creating algorithms to manually readjust the simulated part, a tedious and time-consuming process.
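
The recalibration idea can be sketched in a few lines: adjust a model parameter so that simulated values match the measured ones in a least-squares sense. The linear “simulation” below is a stand-in for a real finite-element solver, and all the numbers are invented.

```python
import numpy as np

loads = np.array([100.0, 200.0, 300.0])   # applied forces (N)
measured = np.array([0.52, 1.01, 1.55])   # measured displacements (mm)

def simulate(stiffness, loads):
    # Placeholder model: displacement = load / stiffness
    return loads / stiffness

# Closed-form least-squares estimate of the stiffness for this linear model
stiffness = np.sum(loads**2) / np.sum(loads * measured)
residual = measured - simulate(stiffness, loads)
print(f"updated stiffness: {stiffness:.1f} N/mm, "
      f"residual norm: {np.linalg.norm(residual):.3f} mm")
```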

Another strength the startup has to offer: its software can optimize the measurements of prototypes, for example by facilitating the positioning of observation cameras. One of the challenges is to ensure their actual position is well calibrated to correctly record the movements. To achieve this, the cameras are usually positioned using an alignment jig and arranged using a complex procedure which, again, is time-consuming. But the Eikosim software makes it possible to directly record the cameras’ positions on a digital model of the part. Since an alignment jig is no longer needed, the calibration is much faster. The technology is therefore compatible with large-scale parts, such as the chassis of trains. These dimensions are too large for technology offered by competitors, which struggles to arrange many cameras around such enormous parts.

The startup’s solutions have won over manufacturers, especially in aeronautics. The sector constantly innovates with new materials, but must also address safety constraints, so the accuracy of simulations is essential. In this industry, 20% of engineers’ time is spent comparing simulations with real tests. The powerful software developed by Eikosim therefore represents an enormous advantage in reducing development times.

The founders

[divider style=”normal” top=”20″ bottom=”20″]


Renaud Gras and Florent Mathieu founded Eikosim after completing a thesis at the ENS Paris-Saclay Laboratory of Mechanics and Technology. Equipped with their expertise in understanding the mechanical behavior of materials by instrumenting tests using imaging techniques, they now want to use their startup to pass these skills on to the manufacturing industry.

[divider style=”normal” top=”20″ bottom=”20″]


Passwords: security, vulnerability and constraints

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

What is a password?

A password is a secret linked to an identity. It associates two elements: what we own (a bank card, badge, telephone, fingerprint) and what we know (a password or code).

Passwords are very widely used, for computers, telephones and banking. The simplest form is the numerical code (PIN), with 4 to 6 digits. Our smartphones, for example, use two PIN codes: one to unlock the device, and another associated with the SIM card, to access the network. Passwords are most commonly associated with internet services (email, social networks, e-commerce, etc.).

Today, in practical terms, identity is linked to an email address. A website uses it to identify a person. The password is a secret, known by both the server and the user, making it possible to “prove” to the server that the identity provided is authentic. Since an email address is often public, knowing this address is not enough for recognizing a user. The password is used as a lock on this identity. Therefore, passwords are stored on the websites we log in to.

What is the risk associated with this password?

The main risk is password theft, in which the associated identity is stolen. A password must be kept hidden, so that it remains secret, preventing identity theft when incidents arise, such as the theft of Yahoo usernames.

Therefore, a website doesn’t (or shouldn’t) save passwords directly. It uses a hash function to calculate a fingerprint, such as the bcrypt function Facebook uses. With the password, it is very easy to calculate the fingerprint and verify that it is correct. On the other hand, it is mathematically very difficult to recover the password when only the fingerprint is known.
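
As an illustration of this mechanism, here is a minimal sketch using the bcrypt package for Python (this is not Facebook’s actual code): the server stores only the salted fingerprint and recomputes it at each login.

```python
# pip install bcrypt
import bcrypt

password = b"correct horse battery staple"
stored = bcrypt.hashpw(password, bcrypt.gensalt())  # salted, deliberately slow hash

# Verification recomputes the fingerprint and compares it
print(bcrypt.checkpw(b"correct horse battery staple", stored))  # True
print(bcrypt.checkpw(b"wrong guess", stored))                   # False
```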

Searching for a password from its fingerprint

Unfortunately, technological progress has made brute-force password search tools, like “John the Ripper”, extremely effective. As a result, an attacker who obtains the fingerprints can recover the corresponding passwords fairly easily.
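
The principle behind such tools fits in a few lines: enumerate candidates, hash each one, and compare with the leaked fingerprint. The sketch below uses SHA-256 over 4-digit PINs for simplicity; real crackers such as John the Ripper handle many hash formats and can test enormous numbers of candidates per second.

```python
import hashlib

# Why short secrets fall quickly: exhaustive search over all 10,000 PINs.
target = hashlib.sha256(b"4711").hexdigest()  # fingerprint leaked in a breach

for pin in range(10000):
    candidate = f"{pin:04d}".encode()
    if hashlib.sha256(candidate).hexdigest() == target:
        print("recovered:", candidate.decode())  # -> 4711
        break
```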

An attacker can also capture passwords by tricking the user. Social engineering (phishing) leads users to connect to a website that imitates the one they intended to visit, thus allowing the attacker to steal their login information (email and password).

Many services (social networks, shops, banks) require user identification and authentication. It is important to be sure we are connecting to the right website, and that the connection is encrypted (the lock and green color in the browser address bar), to prevent these passwords from being compromised.

Can we protect ourselves, and how?

For a long time, the main risk involved sharing computers. Writing your password on a post-it note on the desk was therefore prohibited. Today, in a lot of environments, this is a pragmatic and effective way of keeping the secret.

The main risk today stems from the fact that an email address is associated with all of our passwords. This universal username is extremely sensitive, and naturally it is a target for hackers. It is therefore important to identify all the means an email service provider offers to protect this address and connection. These mechanisms can include a code sent by SMS to a mobile phone, a recovery email address, pre-printed one-time-use codes, etc. They control access to your email address by alerting you to attempts to compromise your account, and help you regain access if you lose your password.

For personal use

Another danger involves passwords being reused on several websites. Attacks on websites are very common, and levels of protection vary greatly. Reusing one password on several websites therefore very significantly increases the risk of it being compromised. Currently, the best practice is therefore to use a password manager, or digital safe (like KeePass or Password Safe, both free and open-source software), to save a different password for each website.

The automatic password generation function offered by these managers provides passwords that are more difficult to guess. This greatly simplifies what users need to remember and significantly improves security.
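
A minimal sketch of what such a generator does, using Python’s standard secrets module (a cryptographically secure source, unlike the random module):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Pick each character with a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different on every run, e.g. 'k@2Vz#...'
```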

It is also good to keep the database on a flash drive, and to back it up frequently. There are also cloud-based password management solutions. Personally, I do not use them, because I want to maintain control of the technology, even if that prevents me, for example, from using a smartphone in certain environments.

For professionals

Changing passwords frequently is often mandatory in the professional world. It is often seen as a constraint, which is amplified by the required length, variety of characters, the impossibility of using old passwords, etc. Experience has shown that too many constraints lead users to choose passwords that are less secure.

It is recommended to use an authentication token (chip card, USB token, OTP, etc.). At a limited cost, this offers a significant level of security and additional services such as remote access, email and document signature, and protection for the intranet service.

Important reminders to avoid password theft or limit its impact

Passwords, associated with email addresses, are a critical element in the use of internet services. Currently, the two key precautions for safe use are to have one password per service (if possible generated randomly and kept in a digital safe) and to secure sensitive services such as email accounts (by using the protective measures these services provide, including two-factor authentication via SMS or recovery codes, and remaining vigilant if any abnormality is detected). You can find more recommendations on the ANSSI website.

Hervé Debar, Head of the Telecommunications Networks and Services department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.



Our expressions under the algorithmic microscope

Mohamed Daoudi, a researcher at IMT Lille Douai, is interested in the recognition of facial expressions in videos. His work is based on geometrical analysis of the face and machine learning algorithms, and may pave the way for applications in the field of medicine.

 

Anger, sadness, happiness, surprise, fear, disgust. Six emotions which are expressed in humans by universal facial expressions, regardless of our culture. This was demonstrated in Paul Ekman’s work, published in the 1960s and 1970s. Fifty years on, scientists are using these results to automate the recognition of facial expressions in videos, using shape-analysis algorithms. This is what Mohamed Daoudi, a researcher at IMT Lille Douai, is doing using computer vision.

“We are developing digital tools which allow us to place characteristic points on the image of a face: at the corners of the lips, around the eyes, the nose, etc.,” Mohamed Daoudi explains. This operation is carried out automatically for each image of a video. Once this step is finished, the researcher has a dynamic model of the face in the form of points which change over time. The movements of these points, as well as their relative positions, give indications of the facial expressions. As each expression is characteristic, the way in which these points move over time corresponds to an expression.

The models created using points on the face are then processed by machine learning tools. “We train our algorithms on databases which allow them to learn the dynamics of the characteristic points of happiness or fear” Mohamed Daoudi explains. By comparing new measurements of faces with this database, the algorithm can classify a new video analysis of an expression into one of six categories.
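
Schematically, the pipeline can be sketched as follows; the landmark sequences and labels are random placeholders rather than a real expression database, and the nearest-neighbor classifier stands in for the researchers’ more sophisticated models.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N_FRAMES, N_LANDMARKS = 30, 68   # a short clip, 68 tracked facial points

def features(sequence):
    """Flatten per-frame landmark displacements into one feature vector."""
    motion = np.diff(sequence, axis=0)   # movement between consecutive frames
    return motion.reshape(-1)

# Fake training set: 60 clips, each labelled with one of six expressions
X = np.stack([features(rng.normal(size=(N_FRAMES, N_LANDMARKS, 2)))
              for _ in range(60)])
y = rng.integers(0, 6, size=60)          # 0=anger ... 5=disgust (placeholder)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_clip = rng.normal(size=(N_FRAMES, N_LANDMARKS, 2))
print("predicted expression class:", clf.predict([features(new_clip)])[0])
```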

This type of work is of interest to several industrial sectors, for instance for observing customer satisfaction when purchasing a product, an application the FUI Magnum project has taken an interest in. By observing a customer’s face, we could detect whether or not their experience was an enjoyable one. In this case, it is not necessarily about recognizing a precise expression, but rather about describing the customer’s state as either positive or negative, and to what extent. “Sometimes this is largely sufficient; we do not need to determine whether the person is sad or happy in this type of situation,” highlights Mohamed Daoudi.

The IMT Lille Douai researcher highlights the advantages of such a technology in the medical field, for example: “in psychiatry, practitioners look at expressions to get an indication of the psychological state of a patient, particularly for depression.” By using a camera and a computer or smartphone to help analyze these facial expressions, the psychiatrist can make an objective evaluation of the effects of the medication administered to the patient. A rigorous study of the changes in a patient’s face may also help to detect pain in those who have difficulty expressing it. This is the goal of work by PhD student Taleb Alashkar, whose thesis is funded by IMT’s Futur & Ruptures (future and disruptive innovation) program and supervised by Mohamed Daoudi and Boulbaba Ben Amor. “We have created an algorithm that can detect pain using 3D facial sequences,” explains Mohamed Daoudi.

The researcher is careful not to present his research as emotional analysis. “We are working with recognition of facial expressions. Emotions are a step above this” he states. Although an expression relating to joy can be detected, we cannot conclude that the person is happy. For this to be possible, the algorithms would need to be able to say with certainty that the expression is not faked. Mohamed Daoudi explains that this remains a work in progress. The goal is indeed to introduce emotion into our machines, which will become increasingly intelligent.

[box type=”info” align=”” class=”” width=””]

From 3D to 2D

To improve facial expression recognition in 2D videos, researchers incorporate algorithms used in 3D for detecting shape and movement. To study faces in 2D videos more easily, Mohamed Daoudi is capitalizing on the results of the ANR project Face Analyser, conducted with Centrale Lyon and university partners in China. Sometimes the changes in a face are so small that they are difficult to classify, which requires digital tools capable of amplifying them. With colleagues at Beihang University, Mohamed Daoudi’s team has managed to amplify the subtle geometrical deformations of the face to be able to classify them better.

[/box]

 


IoT: How to find your market? Footbar’s story

In the connected objects sector, the path to industrialization is rarely direct. Finding a market sometimes requires adapting the product, strategic repositioning, a little luck, or a combination of all three. Footbar is a striking example of how a startup can revise its original strategy to find customers while maintaining its initial vision. Sylvain Ract, one of the founders of the startup incubated at Télécom ParisTech, takes a look back at the story of his company.

 

Can you summarize the idea you had at the start of the Footbar project?

Sylvain Ract: My business partner and I wanted to make technology accessible to the entire soccer world. Professional players have their statistics, but amateurs do not have much. The idea was to boost players’ enjoyment of the game by providing them with more information on their performance. My training in embedded systems at Télécom ParisTech was decisive in our choice to develop a connected object ourselves. This approach gave us more freedom than if we had started with an existing object, such as an activity tracker, and improved it with our own algorithms.

Where did you search for your first customers?

SR: When we started in 2015, we had a difficult time trying to sell our sensors to amateur clubs. The problem is, these organizations do not have much money. Outside of the professional level, clubs barely have the resources to purchase players’ jerseys and pay travel expenses. Another approach was to consider that players provide some of their own equipment, and to target them directly as individuals. But mass-producing millions of sensors was too costly for a startup like ours.

How did you find your market?

SR: A little by chance. When we were just getting started we conducted a crowdfunding campaign. It was not successful because amateur players’ interest did not convert into financial contributions. This made us realize that the retail market was still immature. On the other hand, this campaign helped spread the word about our project. Later, the Foot à 5 Soccer Park network contacted us expressing interest in our sensors. The players who attend their centers are already used to an improved game experience since the matches are filmed. They were interested in going even further.

How did this meeting change things for you?

SR: The fact that Soccer Park films the players’ matches is a huge plus for us. This allowed us to create an enormous annotated database. We can also visually follow players who wear our device in their shin guards and clearly connect the facts observed during the game with the data from our devices’ accelerometers. We were therefore able to greatly improve our artificial intelligence algorithms. From a business perspective, we were able to expand our network to include other Foot à 5 centers in France and abroad, which gave us new perspectives.
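
A toy version of this event detection, with an invented sampling rate and threshold, gives an idea of what the accelerometer side of the problem looks like; Footbar’s real models are trained on the annotated video database described above.

```python
import numpy as np

SAMPLE_HZ = 100
KICK_THRESHOLD_G = 8.0   # assumed spike level for a ball strike

def detect_kicks(accel_xyz):
    """accel_xyz: (n_samples, 3) array in g. Returns event timestamps (s)."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    spikes = np.flatnonzero(magnitude > KICK_THRESHOLD_G)
    return spikes / SAMPLE_HZ

signal = np.random.default_rng(1).normal(0, 1, size=(500, 3))
signal[230] = [9.0, 3.0, 2.0]   # inject one synthetic kick at t = 2.3 s
print(detect_kicks(signal))      # -> [2.3]
```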

What are your thoughts on this change of direction?

SR: Strangely enough, today we feel we are very much in line with our initial idea. Over the years we have changed our approach several times, whether from doubts or difficulties, but in the end, our current positioning is consistent with the idea of providing amateurs with this technology. We have a product that exists, customers who appreciate it and use it for enjoyment. What we are interested in is being involved in using digital technology to redefine how sports are experienced, in this case soccer. In the long-term, artificial intelligence will likely become increasingly prevalent in the competitive aspect, but the professional environment is not as big a market as one might think. Helping amateurs change the way they play is a challenge better suited to our startup.

 


Coming soon: “smart” cameras and citizens improving urban safety

Flavien Bazenet, Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Institut Mines-Telecom Business School (IMT)

This article was written based on the research Augustin de la Ferrière carried out during his “Grande École” training at Institut Mines-Telecom Business School (IMT).

[divider style=”normal” top=”20″ bottom=”20″]

“Safe cities”: seen by some as increasing the security and resilience of cities, seen by others as an instance of ICTs (Information and Communication Technologies) being used in the move towards a society of control. The term has sparked much debate. Still, through balanced policies, the “safe city” could become part of a comprehensive “smart city” approach. Citizen crowdsourcing (security by citizens) and video analytics—“situational analysis that involves identifying events, attributes or behavior patterns to improve the coordination of resources and reduce investigation time” (source: IBM)—can help protect privacy while keeping costs and performance under control.

 

Safe cities and video protection

A “safe city” refers to the use of NICT (New Information and Communication Technologies) for urban security purposes. In reality, however, the term is primarily a marketing concept that major systems integrators in the security sector have used to promote their video protection systems.

First appearing in the United Kingdom in the mid-1980s, urban cameras have gradually become commonplace. While their use is sometimes a subject of debate, in general they are well accepted by citizens, although this acceptance varies based on each country’s risk culture and approach to security matters. Today, nearly 250 million video protection cameras are in use throughout the world; on an international scale, this translates to one camera for every 30 inhabitants. But the effectiveness of these cameras is often called into question. It is therefore necessary to take a closer look at their role and actual effectiveness.

According to several French reports—in particular the “Report on the effectiveness of video protection by the French Ministry of the Interior, Overseas France and Territorial Communities” (2010) and ”Public policies on video protection: a look at the results” by INHESJ (2015)—the systems appear to be effective primarily in deterring minor criminal offences, reducing urban decay and improving interdepartmental cooperation in investigations.

 

The effectiveness of video protection limited by technical constraints

On the other hand, video protection has proven completely ineffective in preventing serious offences. The cameras appear only to be effective in confined spaces, and could even have a “publicity effect” for terrorist attacks. These characteristics have been confirmed by analysts in the sector, and are regularly emphasized by Tanguy Le Goff and Eric Heilmann, researchers and experts on this topic.

They also point out that our expectations for these systems are too high, and stress that the technical constraints are too significant, in addition to the excessive installation and maintenance costs.

To better explain the deficiencies of this kind of system, we must understand that in a remotely monitored city, a camera constantly films the city streets. It is connected to an urban monitoring center, where the signal is transmitted to several screens. The images are then interpreted by one or more operators. But no human can legitimately be expected to remain concentrated on a multitude of screens for hours at a time, especially when the operator-to-screen ratio is often extremely disproportionate. In France, the ratio sometimes reaches one operator to one hundred screens! This is why the typical video protection system’s capacity for prevention is virtually nonexistent.

Technical experts suggest that the real hope for video protection lies in forensic science—the ability to provide evidence—but even this is undermined by obvious technical constraints.

In a “typical” video protection system, the volume of data recorded by each camera is quite significant. According to an estimate by one manufacturer (Axis Communications), with a camera capable of recording 24 images per second, the generated data ranges from 0.74 GB/hour to 5 GB/hour depending on the encoding and the chosen resolution. The servers are therefore quickly saturated, since current storage capacities are limited.

With an average cost of approximately 50 euros per terabyte, local authorities and town halls find it difficult to afford datacenters capable of saving video recordings for a sufficient length of time. In France, the CNIL authorizes up to 30 days of saved video recordings but, in reality, recordings are rarely kept for more than 7 consecutive days, and according to some experts often no more than 48 hours. This undermines the main argument used in favor of video protection: the ability to provide evidence.
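
A back-of-the-envelope calculation with the figures above shows the scale of the problem (the number of cameras is an assumption for illustration):

```python
# Storage needed to keep 30 days of footage, at the high end of 5 GB/hour.
gb_per_hour = 5
days = 30
cameras = 100                    # a mid-sized municipal deployment (assumed)
cost_per_tb_eur = 50

tb_per_camera = gb_per_hour * 24 * days / 1000   # -> 3.6 TB per camera
total_tb = tb_per_camera * cameras               # -> 360 TB
print(f"{tb_per_camera:.1f} TB/camera, {total_tb:.0f} TB total, "
      f"~{total_tb * cost_per_tb_eur:,.0f} € in raw disk alone")
```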

 

A move towards new smart video protection systems?

The only viable alternative to the “traditional” video protection system is that of “smart” video protection using video analytics or “VSI”: technology that uses algorithms and pixel analysis.

Since these cameras are generally supported by citizens, they must become more efficient and must not lead to a waste of financial and human resources. “Smart” cameras offer two possibilities: biometric identification and situational analysis. These two components should enable automatic alarms to be triggered for operators so that they can take action, which would mean the cameras would truly be used for prevention.
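
The primitive at the core of situational analysis can be reduced to a simple idea: compare successive frames and alert (or record) only when enough changes. The sketch below uses synthetic grayscale frames and an invented threshold; real video analytics adds object and behavior recognition on top of this.

```python
import numpy as np

ALERT_FRACTION = 0.05   # assumed: alert if more than 5% of pixels changed

def motion_alert(prev_frame, frame, diff_threshold=25):
    """Return True when enough pixels differ between two grayscale frames."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_threshold
    return changed.mean() > ALERT_FRACTION

rng = np.random.default_rng(2)
quiet = rng.integers(0, 10, size=(240, 320), dtype=np.uint8)
busy = quiet.copy()
busy[50:150, 100:200] += 100     # a large object enters the scene

print(motion_alert(quiet, quiet.copy()))  # False: nothing to record
print(motion_alert(quiet, busy))          # True: the operator is alerted
```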

A massive installation of biometric identification is currently nearly impossible in France, since the CNIL is committed to the principles of purpose and proportionality: it is illegal to associate recorded data featuring citizens’ faces without first establishing a precise purpose for the use of this data. The Senate is currently studying this issue.

 

Smart video protection, safeguarding identity and personal data?

On the other hand, situational analysis offers an alternative that can tap into the full potential of video protection cameras. Through the analysis of situations, objects and behavior, real-time alerts are sent to video protection operators, a feature that restores hope in the system’s prevention capacity. This is in fact the logic behind the very controversial European surveillance project INDECT: limit the recording of video, to focus only on pertinent information and automated alerts. This technology therefore makes it possible to opt for selective video recording, or even do away with it altogether.

“Always being watched”… Here, in Bucharest (Romania), end of 2016. J. Stimp/Flickr, CC BY

VSI with situational analysis could offer some benefits for society, in terms of the effective security measures and the cost of deployment for taxpayers. VSI requires fewer operators than video protection, fewer cameras and fewer costly storage spaces. Referring to the common definition of a “smart city”—realistic interpretation of events, optimization of technical resources, more adaptive and resilient cities—this video protection approach would put “Safe Cities” at the heart of the smart city approach.

Nevertheless, several risks of abuse and potential errors exist, such as unwarranted alerts being generated, and they raise questions about the implementation of such measures.

 

Citizen crowdsourcing and bottom-up security approaches

The second characteristic of a “smart and safe city” must take people into account: citizen users, the city’s driving force. Security crowdsourcing is a phenomenon that finds its applications in our hyperconnected world through “ubiquitous” technology (smartphones, connected objects). The Boston Marathon bombing (2013), the London riots (2011), the Paris attacks (2015) and various natural catastrophes showed that citizens are not necessarily dependent on central governments, and can help ensure their own security, or at least work together with the police and rescue services.

Social networks are the main examples of this change: Twitter, and Facebook with its “Safety Check” feature. Similar applications quickly proliferated, such as Qwidam, SpotCrime, HeroPolis and MyKeeper, and are breaking into the protection sector. However, these mobile solutions are struggling to gain ground in France due to a fear of false information being spread. Yet these initiatives offer true alternatives and should be studied and even encouraged. Without responsible citizens, there can be no resilient cities.

A study from 2016 shows that citizens are likely to use these emergency measures on their smartphones, and that they would make them feel safer.

Since the “smart city” relies on citizen, adaptive and ubiquitous intelligence, it is in our mutual interest to learn from bottom-up governance methods, in which information comes directly from the ground, so that a safe city could finally become a real component of the smart city approach.

 

Conclusion

Implementing major urban security projects without considering the issues involved in video protection and citizen intelligence leads to a waste of the public sector’s human and financial resources. The use of intelligent measures and the implementation of a citizen security policy would therefore help to create a balanced urbanization policy, a policy for safe and smart cities.

[divider style=”normal” top=”20″ bottom=”20″]

Flavien Bazenet, Associate Professor of Entrepreneurship and Innovation at Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Professor in the Department of Foreign Languages and Humanities at Institut Mines-Telecom Business School (IMT)

The original version of this article (in French) was published in The Conversation.


Learning to deal with offensive environmental odors

What is an offensive environmental odor? How can it be defined, and how should its consequences be managed? This is what students will learn in the serious game “Les ECSPER à Smellville”, part of the Air Quality MOOC. This educational tool was developed at IMT Lille Douai and will be available in 2018. Players are faced with the problem of an offensive environmental odor: they must identify its source and the components causing the smell, stop the emission and assess its toxicity before a media crisis breaks out.

 

In January 2013, near Rouen, an incident in a manufacturing process at the Lubrizol factory led to a widespread emission of mercaptans, particularly foul-smelling gaseous compounds. The smell drifted throughout the Seine Valley and up to Paris, before being noticed the following day in England! A crisis ensued: the population panicked, with many people calling local emergency services, while the media latched onto the affair. However, despite the strong odor, the doses released into the atmosphere were well below the toxicity threshold. These gaseous pollutants simply caused what we refer to as an offensive environmental odor.

“There is often no predetermined link between an offensive environmental odor and toxicity… When we smell something new, we tend to compare it to similar smells. In the Lubrizol case, people smelt “gas” and associated it with a potential danger,” explains Sabine Crunaire, a researcher at IMT Lille Douai. “For most odorant compounds, the thresholds for detection by the human nose are much lower than the toxicity thresholds. Only a few compounds show a direct causal link between smell and toxicity. Hence the importance of being able to manage these situations early on, to prevent a media crisis from unfolding and causing unnecessary panic among the population.”

 

An educational game for learning how to manage offensive environmental odors

The game, “Les ECSPER à Smellville”, was inspired by the Lubrizol incident, and is part of the serious games series, Scientific Case Studies for Expertise and Research, developed at IMT Lille Douai. It is a digital educational tool which teaches players how to manage these delicate situations. It was created as a complement to the Air Quality MOOC, a scientific Bachelor’s degree level course which is open to anyone. The game is based on a situation where an offensive environmental smell appears after an industrial incident: a strong smell of gas, which the population associates with danger, causes a crisis.

The learner has a choice between two roles: Health and Safety Manager at the company responsible for the incident, or head of the Certified Association for Monitoring Air Quality (AASQA). “For learners, the goal is to bring on board the actors who are involved in this type of situation, like safety services and prefectural or ministerial services, and to understand when to inform them, with the right information. The scenario is a very realistic one, and corresponds exactly to a real case of crisis management,” explains Sabine Crunaire, who contributed to the scientific content of the game. “Playing time is limited, and the action takes place in the space of one working day. The goal is to avoid the stage which the Lubrizol incident reached, which set off an avalanche of reactions on all levels: citizens, social networks, media, State departments, associations, etc.” The idea is to put an end to the problem as quickly as possible, identify the components released and evaluate the potential consequences in the immediate and wider environment. In the second scenario, the player also has to investigate and try to find the source of the emission, with the help of witness reports from nose judges.

Nose judges are local inhabitants trained in olfactory analysis. They describe the odors they perceive using a common language, such as the Langage des Nez®, developed by Atmo Normandie. These “noses” are sensitive to the usual odors in their environment, and are capable of distinguishing the different types of bad smells they are confronted with and describing them in a consensual way. They liken the perceived odor to a “reference smell”. This information assists in the analyses for identifying the substances responsible for the odor. “For instance, according to the Langage des Nez, a “sulfur” smell corresponds to references such as hydrogen sulfide (H2S), but also ethyl mercaptan or propyl mercaptan, which are similar molecules in terms of their olfactory properties,” explains Sabine Crunaire. “Three, four, even five different references can be identified by a single nose, in a single odor! If we know the olfactory properties of the industries in a given geographical area, we can identify which one has upset the normal olfactory environment.”

 

Defining and characterizing offensive odors

But how can a smell be defined as offensive, based on the “notes” it contains and its intensity? “By definition, an offensive environmental odor is described as an individual or collective state of intolerance to a smell” explains Sabine Crunaire. Characterizing an odor as offensive therefore depends on three criteria. Firstly, the quality of the odor and the message it sends. Does the population associate it with a toxic, dangerous compound? For instance, the smell of exhaust fumes will have a negative connotation, and will therefore be more likely to be considered as an offensive environmental odor. Secondly, the social context in which the smell appears has an impact: a farm smell in a rural area will be seen as less offensive by the population than it would in central Paris. Finally, the duration, frequency, and timing of the odor may add to the negative impact. “Even a chocolate smell can be seen as offensive! If it happens in the morning from time to time, it can be quite nice, but if it is a strong smell which lasts throughout the day, it can become a problem!” Sabine Crunaire highlights.

From a regulatory point of view, prefectural and municipal orders can prevent manufacturers from creating excessive olfactory disturbances that bother people in the surrounding environment. The thresholds are described in terms of odor concentration and are expressed in European odor units (uoE/m³). The concentration of a mix of smells is conventionally defined as the dilution factor that needs to be applied to the effluent so that it is no longer perceived as a smell by 50% of a sample of the population; this is referred to as the detection threshold. “Prefectural orders generally require that factories ensure that, within a distance of several kilometers from the boundary of the factory, the concentration of the odor does not exceed 5 uoE/m³,” Sabine Crunaire explains. “It is very difficult for them to foresee whether the odors released are going to be over the limit. The nature of the compounds released, their concentration, the sensitivity of people in the surrounding area… there are many factors to take into account! There is no regulation which precisely sets a limit for the concentration of odors in the air, unlike what we have for fine particles.”

To avoid penalties, manufacturers test compounds at their source and dilute them using olfactometers, in order to determine the dilution factor at which the odor is perceived as acceptable. They use this value, together with modelling, to evaluate the impact of their odor emissions within a predetermined perimeter, and also to size the treatment systems to be installed.
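
A small worked example makes the odor-unit logic concrete; the source concentration and atmospheric dilution factor below are invented, whereas real impact studies use full dispersion models.

```python
# If an effluent must be diluted 2000 times before half of a test panel
# stops perceiving it, its concentration is 2000 uoE/m3 at the source.
source_concentration = 2000.0   # uoE/m3, from olfactometer dilution tests
dispersion_factor = 1 / 300.0   # assumed atmospheric dilution at the boundary

at_boundary = source_concentration * dispersion_factor
print(f"{at_boundary:.1f} uoE/m3 at the site boundary "
      f"({'within' if at_boundary <= 5 else 'above'} the 5 uoE/m3 limit)")
```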

“Besides penalties, the consequences of a crisis caused by an environmental disturbance are harmful to the manufacturer’s image: the Lubrizol incident is still referred to in the media, using the name of the incriminated company” says Sabine Crunaire. “And the consequences in the media probably also lead to significant direct and indirect economic consequences for the manufacturer: a decrease in the number of orders, the cost of new safety measures imposed by the State to prevent the issue happening again, etc.”

The game “Les ECSPER à Smellville” will therefore raise awareness of these issues among students and train them in managing this type of crisis and avoiding the serious consequences. While offensive environmental odors are rarely toxic, they cause disturbance, both for citizens and manufacturers.