Eikosim

Eikosim improves the dialogue between prototyping and simulation

Simulate. Prototype. Measure. Repeat. Developing an industrial part inevitably involves these steps. First comes the digital model. Then, its characteristics are assessed through simulation, after which the first version of the part is built. The part must then be subjected to mechanical stress to assess its resistance and be closely observed from every angle. The test results are used to improve the modelling, which produces a new prototype… and so the cycle continues until a satisfactory version is produced. But Renaud Gras and Florent Mathieu want to reduce the number of iterations in this cycle, which is why they created Eikosim, a startup that has been incubating at ParisTech Entrepreneurs for one year. It develops software specialized in helping engineers with these design stages.

So, what is the key to saving as much time as possible? Facilitating the comparison between the digital tests and the measurements. Eikosim meets this need by integrating the measurement results recorded during testing directly into the part’s digital model. Any deformation, cracking or change in the mechanical properties is therefore recorded in the digital version of the object. The engineers can then easily compare the changes measured during the tests with those predicted during simulation, and therefore automatically correct the simulation so that it better reflects reality. What this startup offers is a breakthrough solution, since the traditional alternative involves storing the real measurements in a data table, and creating algorithms for manually readjusting the part through simulation. A tedious and time-consuming process.

Another strength the startup has to offer: its software can optimize the measurements of prototypes, for example by facilitating the positioning of observation cameras. One of the challenges is to ensure their actual position is well calibrated to correctly record the movements. To achieve this, the cameras are usually positioned using an alignment jig and arranged using a complex procedure which, again, is time-consuming. But the Eikosim software makes it possible to directly record the cameras’ positions on a digital model of the part. Since an alignment jig is no longer needed, the calibration is much faster. The technology is therefore compatible with large-scale parts, such as the chassis of trains. These dimensions are too large for technology offered by competitors, which struggles to arrange many cameras around such enormous parts.

The startup’s solutions have won over manufacturers, especially in aeronautics. The sector constantly innovates with new materials, but must also constantly address safety constraints. The accuracy of the simulations is therefore essential. In this industry, 20% of engineers’ time is spent comparing simulations with real tests. The powerful software developed by Eikosim therefore represents an enormous advantage in reducing development times.

The founders

[divider style=”normal” top=”20″ bottom=”20″]


Renaud Gras and Florent Mathieu founded Eikosim after completing a thesis at the ENS Paris-Saclay Laboratory of Mechanics and Technology. Equipped with their expertise in understanding the mechanical behavior of materials by instrumenting tests using imaging techniques, they now want to use their startup to pass these skills on to the manufacturing industry.

[divider style=”normal” top=”20″ bottom=”20″]


Passwords: security, vulnerability and constraints

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

What is a password?

A password is a secret linked to an identity. It combines two elements: what we have (a bank card, badge, telephone, fingerprint) and what we know (a password or code).

Passwords are very widely used: for computers, telephones, banking. The simplest form is the numerical code (PIN), with 4 to 6 digits. Our smartphones, for example, use two PIN codes: one to unlock the device, and another, associated with the SIM card, to access the network. Passwords are most commonly associated with internet services (email, social networks, e-commerce, etc.).

Today, in practical terms, identity is linked to an email address. A website uses it to identify a person. The password is a secret, known by both the server and the user, making it possible to “prove” to the server that the identity provided is authentic. Since an email address is often public, knowing this address is not enough for recognizing a user. The password is used as a lock on this identity. Therefore, passwords are stored on the websites we log in to.

What is the risk associated with this password?

The main risk is password theft, and with it the theft of the associated identity. A password must be kept hidden so that it remains secret, preventing identity theft when incidents arise, such as the theft of Yahoo usernames.

Therefore, a website does not (or should not) store passwords directly. It uses a hash function to compute a fingerprint, such as the bcrypt function Facebook uses. With the password, it is very easy to compute the fingerprint and verify that it is correct. On the other hand, it is mathematically very difficult to recover the password when only the fingerprint is known.
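A minimal sketch of this store-and-verify pattern, using the Python bcrypt package; the cost factor, sample password and variable names are illustrative assumptions, not a description of any particular site’s implementation:

```python
import bcrypt

# At registration: hash the password with a per-user salt and store only the hash.
password = "correct horse battery staple".encode("utf-8")
stored_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))  # cost factor chosen arbitrarily

# At login: re-hash the submitted password and compare it with the stored fingerprint.
def login_attempt(submitted: bytes, stored: bytes) -> bool:
    # checkpw re-hashes 'submitted' with the salt embedded in 'stored'
    return bcrypt.checkpw(submitted, stored)

print(login_attempt(password, stored_hash))        # True
print(login_attempt(b"wrong guess", stored_hash))  # False
```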

Searching for a password from its fingerprint

Unfortunately, technological progress has made brute-force password search tools, like “John the Ripper”, extremely effective. As a result, an attacker can recover passwords fairly easily from their fingerprints.

The attacker can therefore capture passwords, for example by tricking the user. Social engineering (phishing) causes users to connect to a website that imitates the one they intended to connect to, thus allowing the attacker to steal their login information (email and password).

Many services (social networks, shops, banks) require user identification and authentication. It is important to be sure we are connecting to the right website, and that the connection is encrypted (the lock and green color in the browser address bar), to prevent these passwords from being compromised.

Can we protect ourselves, and how?

For a long time, the main risk involved sharing computers. Writing your password on a post-it note on the desk was therefore prohibited. Today, in a lot of environments, this is a pragmatic and effective way of keeping the secret.

The main risk today stems from the fact that an email address is associated with the passwords. This universal username is therefore extremely sensitive, and naturally a target for hackers. It is therefore important to identify all the means an email service provider offers to protect this address and the connection to it. These mechanisms can include a code sent by SMS to a mobile phone, a recovery email address, pre-printed one-time-use codes, etc. These methods control access to your email address, alert you of attempts to compromise your account, and help you regain access if you lose your password.

For personal use

Another danger involves passwords being reused across several websites. Attacks on websites are very common, and levels of protection vary greatly. Reusing one password on several websites therefore very significantly increases the risk of it being compromised. Currently, the best practice is therefore to use a password manager, or digital safe (like KeePass or Password Safe, free and open-source software), to store a different password for each website.

The automatic password generation function offered by these managers provides passwords that are more difficult to guess. This greatly simplifies what users need to remember and significantly improves security.
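As an illustration of what such a generator does, here is a minimal sketch using Python’s standard secrets module; the length and character set are arbitrary choices, not settings of any specific password manager.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One independent, hard-to-guess password per website
print(generate_password())
```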

It is also good to keep the database on a flash drive and to back it up frequently. There are also cloud-based password management solutions. Personally, I do not use them, because I want to maintain control of the technology. That could prevent me, for example, from using a smartphone in certain environments.

For professionals

Changing passwords frequently is often mandatory in the professional world. It is often seen as a constraint, which is amplified by the required length, variety of characters, the impossibility of using old passwords, etc. Experience has shown that too many constraints lead users to choose passwords that are less secure.

It is recommended to use an authentication token (chip card, USB token, OTP, etc.). At a limited cost, this offers a significant level of security and additional services such as remote access, email and document signature, and protection for the intranet service.

Important reminders to avoid password theft or limit its impact

Passwords, associated with email addresses, are a critical element in the use of internet services. Currently, the two key precautions recommended for safe use are to have one password per service (if possible generated randomly and kept in a digital safe) and to take care to secure sensitive services, such as email addresses and login information (by using the protective measures provided by these services, including two-factor authentication via SMS or recovery codes, and remaining vigilant if any abnormality is detected). You can find more recommendations on the ANSSI website.

Hervé Debar, Head of the Telecommunications Networks and Services department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.



Our expressions under the algorithmic microscope

Mohamed Daoudi, a researcher at IMT Lille Douai, is interested in recognizing facial expressions in videos. His work is based on geometric analysis of the face and machine learning algorithms, and may pave the way for applications in the field of medicine.

 

Anger, sadness, happiness, surprise, fear, disgust. Six emotions which are represented in humans by universal facial expressions, regardless of our culture. This was proven in Paul Ekman’s work, published in the 60s and 70s. Fifty years on, scientists are using these results to automate the recognition of facial expressions in videos, using algorithms for analyzing shapes. This is what Mohamed Daoudi, a researcher at IMT Lille Douai, is doing, using computer vision.

“We are developing digital tools which allow us to place characteristic points on the image of a face: at the corners of the lips, around the eyes, the nose, etc.,” Mohamed Daoudi explains. This operation is carried out automatically for each image of a video. Once this step is complete, the researcher has a dynamic model of the face in the form of points which change over time. The movements of these points, as well as their relative positions, give indications about facial expressions. As each expression is characteristic, the way in which these points move over time corresponds to an expression.

The models created using points on the face are then processed by machine learning tools. “We train our algorithms on databases which allow them to learn the dynamics of the characteristic points of happiness or fear,” Mohamed Daoudi explains. By comparing new facial measurements with this database, the algorithm can classify a new video analysis of an expression into one of the six categories.
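The pipeline described here, landmark trajectories fed to a supervised classifier, can be sketched as follows. The feature construction, the assumed 68 landmarks and the choice of an SVM are illustrative assumptions, not the laboratory’s actual algorithm:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EXPRESSIONS = ["anger", "sadness", "happiness", "surprise", "fear", "disgust"]

def trajectory_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: array of shape (frames, points, 2), x/y positions of facial
    landmarks over a clip. Returns a fixed-length descriptor built from the
    displacement of each point between consecutive frames."""
    displacements = np.diff(landmarks, axis=0)       # (frames-1, points, 2)
    return np.concatenate([
        displacements.mean(axis=0).ravel(),          # average motion per point
        displacements.std(axis=0).ravel(),           # variability of motion
    ])

# Placeholder training data so the sketch runs end to end:
# one descriptor per annotated clip, one expression label (0..5) per clip.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 4 * 68))              # 68 landmarks assumed
y_train = rng.integers(0, 6, size=60)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

new_clip = rng.normal(size=(30, 68, 2))              # 30 frames, 68 points
print(EXPRESSIONS[clf.predict([trajectory_features(new_clip)])[0]])
```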

This type of work is of interest to several industrial sectors, for instance for observing customer satisfaction when purchasing a product, an application the FUI Magnum project has taken an interest in. By observing a customer’s face, we could detect whether or not their experience was an enjoyable one. In this case, it is not necessarily about recognizing a precise expression, but more about describing their state as either positive or negative, and to what extent. “Sometimes this is largely sufficient; we do not need to determine whether the person is sad or happy in this type of situation,” highlights Mohamed Daoudi.

The IMT Lille Douai researcher highlights the advantages of such a technology in the medical field, for example: “In psychiatry, practitioners look at expressions to get an indication of the psychological state of a patient, particularly for depression.” By using a camera and a computer or smartphone to help analyze these facial expressions, the psychiatrist can make a more objective evaluation of the effect of the medication administered to the patient. A rigorous study of the changes in their face may also help to detect pain in patients who have difficulty expressing it. This is the goal of work by PhD student Taleb Alashkar, whose thesis is funded by IMT’s Futur & Ruptures (future and disruptive innovation) program and supervised by Mohamed Daoudi and Boulbaba Ben Amor. “We have created an algorithm that can detect pain using 3D facial sequences,” explains Mohamed Daoudi.

The researcher is careful not to present his research as emotional analysis. “We are working on the recognition of facial expressions. Emotions are a step above this,” he states. Although an expression relating to joy can be detected, we cannot conclude that the person is happy. For this to be possible, the algorithms would need to be able to say with certainty that the expression is not faked. Mohamed Daoudi explains that this remains a work in progress. The goal is indeed to introduce emotion into our machines, which are becoming increasingly intelligent.

[box type=”info” align=”” class=”” width=””]

From 3D to 2D

To improve facial expression recognition in 2D videos, researchers incorporate algorithms used in 3D for detecting shape and movement. To study faces in 2D videos more easily, Mohamed Daoudi is therefore capitalizing on the results of the ANR project Face Analyser, conducted with Centrale Lyon and academic partners in China. Sometimes the changes in a face are so small that they are difficult to classify, which requires digital tools capable of amplifying them. With colleagues at Beihang University, Mohamed Daoudi’s team has managed to amplify the subtle geometric deformations of the face in order to classify them better.[/box]

 


IoT: How to find your market? Footbar’s story

In the connected objects sector, the path to industrialization is rarely direct. Finding a market sometimes requires adapting the product, strategic repositioning, a little luck, or a combination of all three. Footbar is a striking example of how a startup can revise its original strategy to find customers while maintaining its initial vision. Sylvain Ract, one of the founders of the startup incubated at Télécom ParisTech, takes a look back at the story of his company.

 

Can you summarize the idea you had at the start of the Footbar project?

Sylvain Ract: My business partner and I wanted to make technology accessible to the entire soccer world. Professional players have their statistics, but amateurs do not have much. The idea was to boost players’ enjoyment of the game by providing them with more information on their performance. My training in embedded systems at Télécom ParisTech was decisive in our choice to develop a connected object ourselves. This approach gave us more freedom than if we had started with an existing object, such as an activity tracker, and improved it with our own algorithms.

Where did you search for your first customers?

SR: When we started in 2015, we had a difficult time trying to sell our sensors to amateur clubs. The problem is, these organizations do not have much money. Outside of the professional level, clubs barely have the resources to purchase players’ jerseys and pay travel expenses. Another approach was to see the players as providing some of their own equipment; we could therefore directly target them as individuals. But mass-producing millions of sensors was too costly for a startup like ours.

How did you find your market?

SR: A little by chance. When we were just getting started we conducted a crowdfunding campaign. It was not successful because amateur players’ interest did not convert into financial contributions. This made us realize that the retail market was still immature. On the other hand, this campaign helped spread the word about our project. Later, the Foot à 5 Soccer Park network contacted us expressing interest in our sensors. The players who attend their centers are already used to an improved game experience since the matches are filmed. They were interested in going even further.

How did this meeting change things for you?

SR: The fact that Soccer Park films the players’ matches is a huge plus for us. This allowed us to create an enormous annotated database. We can also visually follow players who wear our device in their shin guards and clearly connect the facts observed during the game with the data from our devices’ accelerometers. We were therefore able to greatly improve our artificial intelligence algorithms. From a business perspective, we were able to expand our network to include other Foot à 5 centers in France and abroad, which gave us new perspectives.
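As a purely illustrative sketch of how video-annotated accelerometer data can be turned into game events, consider the following threshold detector; the sampling rate, threshold and the very idea of simple peak detection are hypothetical simplifications, not Footbar’s actual model:

```python
import numpy as np

SAMPLE_RATE_HZ = 100      # assumed sensor sampling rate
STRIKE_THRESHOLD_G = 4.0  # assumed acceleration peak for a ball strike

def detect_strikes(accel_xyz: np.ndarray) -> list[float]:
    """accel_xyz: array of shape (samples, 3), acceleration in g.
    Returns timestamps (seconds) of samples whose magnitude exceeds the threshold."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    peaks = np.flatnonzero(magnitude > STRIKE_THRESHOLD_G)
    return (peaks / SAMPLE_RATE_HZ).tolist()

# Synthetic one-minute recording with two simulated strikes
signal = np.random.normal(1.0, 0.2, size=(60 * SAMPLE_RATE_HZ, 3))
signal[1500] = [5.0, 1.0, 0.5]
signal[4200] = [0.5, 6.0, 1.0]
print(detect_strikes(signal))   # approximately [15.0, 42.0]
```

In practice, the video annotations would serve as labels to train a richer model than this fixed threshold.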

What are your thoughts on this change of direction?

SR: Strangely enough, today we feel we are very much in line with our initial idea. Over the years we have changed our approach several times, whether from doubts or difficulties, but in the end, our current positioning is consistent with the idea of providing amateurs with this technology. We have a product that exists, customers who appreciate it and use it for enjoyment. What we are interested in is being involved in using digital technology to redefine how sports are experienced, in this case soccer. In the long-term, artificial intelligence will likely become increasingly prevalent in the competitive aspect, but the professional environment is not as big a market as one might think. Helping amateurs change the way they play is a challenge better suited to our startup.

 


Coming soon: “smart” cameras and citizens improving urban safety

Flavien Bazenet, Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Institut Mines-Telecom Business School (IMT)

This article was written based on the research Augustin de la Ferrière carried out during his “Grande École” training at Institut Mines-Telecom Business School (IMT).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“S[/dropcap]afe cities”: seen by some as increasing the security and resilience of cities, seen by others as an instance of ICTs (Information and Communication Technologies) being used to move towards a society of control. The term has sparked much debate. Still, through balanced policies, the “safe city” could become part of a comprehensive “smart city” approach. Citizen crowdsourcing (security by citizens) and video analytics, “situational analysis that involves identifying events, attributes or behavior patterns to improve the coordination of resources and reduce investigation time” (source: IBM), could ensure the protection of privacy while keeping costs and performance under control.

 

Safe cities and video protection

A “safe city” refers to NICT (New Information and Communication Technology) used for urban security purposes. In reality, however, the term is primarily a marketing concept that major systems integrators in the security sector have used to promote their video protection systems.

First appearing in the United Kingdom in the mid-1980s, urban cameras gradually became popularized. While their use is sometimes a subject of debate, in general they are well accepted by citizens, although this acceptance varies based on each country’s risk culture and approach to security matters. Today, nearly 250 million video protection systems are used throughout the world. On an international scale, this translates as one camera for every 30 inhabitants. But the effectiveness of these cameras is often called into question. It is therefore necessary to take a closer look at their role and actual effectiveness.

According to several French reports—in particular the “Report on the effectiveness of video protection by the French Ministry of the Interior, Overseas France and Territorial Communities” (2010) and ”Public policies on video protection: a look at the results” by INHESJ (2015)—the systems appear to be effective primarily in deterring minor criminal offences, reducing urban decay and improving interdepartmental cooperation in investigations.

 

The effectiveness of video protection limited by technical constraints

On the other hand, video protection has proven completely ineffective in preventing serious offences. The cameras appear only to be effective in confined spaces, and could even have a “publicity effect” for terrorist attacks. These characteristics have been confirmed by analysts in the sector, and are regularly emphasized by Tanguy Le Goff and Eric Heilmann, researchers and experts on this topic.

They also point out that our expectations for these systems are too high, and stress that the technical constraints are too significant, in addition to the excessive installation and maintenance costs.

To better explain the deficiencies in this kind of system, we must understand that in a remotely monitored city, a camera is constantly filming the city streets. It is connected to an urban monitoring center, where the signal is transmitted to several screens. The images are then interpreted by one or more operators. But no human can legitimately be expected to remain focused on a multitude of screens for hours at a time, especially when the operator-to-screen ratio is often extremely disproportionate. In France, the ratio sometimes reaches one operator for one hundred screens! This is why the typical video protection system’s capacity for prevention is virtually nonexistent.

The technical experts imply that the real hope for video protection through forensic science—the ability to provide evidence—is nullified by the obvious technical constraints.

In a “typical” video protection system, the volume of data recorded by each camera is quite significant. According to one manufacturer’s (Axis Communications) estimate, a camera recording 24 images per second generates between 0.74 GB and 5 GB of data per hour, depending on the encoding and resolution chosen. The servers are therefore quickly saturated, since current storage capacities are limited.

With an average cost of approximately 50 euros per terabyte, local authorities and town halls find it difficult to afford datacenters capable of saving video recordings for a sufficient length of time. In France, the CNIL authorizes video recordings to be kept for 30 days, but in reality, these recordings are rarely saved for more than 7 consecutive days. According to some experts, they are often not kept for more than 48 hours. This undermines the main argument used in favor of video protection: the ability to provide evidence.
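To make the orders of magnitude concrete, here is a back-of-the-envelope calculation using the figures quoted above; the number of cameras is an arbitrary assumption for a mid-sized town, and the raw disk price is only a lower bound on the real cost of an operational datacenter:

```python
# Rough storage budget for a municipal video protection system,
# using the figures cited in the text (0.74 to 5 GB per camera per hour,
# ~50 euros per terabyte). The number of cameras is a made-up example.
CAMERAS = 200
GB_PER_HOUR = 2.0          # somewhere between the 0.74 and 5 GB/h quoted
RETENTION_DAYS = 30        # the maximum authorized by the CNIL
EUR_PER_TB = 50

storage_tb = CAMERAS * GB_PER_HOUR * 24 * RETENTION_DAYS / 1000
print(f"Storage needed: {storage_tb:.0f} TB")              # ~288 TB
print(f"Raw disk cost:  {storage_tb * EUR_PER_TB:.0f} EUR") # ~14,400 EUR
```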

 

A move towards new smart video protection systems?

The only viable alternative to the “traditional” video protection system is that of “smart” video protection using video analytics or “VSI”: technology that uses algorithms and pixel analysis.

Since these cameras are generally supported by citizens, they must become more efficient, and not lead to a waste of financial and human resources. “Smart” cameras therefore offer two possibilities: biometric identification and situational analysis. These two components should enable the activation of automatic alarms for operators so that they can take action, which would mean the cameras would truly be used for prevention.

A massive installation of biometric identification is currently nearly impossible in France, since the CNIL is committed to the principles of purpose and proportionality: it is illegal to associate recorded data featuring citizens’ faces without first establishing a precise purpose for the use of this data. The Senate is currently studying this issue.

 

Smart video protection, safeguarding identity and personal data?

On the other hand, situational analysis offers an alternative that can tap into the full potential of video protection cameras. Through the analysis of situations, objects and behavior, real-time alerts are sent to video protection operators, a feature that restores hope in the system’s prevention capacity. This is in fact the logic behind the very controversial European surveillance project INDECT: limit video recording and focus only on pertinent information and automated alerts. This technology therefore makes it possible to opt for selective video recording, or even do away with it altogether.

“Always being watched”… Here, in Bucharest (Romania), end of 2016. J. Stimp/Flickr, CC BY

VSI with situational analysis could offer some benefits for society, in terms of the effective security measures and the cost of deployment for taxpayers. VSI requires fewer operators than video protection, fewer cameras and fewer costly storage spaces. Referring to the common definition of a “smart city”—realistic interpretation of events, optimization of technical resources, more adaptive and resilient cities—this video protection approach would put “Safe Cities” at the heart of the smart city approach.
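As a minimal sketch of the kind of pixel-level analysis such systems build on, the following toy example uses OpenCV background subtraction to raise an alert only when something moves; it is far simpler than commercial VSI products or the INDECT algorithms, and the file name and sensitivity threshold are assumptions:

```python
import cv2

# Toy "situational analysis": only flag frames where enough pixels change,
# so that nothing needs to be recorded or watched while the scene is static.
MOTION_PIXEL_THRESHOLD = 5000                      # assumed sensitivity, in pixels

capture = cv2.VideoCapture("street_camera.mp4")    # hypothetical video file
subtractor = cv2.createBackgroundSubtractorMOG2()

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    foreground = subtractor.apply(frame)           # pixels differing from the learned background
    if cv2.countNonZero(foreground) > MOTION_PIXEL_THRESHOLD:
        print(f"alert: motion detected at frame {frame_index}")  # would notify an operator
    frame_index += 1

capture.release()
```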

Nevertheless, several risks of abuse and potential errors exist, such as unwarranted alerts being generated, and they raise questions about the implementation of such measures.

 

Citizen crowdsourcing and bottom-up security approaches

The second characteristic of a “smart and safe city” must take people into account: citizen users, the city’s driving force. Security crowdsourcing is a phenomenon that finds its applications in our hyperconnected world through “ubiquitous” technology (smartphones, connected objects). The Boston Marathon bombing (2013), the London riots (2011), the Paris attacks (2015), and various natural catastrophes showed that citizens are not necessarily dependent on central governments, and can ensure their own security, or at least work together with the police and rescue services.

Social networks, Twitter, and Facebook with its “Safety Check” feature, are the main examples of this change. Similar applications, such as Qwidam, SpotCrime, HeroPolis, and MyKeeper, quickly proliferated and are breaking into the protection sector. On the other hand, these mobile solutions are struggling to gain ground in France due to a fear of false information being spread. Yet these initiatives offer true alternatives and should be studied and even encouraged. Without responsible citizens, there can be no resilient cities.

A study from 2016 shows that citizens are likely to use these emergency measures on their smartphones, and that they would make them feel safer.

Since the “smart city” relies on citizen intelligence that is adaptive and ubiquitous, it is in our mutual interest to learn from bottom-up governance methods, in which information comes directly from the ground, so that the safe city can finally become a real component of the smart city approach.

 

Conclusion

Implementing major urban security projects without considering the issues involved in video protection and citizen intelligence leads to a waste of the public sector’s human and financial resources. The use of intelligent measures and the implementation of a citizen security policy would therefore help to create a balanced urbanization policy, a policy for safe and smart cities.

[divider style=”normal” top=”20″ bottom=”20″]

Flavien Bazenet, Associate Professor of Entrepreneurship and Innovation at Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Professor in the Department of Foreign Languages and Humanities at Institut Mines-Telecom Business School (IMT)

The original version of this article (in French) was published in The Conversation.


Learning to deal with offensive environmental odors

What is an offensive environmental odor? How can it be defined, and how should its consequences be managed? This is what students will learn in the serious game “Les ECSPER à Smellville”, part of the Air Quality MOOC. This educational tool was developed at IMT Lille Douai and will be available in 2018. Players will be faced with the problem of an offensive environmental odor and will have to identify its source and the components causing the smell, stop the emission, and assess its toxicity before a media crisis breaks out.

 

In January 2013, near Rouen, an incident in a manufacturing process at the Lubrizol factory led to a widespread emission of mercaptans, particularly foul-smelling gaseous compounds. The smell drifted throughout the Seine Valley and up to Paris, before being noticed the following day in England! This triggered a crisis. The population panicked, with many people calling local emergency services, while the media latched onto the affair. Yet despite the strong odor, the doses released into the atmosphere were well below the toxicity threshold. These gaseous pollutants simply caused what we refer to as an offensive environmental odor.

“There is often no predetermined link between an offensive environmental odor and toxicity… When we smell something new, we tend to compare it to similar smells. In the Lubrizol case, people smelt “gas”, and assimilated it with a potential danger” explains Sabine Crunaire, a researcher at IMT Lille Douai. “For most odorant compounds, the thresholds for detection by the human nose are much lower than the toxicity thresholds. Only a few compounds show a direct causal link between smell and toxicity. Hence the importance of being able to manage these situations early on, to prevent a media crisis from unfolding and causing unnecessary panic among the population.”

 

An educational game for learning how to manage offensive environmental odors

The game, “Les ECSPER à Smellville”, was inspired by the Lubrizol incident, and is part of the serious games series, Scientific Case Studies for Expertise and Research, developed at IMT Lille Douai. It is a digital educational tool which teaches players how to manage these delicate situations. It was created as a complement to the Air Quality MOOC, a scientific Bachelor’s degree level course which is open to anyone. The game is based on a situation where an offensive environmental smell appears after an industrial incident: a strong smell of gas, which the population associates with danger, causes a crisis.

The learner has a choice between two roles: Health and Safety Manager at the company responsible for the incident, or the head of the Certified Association for Monitoring Air Quality (AASQA). “For learners, the goal is to bring on board the actors who are involved in this type of situation, like safety services, prefectural or ministerial services, and understand when to inform them, with the right information. The scenario is a very realistic one, and corresponds exactly to a real case of crisis management” explains Sabine Crunaire, who contributed to the scientific content of the game. “Playing time is limited, and the action takes place in the space of one working day. The goal is to avoid the stage which the Lubrizol incident reached, which set off an avalanche of reactions on all levels: citizens, social networks, media, State departments, associations, etc.” The idea is to put an end to the problem as quickly as possible, identify the components released and evaluate the potential consequences in the immediate and wider environment. In the second scenario, the player also has to investigate and try to find the source of the emission, with the help of witness reports from nose judges.

Nose judges are local inhabitants trained in olfactory analysis. They describe the odors they perceive using a common language, like for example, the Langage des Nez®, developed by Atmo Normandie. These “noses” are sensitive to the usual odors in their environment, and are capable of distinguishing the different types of bad smells they are confronted with and describing them in a consensual way. They liken the perceived odor to a “reference smell”. This information will assist in the analyses for identifying the substances responsible for the odor. “For instance, according to the Langage des Nez, a “sulfur” smell corresponds to references such as hydrogen sulfide (H2S) but also ethyl-mercaptan or propyl mercaptan, which are similar molecules in terms of their olfactory properties” explains Sabine Crunaire. “Three, four, even five different references can be identified by a single nose, in a single odor! If we know the olfactory properties of the industries in a given geographical area, we can identify which one has upset the normal olfactory environment.”

 

Defining and characterizing offensive odors

But how can a smell be defined as offensive, based on the “notes” it contains and its intensity? “By definition, an offensive environmental odor is described as an individual or collective state of intolerance to a smell” explains Sabine Crunaire. Characterizing an odor as offensive therefore depends on three criteria. Firstly, the quality of the odor and the message it sends. Does the population associate it with a toxic, dangerous compound? For instance, the smell of exhaust fumes will have a negative connotation, and will therefore be more likely to be considered as an offensive environmental odor. Secondly, the social context in which the smell appears has an impact: a farm smell in a rural area will be seen as less offensive by the population than it would in central Paris. Finally, the duration, frequency, and timing of the odor may add to the negative impact. “Even a chocolate smell can be seen as offensive! If it happens in the morning from time to time, it can be quite nice, but if it is a strong smell which lasts throughout the day, it can become a problem!” Sabine Crunaire highlights.

From a regulatory point of view, prefectural and municipal orders can prevent manufacturers from creating excessive olfactory disturbances which bother people in the surrounding environment. The thresholds are described in terms of odor concentration and are expressed in European odor units per cubic meter (ouE/m³). The concentration of a mixture of smells is conventionally defined as the dilution factor that needs to be applied to the effluent so that it is no longer perceived as a smell by 50% of a sample of the population; this is referred to as the detection threshold. “Prefectural orders generally require that factories ensure that, within a distance of several kilometers from the boundary of the factory, the odor concentration does not exceed 5 ouE/m³,” Sabine Crunaire explains. “It is very difficult for them to foresee whether the odors released are going to be over the limit. The nature of the compounds released, their concentration, the sensitivity of people in the surrounding area… there are many factors to take into account! There is no regulation which precisely sets a limit for the concentration of odors in the air, unlike what we have for fine particles.”
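A short worked example of how this detection-threshold definition translates into numbers; the dilution factors used here are invented for illustration:

```python
# By definition, the odor concentration of an effluent (in ouE/m3) is the
# dilution factor at which 50% of a test panel no longer perceives the smell.
PREFECTURAL_LIMIT_OUE = 5       # ouE/m3, typical limit a few kilometers from the site

# Hypothetical effluent: it must be diluted 800 times before half the panel
# stops smelling it, so its concentration at the stack is 800 ouE/m3.
stack_concentration = 800

# Suppose atmospheric dispersion is expected to dilute the plume a further
# 200 times by the time it reaches the nearest dwellings.
dispersion_dilution = 200
at_dwellings = stack_concentration / dispersion_dilution

print(f"{at_dwellings} ouE/m3 at the dwellings")
print("compliant" if at_dwellings <= PREFECTURAL_LIMIT_OUE else "over the limit")
```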

To avoid penalties, manufacturers test compounds at their source and dilute them using olfactometers, in order to determine the dilution factor at which the odor is perceived as acceptable. They use this value, together with dispersion modelling, to evaluate the impact of their odor emissions within a predetermined perimeter, and also to size the treatment systems to be installed.

“Besides penalties, the consequences of a crisis caused by an environmental disturbance are harmful to the manufacturer’s image: the Lubrizol incident is still referred to in the media, using the name of the incriminated company” says Sabine Crunaire. “And the consequences in the media probably also lead to significant direct and indirect economic consequences for the manufacturer: a decrease in the number of orders, the cost of new safety measures imposed by the State to prevent the issue happening again, etc.”

The game “Les ECSPER à Smellville” will therefore raise awareness of these issues among students and train them in managing this type of crisis and avoiding the serious consequences. While offensive environmental odors are rarely toxic, they cause disturbance, both for citizens and manufacturers.


When the internet goes down

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“A[/dropcap] third of the internet is under attack. Millions of network addresses were subjected to distributed denial-of-service (DDoS) attacks over a two-year period,” reports Warren Froelich on the UC San Diego News Center website. A DDoS is a type of denial-of-service (DoS) attack in which the attacker carries out an attack using many sources distributed throughout the network.

But is the journalist justified in his alarmist reaction? Yes and no. If one third of the internet was under attack, then one in every three smartphones wouldn’t work, and one in every three computers would be offline. When we look around, we can see that this is obviously not the case, and if we now rely so heavily on our phones and Wikipedia, it is because we have come to view the internet as a network that functions well.

Still, the DDoS phenomenon is real. Recent attacks testify to this, such as the Mirai botnet’s attack on the French web host OVH, and the American DNS provider DynDNS falling victim to the same botnet.

The websites owned by customers of these servers were unavailable for several hours.

What the article really looks at is the appearance of IP addresses in the traces of DDoS attacks. Over a period of two years, the authors found the addresses of two million different victims, out of the 6 million servers listed on the web.

Traffic jams on the information superhighway

Units of data, called packets, circulate on the internet network. When all of these packets want to go to the same place or take the same path, congestion occurs, just like the traffic jams that occur at the end of a workday.

It should be noted that in most cases it is very difficult, almost impossible, to differentiate between normal traffic and denial of service attack traffic. Traffic generated by “Flash crowd” and “slashdot effect” phenomena is identical to the traffic witnessed during this type of attack.

However, this analogy only goes so far, since packets are often organized in flows, and the congestion on the network can lead to these packets being destroyed, or the creation of new packets, leading to even more congestion. It is therefore much harder to remedy a denial-of-service attack on the web than it is a traffic jam.


Diagram of a denial-of-service attack. Everaldo Coelho and YellowIcon

 

This type of attack saturates the network link that connects the server to the internet. The attacker does this by sending a large number of packets to the targeted server. These packets can be sent directly if the attacker controls a large number of machines, a botnet.

Attackers also use the amplification mechanisms built into certain network protocols, such as the domain name system (DNS) and clock synchronization (NTP). These protocols are asymmetrical: the requests are small, but the responses can be huge.

In this type of attack, the attacker contacts the DNS or NTP amplifiers while pretending to be the server under attack, which then receives lots of unsolicited replies. Therefore, even with limited connectivity, the attacker can create a significant level of traffic and saturate the victim’s network.
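A rough illustration of why this asymmetry matters, using made-up but plausible packet sizes rather than measurements of any real resolver:

```python
# Amplification: the attacker spends bandwidth on small requests, while the
# spoofed victim receives much larger responses. Sizes below are examples only.
request_bytes = 60          # small DNS query
response_bytes = 3000       # large response (e.g. an answer listing many records)
amplification = response_bytes / request_bytes       # 50x in this example

attacker_uplink_mbps = 10
victim_traffic_mbps = attacker_uplink_mbps * amplification
print(f"amplification factor: {amplification:.0f}x")
print(f"traffic hitting the victim: {victim_traffic_mbps:.0f} Mbit/s")   # 500
```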

There are also “services” that offer the possibility of buying denial of service attacks with varying levels of intensity and durations, as shown in an investigation Brian Krebs carried out after his own site was attacked.

What are the consequences?

For internet users, the main consequence is that the website they want to visit is unavailable.

For the victim of the attack, the main consequence is a loss of income, which can take several forms. For a commercial website, for example, this loss is due to a lack of orders during that period. For other websites, it can result from losing advertising revenue. This type of attack allows an attacker to use ads in place of another party, enabling the attacker to tap into the revenue generated by displaying them.

There have been a few, rare institutional attacks. The most documented example is the attack against Estonia in 2007, which was attributed to the Russian government, although this has been impossible to prove.

Direct financial gain for the attacker is rare, however, and is linked to the ransom demands in exchange for ending the attack.

Is it serious?

The impact an attack has on a service depends on how popular the service is. Users therefore experience a low-level attack as a nuisance if they need to use the service in question.

Only certain large-scale occurrences, the most recent being the Mirai botnet, have impacts that are perceived by a much larger audience.

Many servers and services are located in private environments, and therefore are not accessible from the outside. Enterprise servers, for example, are rarely affected by this kind of attack. The key factor for vulnerability therefore lies in the outsourcing of IT services, which can create a dependence on the network.

Finally, an attack with a very high impact would, first of all, be detected immediately (and therefore often blocked within a few hours), and would ultimately be limited by its own activity (since the attacker’s communications would also be blocked), as shown by the old example of the SQL Slammer worm.

Ultimately, the study shows that the phenomena of denial-of-service attacks by saturation have been recurrent over the past two years. This news is significant enough to demonstrate that this phenomenon must be addressed. Yet this is not a new occurrence.

Other phenomena, such as routing manipulation, have the same consequences for users, like when Pakistan Telecom hijacked YouTube addresses.

Good IT hygiene

Unfortunately, there is no surefire form of protection against these attacks. In the end, it comes down to an issue of cost of service and the amount of resources made available for legitimate users.

The “big” service providers have so many resources that it is difficult for an attacker to catch them off guard.

Still, this is not the end of the internet, far from it. However, this phenomenon is one that should be limited. For users, good IT hygiene practices should be followed to limit the risks of their computer being compromised, and hence used to participate in this type of attack.

It is also important to review what type of protection outsourced service suppliers have established, to ensure they have sufficient capacity and means of protection.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Networks and Telecommunications Services department, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation.

 


Cyrating: a trusted third-party for cybersecurity assessment

Cyrating, a startup incubating at ParisTech Entrepreneurs, provides organizations with a service for assessing their performance and efficiency in cybersecurity. By positioning itself as a trusted third-party, it is meeting companies’ need for an objective analysis of their cyber risk. The service also allows companies to assess their position relative to competitors.

 

In the cybersecurity sector, Cyrating intends to play a role that organizations often ask for but which, until now, has never been provided: that of a trusted third-party. The startup, which has been incubating at ParisTech Entrepreneurs since last September, assesses the cybersecurity performance of public and private organizations. The rating they receive allows them to position themselves relative to their competitors, as well as define areas for improvement and determine the cybersecurity level of their subsidiaries and suppliers.

Regardless of the type of company, the startup bases its assessment on the same criteria. This results in objective ratings that are not dependent on the organization’s size or structure. “For example, we look at the level of protection for domain names, company websites, email services…” explains François Gratiolet, co-founder of Cyrating. He calls these criteria “facts” and they are supplemented by an analysis of “events” such as a data breach or the hosting of malware on the internal server.

Cyrating processes a set of observable data with the aim of uncovering these facts and events related to the organization’s cybersecurity. They are then measured against best practices in order to obtain a rating. Based on assessment algorithms, metrics and ratings are automatically calculated by category. The organizations evaluated by Cyrating therefore obtain a clear view of their efficiency in a variety of cybersecurity issues, in addition to the overall rating. This enables them to identify the measures they must immediately implement to improve their protection and optimize their allocation of financial and human resources.
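As a purely hypothetical illustration of the idea of turning per-category measurements into an overall rating: the categories, weights and letter-grade scale below are invented and do not describe Cyrating’s proprietary algorithms.

```python
# Hypothetical scoring: each category holds a 0-100 score derived from
# observed "facts" and "events"; the overall rating is a weighted average
# mapped onto a letter grade.
CATEGORY_WEIGHTS = {
    "domain_names": 0.25,
    "websites": 0.25,
    "email_services": 0.25,
    "incidents": 0.25,     # data breaches, malware hosting, etc.
}

def overall_rating(scores: dict[str, float]) -> str:
    total = sum(scores[c] * w for c, w in CATEGORY_WEIGHTS.items())
    for grade, floor in [("A", 90), ("B", 75), ("C", 60), ("D", 40)]:
        if total >= floor:
            return grade
    return "E"

print(overall_rating({"domain_names": 95, "websites": 80,
                      "email_services": 70, "incidents": 55}))   # "B"
```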

Unlike auditing and consulting firms, Cyrating’s service does not require any intervention in the organizations’ departments or offices. There is no need to install any software or equipment. Furthermore, the service is based on a subscription system. The rating is ongoing throughout the entire subscription period. Therefore, as they track the changes in their rating, organizations can immediately observe the impact of their actions.

The startup is the first of its kind in Europe, and few startups offer this type of service on a global level. “It’s a business that is booming in the United States,” says François Gratiolet. This early entry into the European market is a serious advantage for Cyrating, whose business relies on a powerful platform that can be scaled up: the longer the company has been assessing organizations, the more attractive its rating system becomes. The startup officially launched its business in Lille in January 2018, at the International Cybersecurity Forum (FIC), the largest European trade show in the sector. Over the course of its development and the creation of its use cases, still very recent since the startup is only a few months old, it has already assessed hundreds of companies. “A year from now we expect to have rated over 50,000 organizations,” the co-founder predicts.

The first businesses to be won over by Cyrating’s services were large and intermediate-sized companies. “They see the opportunity to measure the performance of their suppliers and subsidiaries, and optimize their audit cycles,” François Gratiolet explains. But insurance providers could also be interested in this service, as well as agencies that want to purchase data blocks for statistical purposes. By positioning itself as a trusted third-party, the startup could quickly become a key player in cybersecurity in France and Europe.



The brain: seeing between the fibers of white matter

The principle behind diffusion imaging and tractography is to explore how water diffuses through our brain in order to study the structure of neurons. Doctors can use this method to improve their understanding of brain disease. Pietro Gori, a researcher in image processing at Télécom ParisTech, has just launched a project called Neural Meta Tracts, funded by the Emergence program at DigiCosme. It aims to improve the modelling, visualization and manipulation of the large amounts of data produced by tractography. This may considerably improve the analysis of white matter in the brain, and in doing so allow doctors to more easily pinpoint the morphological differences between healthy and sick patients.

 

What is the goal of the Neural Meta Tracts project?

Pietro Gori: The project stems from my past experience. I have worked in diffusion imaging, which is a non-invasive form of brain imaging, and in tractography. This technique allows you to explore the architecture of the brain’s white matter, which is made up of bundles of several million neuron axons. Tractography allows us to represent these bundles in the form of curves in a 3D model of the brain. It is a very rich method which provides a great deal of information, but this information is difficult to visualize and make use of in digital calculations. Our goal with Neural Meta Tracts is to facilitate and accelerate the manipulation of these data.

Who can benefit from this type of improvement to tractography?  

PG: By making visualization easier, we are helping clinicians to interpret imaging results. This may help them to diagnose brain diseases more easily. Neurosurgeons can also gain from tractography in planning operations. If they are removing a tumor, they want to be sure that they do not cut fibers in the critical areas of the brain. The more precise the image is, the better prepared they can be. As for improvements to data manipulation and calculation, neurologists and radiologists doing research on the brain are highly interested. As they are dealing with large amounts of data, it can take time to compare sets of tractographies, for example when studying the impact of a particular structure on a particular disease.

Could this help us to understand certain diseases?

PG: Yes. In psychiatry and neurology, medical researchers want to compare healthy people with sick people. This enables them to study differences which may either be the consequence or the cause of the disease. In the case of Alzheimer’s, certain parts of the brain are atrophied. Improving mathematical modeling and visualization of tractography data can therefore help medical researchers to detect these anatomical changes in the brain. During my thesis, I also worked on Tourette syndrome. Through my work, we were able to highlight anatomical differences between healthy and sick subjects.

How do you improve the visualization and manipulation of tractography data?

PG: I am working with Jean-Marc Thiery and other lecturers and researchers at Télécom ParisTech and the École Polytechnique on applying differential geometry techniques. We analyze the geometry of bundles of neuron axons, and we try to approximate them as closely as possible without losing information. We are working on algorithms which will be able to rapidly compare two sets of tractography data. When we have similar sets of data, we try to aggregate them, again trying not to lose information. It is important to realize that if you have a database of a cohort of one thousand patients, it can take days of calculation using very powerful computers to compare their tractographies in order to find averages or main variations.
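To give an idea of why these comparisons are costly, here is a naive sketch of one common building block, the mean closest-point distance between two fibers represented as 3D polylines; it is an illustrative baseline, not the differential-geometry approach developed in the project:

```python
import numpy as np

def mean_closest_point_distance(fiber_a: np.ndarray, fiber_b: np.ndarray) -> float:
    """fiber_a, fiber_b: arrays of shape (points, 3), one 3D polyline each.
    Returns the symmetric mean closest-point distance between the two curves."""
    # pairwise distances between every point of A and every point of B
    diff = fiber_a[:, None, :] - fiber_b[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return 0.5 * (dists.min(axis=1).mean() + dists.min(axis=0).mean())

# Two toy fibers with 50 points each; a real tractogram contains millions of
# fibers, which is why all-pairs comparisons quickly become prohibitive.
t = np.linspace(0, 1, 50)
fiber1 = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
fiber2 = np.stack([t, np.sin(t) + 0.1, np.zeros_like(t)], axis=1)
print(mean_closest_point_distance(fiber1, fiber2))   # small: the fibers are close
```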

Who are you collaborating with on this project to obtain the tractography data and study the needs of practitioners?

PG: We use a high-quality freely-accessible database of healthy individuals, called the Human Connectome Project. We also collaborate with clinicians in the Pitié Salpêtrière, Sainte-Anne and Kremlin-Bicêtre hospitals in the Paris region. These are radiologists, neurologists and neurosurgeons. They provide their experience of the issues with which they are faced. We are initially focusing on three applications: Tourette syndrome, multiple sclerosis, and surgery on patients with tumors.
