
Coming soon: “smart” cameras and citizens improving urban safety

Flavien Bazenet, Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Institut Mines-Telecom Business School (IMT)

This article was written based on the research Augustin de la Ferrière carried out during his “Grande École” training at Institut Mines-Telecom Business School (IMT).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“S[/dropcap]afe cities”: some see them as making cities more secure and resilient, while others see them as an example of ICTs (Information and Communication Technologies) being put to work in the drift towards a society of control. The term has sparked much debate. Still, with balanced policies, the “Safe City” could become part of a comprehensive “smart city” approach, in which citizen crowdsourcing (security by and for citizens) and video analytics—“situational analysis that involves identifying events, attributes or behavior patterns to improve the coordination of resources and reduce investigation time” (source: IBM)—protect privacy while keeping costs and performance under control.

 

Safe cities and video protection

A “safe city” refers to NICTs (New Information and Communication Technologies) used for urban security purposes. In reality, however, the term is above all a marketing concept that the major systems integrators of the security sector have used to promote their video protection systems.

First appearing in the United Kingdom in the mid-1980s, urban cameras have gradually become widespread. While their use is sometimes debated, they are generally well accepted by citizens, although this acceptance varies with each country’s risk culture and approach to security matters. Today, nearly 250 million video protection cameras are in use throughout the world. On an international scale, this translates to one camera for every 30 inhabitants. Yet their effectiveness is often called into question, which makes it necessary to take a closer look at their role and actual impact.

According to several French reports—in particular the “Report on the effectiveness of video protection by the French Ministry of the Interior, Overseas France and Territorial Communities” (2010) and ”Public policies on video protection: a look at the results” by INHESJ (2015)—the systems appear to be effective primarily in deterring minor criminal offences, reducing urban decay and improving interdepartmental cooperation in investigations.

 

The effectiveness of video protection limited by technical constraints

On the other hand, video protection has proven completely ineffective in preventing serious offences. The cameras appear only to be effective in confined spaces, and could even have a “publicity effect” for terrorist attacks. These characteristics have been confirmed by analysts in the sector, and are regularly emphasized by Tanguy Le Goff and Eric Heilmann, researchers and experts on this topic.

They also point out that our expectations for these systems are too high, and stress that the technical constraints are too significant, in addition to the excessive installation and maintenance costs.

To better understand the deficiencies of this kind of system, we must remember that in a remotely monitored city, each camera constantly films the city streets. It is connected to an “urban monitoring center”, where the signal is transmitted to several screens. The images are then interpreted by one or more operators. But no human can legitimately be expected to stay focused on a multitude of screens for hours at a time, especially when the operator-to-screen ratio is often hugely disproportionate. In France, the ratio sometimes reaches one operator for one hundred screens! This is why the typical video protection system’s capacity for prevention is virtually nonexistent.

Technical experts also suggest that the real promise of video protection—its forensic value, that is, the ability to provide evidence—is undermined by obvious technical constraints.

In a “typical” video protection system, each camera records a significant volume of data. According to one manufacturer’s estimate (Axis Communications), a camera recording 24 frames per second generates between 0.74 GB and 5 GB of data per hour, depending on the encoding and resolution chosen. Servers are therefore quickly saturated, since current storage capacities are limited.

With an average cost of approximately 50 euros per terabyte, local authorities and town halls find it difficult to afford datacenters capable of saving video recordings for a sufficient length of time. In France, the CNIL authorizes up to 30 days of saved video recordings, but in practice these recordings are rarely kept for more than 7 consecutive days; according to some experts, they are often not kept for more than 48 hours. This undermines the main argument used in favor of video protection: the ability to provide evidence.
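To give an order of magnitude, here is a rough back-of-the-envelope sketch using the figures quoted above (up to 5 GB per camera per hour, about 50 euros per terabyte). The number of cameras and the retention period are hypothetical inputs chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope storage and cost estimate for a municipal video
# protection system, based on the bitrate and price figures quoted above.
# The camera count and retention period are hypothetical inputs.

def storage_cost(cameras, retention_days, gb_per_hour=5.0, eur_per_tb=50.0):
    """Return (terabytes needed, raw disk cost in euros) for one retention window."""
    total_gb = cameras * retention_days * 24 * gb_per_hour
    total_tb = total_gb / 1000.0          # decimal terabytes
    return total_tb, total_tb * eur_per_tb

# Example: 100 cameras kept for the 30 days authorized by the CNIL,
# at the high end of the quoted bitrate range (5 GB/hour).
tb, cost = storage_cost(cameras=100, retention_days=30)
print(f"{tb:.0f} TB of storage, roughly {cost:.0f} euros in raw disk cost")
# -> 360 TB and about 18,000 euros, before redundancy, servers and maintenance
```

Even this simplified estimate ignores redundancy, servers and maintenance, which helps explain why recordings are rarely kept for the full authorized period.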

 

A move towards new smart video protection systems?

The only viable alternative to the “traditional” video protection system is “smart” video protection based on video analytics (also referred to as “VSI”): technology that relies on algorithms and pixel analysis.

Since these cameras are broadly accepted by citizens, the systems must become more efficient and must not waste financial and human resources. “Smart” cameras offer two possibilities: biometric identification and situational analysis. These two components should trigger automatic alerts so that operators can take action, which would mean the cameras would truly be used for prevention.

A massive rollout of biometric identification is currently nearly impossible in France, since the CNIL upholds the principles of purpose and proportionality: it is illegal to link recorded images of citizens’ faces to their identities without first establishing a precise purpose for the use of this data. The Senate is currently studying this issue.

 

Smart video protection, safeguarding identity and personal data?

On the other hand, situational analysis offers an alternative that can tap into the full potential of video protection cameras. By analyzing situations, objects and behavior, it sends real-time alerts to video protection operators, a feature that restores hope in the system’s prevention capacity. This is in fact the logic behind the very controversial European surveillance project INDECT: limit video recording and focus only on pertinent information and automated alerts. This technology therefore makes it possible to opt for selective video recording, or even to do away with recording altogether.

“Always being watched”… Here, in Bucharest (Romania), end of 2016. J. Stimp/Flickr, CC BY

VSI with situational analysis could offer some benefits for society, in terms of the effective security measures and the cost of deployment for taxpayers. VSI requires fewer operators than video protection, fewer cameras and fewer costly storage spaces. Referring to the common definition of a “smart city”—realistic interpretation of events, optimization of technical resources, more adaptive and resilient cities—this video protection approach would put “Safe Cities” at the heart of the smart city approach.

Nevertheless, several risks of abuse and potential errors exist, such as unwarranted alerts being generated, and they raise questions about the implementation of such measures.

 

Citizen crowdsourcing and bottom-up security approaches

The second characteristic of a “smart and safe city” is that it must take people into account, citizen users—the city’s driving force. Security crowdsourcing is a phenomenon that finds its applications in our hyperconnected world through “ubiquitous” technology (smartphones, connected objects). The Boston Marathon bombing (2013), the London riots (2011), the Paris attacks (2015) and various natural catastrophes showed that citizens are not necessarily dependent on central governments: they can help ensure their own security, or at least work together with the police and rescue services.

Social networks, such as Twitter and Facebook with its “Safety Check” feature, are the main examples of this change. Similar applications, such as Qwidam, SpotCrime, HeroPolis and MyKeeper, quickly proliferated and are breaking into the protection sector. However, these mobile solutions are struggling to gain ground in France due to a fear of false information being spread. Yet these initiatives offer true alternatives and should be studied and even encouraged. Without responsible citizens, there can be no resilient cities.

A study from 2016 shows that citizens are likely to use these emergency measures on their smartphones, and that they would make them feel safer.

Since the “smart city” relies on citizen-based, adaptive and ubiquitous intelligence, it is in our mutual interest to learn from bottom-up governance methods, in which information comes directly from the ground, so that the safe city can finally become a real component of the smart city approach.

 

Conclusion

Implementing major urban security projects without considering the issues involved in video protection and citizen intelligence leads to a waste of the public sector’s human and financial resources. The use of intelligent measures and the implementation of a citizen security policy would therefore help to create a balanced urbanization policy, a policy for safe and smart cities.

[divider style=”normal” top=”20″ bottom=”20″]

Flavien Bazenet, Associate Professor of Entrepreneurship and Innovation at Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Professor in the Department of Foreign Languages and Humanities at Institut Mines-Telecom Business School (IMT)

The original version of this article (in French) was published in The Conversation.


Learning to deal with offensive environmental odors

What is an offensive environmental odor? How can it be defined, and how should its consequences be managed? This is what students will learn in the serious game “Les ECSPER à Smellville”, part of the Air Quality MOOC. This educational tool was developed at IMT Lille Douai and will be available in 2018. Players are faced with an offensive environmental odor and must identify its source and the components causing the smell, then stop the emission and assess its toxicity before a media crisis breaks out.

 

In January 2013, near Rouen, there was an incident in a manufacturing process at the Lubrizol company factory, leading to widespread emission of mercaptans, particularly evil-smelling gaseous compounds. The smell drifted throughout the Seine Valley and up to Paris, before being noticed the following day in England! This launched a crisis. The population panicked, with many people calling local emergency services, while the media latched onto the affair. However, despite the strong odor, the doses released into the atmosphere were well below the toxicity threshold. These gaseous pollutants simply caused what we refer to as an offensive environmental odor.

“There is often no predetermined link between an offensive environmental odor and toxicity… When we smell something new, we tend to compare it to similar smells. In the Lubrizol case, people smelt ‘gas’ and associated it with a potential danger,” explains Sabine Crunaire, a researcher at IMT Lille Douai. “For most odorant compounds, the thresholds for detection by the human nose are much lower than the toxicity thresholds. Only a few compounds show a direct causal link between smell and toxicity. Hence the importance of being able to manage these situations early on, to prevent a media crisis from unfolding and causing unnecessary panic among the population.”

 

An educational game for learning how to manage offensive environmental odors

The game, “Les ECSPER à Smellville”, was inspired by the Lubrizol incident, and is part of the serious games series, Scientific Case Studies for Expertise and Research, developed at IMT Lille Douai. It is a digital educational tool which teaches players how to manage these delicate situations. It was created as a complement to the Air Quality MOOC, a scientific Bachelor’s degree level course which is open to anyone. The game is based on a situation where an offensive environmental smell appears after an industrial incident: a strong smell of gas, which the population associates with danger, causes a crisis.

The learner has a choice between two roles: Health and Safety Manager at the company responsible for the incident, or the head of the Certified Association for Monitoring Air Quality (AASQA). “For learners, the goal is to bring on board the actors who are involved in this type of situation, like safety services, prefectural or ministerial services, and understand when to inform them, with the right information. The scenario is a very realistic one, and corresponds exactly to a real case of crisis management” explains Sabine Crunaire, who contributed to the scientific content of the game. “Playing time is limited, and the action takes place in the space of one working day. The goal is to avoid the stage which the Lubrizol incident reached, which set off an avalanche of reactions on all levels: citizens, social networks, media, State departments, associations, etc.” The idea is to put an end to the problem as quickly as possible, identify the components released and evaluate the potential consequences in the immediate and wider environment. In the second scenario, the player also has to investigate and try to find the source of the emission, with the help of witness reports from nose judges.

Nose judges are local inhabitants trained in olfactory analysis. They describe the odors they perceive using a common language, such as the Langage des Nez®, developed by Atmo Normandie. These “noses” are sensitive to the usual odors in their environment, and are capable of distinguishing the different types of bad smells they encounter and describing them in a shared, consistent way. They liken the perceived odor to a “reference smell”, and this information then guides the analyses used to identify the substances responsible for the odor. “For instance, according to the Langage des Nez, a ‘sulfur’ smell corresponds to references such as hydrogen sulfide (H2S), but also ethyl mercaptan or propyl mercaptan, which are similar molecules in terms of their olfactory properties,” explains Sabine Crunaire. “Three, four, even five different references can be identified by a single nose, in a single odor! If we know the olfactory properties of the industries in a given geographical area, we can identify which one has upset the normal olfactory environment.”
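To make this reasoning concrete, here is a minimal, purely illustrative sketch of how the reference notes reported by nose judges can be matched against the known emissions of nearby sites. The compound lists and site profiles below are hypothetical examples, not data from Atmo Normandie or from the game.

```python
# Illustrative sketch: each perceived reference note maps to candidate
# compounds, and candidates are matched against the (hypothetical) olfactory
# profiles of nearby industrial sites to suggest a likely source.

REFERENCE_TO_COMPOUNDS = {
    "sulfur": {"hydrogen sulfide", "ethyl mercaptan", "propyl mercaptan"},
    "solvent": {"toluene", "xylene"},
}

SITE_PROFILES = {
    "refinery": {"hydrogen sulfide", "ethyl mercaptan"},
    "paint factory": {"toluene", "xylene"},
}

def candidate_sites(perceived_references):
    """Rank nearby sites by overlap with the compounds suggested by the noses."""
    suspects = set()
    for ref in perceived_references:
        suspects |= REFERENCE_TO_COMPOUNDS.get(ref, set())
    scores = {site: len(suspects & compounds)
              for site, compounds in SITE_PROFILES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(candidate_sites(["sulfur"]))
# -> [('refinery', 2), ('paint factory', 0)]
```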

 

Defining and characterizing offensive odors

But how can a smell be defined as offensive, based on the “notes” it contains and its intensity? “By definition, an offensive environmental odor is described as an individual or collective state of intolerance to a smell” explains Sabine Crunaire. Characterizing an odor as offensive therefore depends on three criteria. Firstly, the quality of the odor and the message it sends. Does the population associate it with a toxic, dangerous compound? For instance, the smell of exhaust fumes will have a negative connotation, and will therefore be more likely to be considered as an offensive environmental odor. Secondly, the social context in which the smell appears has an impact: a farm smell in a rural area will be seen as less offensive by the population than it would in central Paris. Finally, the duration, frequency, and timing of the odor may add to the negative impact. “Even a chocolate smell can be seen as offensive! If it happens in the morning from time to time, it can be quite nice, but if it is a strong smell which lasts throughout the day, it can become a problem!” Sabine Crunaire highlights.

From a regulatory point of view, prefectural and municipal orders can prevent manufacturers from creating excessive olfactory disturbances that bother people in the surrounding environment. The thresholds are described in terms of odor concentration and are expressed in European odor units (uoE/m³). The concentration of a mixture of smells is conventionally defined as the dilution factor that needs to be applied to the effluent so that it is no longer perceived as a smell by 50% of a sample of the population; this is referred to as the detection threshold. “Prefectural orders generally require that factories ensure that, within a distance of several kilometers from the boundary of the factory, the odor concentration does not exceed 5 uoE/m³,” Sabine Crunaire explains. “It is very difficult for them to foresee whether the odors released are going to be over the limit. The nature of the compounds released, their concentration, the sensitivity of people in the surrounding area… there are many factors to take into account! There is no regulation which precisely sets a limit for the concentration of odors in the air, unlike what we have for fine particles.”
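As a minimal sketch of this odor-unit logic, the snippet below checks a modelled concentration at a receptor against the 5 uoE/m³ ceiling mentioned above. The source concentration and the atmospheric dilution factor are hypothetical values, not measurements.

```python
# The odor concentration of an effluent (in European odor units, uoE/m3) is
# numerically the dilution factor needed for 50% of a panel to stop
# perceiving it. All values below are hypothetical.

def concentration_at_receptor(source_uoe_m3, atmospheric_dilution):
    """Odor concentration after dilution in the atmosphere (uoE/m3)."""
    return source_uoe_m3 / atmospheric_dilution

def exceeds_limit(source_uoe_m3, atmospheric_dilution, limit_uoe_m3=5.0):
    """Compare with the ceiling typically set by prefectural orders."""
    return concentration_at_receptor(source_uoe_m3, atmospheric_dilution) > limit_uoe_m3

# Example: an effluent at 50,000 uoE/m3 at the stack, diluted 20,000-fold
# by the time it reaches a village a few kilometers away.
print(exceeds_limit(50_000, 20_000))   # -> False (2.5 uoE/m3 at the receptor)
```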

To avoid penalties, manufacturers sample compounds at the source and dilute them using olfactometers in order to determine the dilution factor at which the odor is no longer perceived. They then use this value, together with dispersion modelling, to evaluate the impact of their odor emissions within a predetermined perimeter, and also to size the treatment systems to be installed.

“Besides penalties, the consequences of a crisis caused by an environmental disturbance are harmful to the manufacturer’s image: the Lubrizol incident is still referred to in the media, using the name of the incriminated company” says Sabine Crunaire. “And the consequences in the media probably also lead to significant direct and indirect economic consequences for the manufacturer: a decrease in the number of orders, the cost of new safety measures imposed by the State to prevent the issue happening again, etc.”

The game “Les ECSPER à Smellville” will therefore raise awareness of these issues among students and train them in managing this type of crisis and avoiding the serious consequences. While offensive environmental odors are rarely toxic, they cause disturbance, both for citizens and manufacturers.


When the internet goes down

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“A[/dropcap] third of the internet is under attack. Millions of network addresses were subjected to distributed denial-of-service (DDoS) attacks over a two-year period,” reports Warren Froelich on the UC San Diego News Center website. A DDoS is a type of denial-of-service (DoS) attack in which the attacker carries out the attack using many sources distributed throughout the network.

But is the journalist justified in his alarmist reaction? Yes and no. If one third of the internet was under attack, then one in every three smartphones wouldn’t work, and one in every three computers would be offline. When we look around, we can see that this is obviously not the case, and if we now rely so heavily on our phones and Wikipedia, it is because we have come to view the internet as a network that functions well.

Still, the DDoS phenomenon is real. Recent attacks testify to this, such as the Mirai botnet’s attack on the French web host OVH, and the American DNS provider Dyn (DynDNS) falling victim to the same botnet.

The websites of these providers’ customers were unavailable for several hours.

What the article really looks at is the appearance of IP addresses in the traces of DDoS attacks. Over a period of two years, the authors found the addresses of two million different victims, out of the 6 million servers listed on the web.

Traffic jams on the information superhighway

Units of data, called packets, circulate on the internet network. When all of these packets want to go to the same place or take the same path, congestion occurs, just like the traffic jams that occur at the end of a workday.

It should be noted that in most cases it is very difficult, almost impossible, to differentiate between normal traffic and denial of service attack traffic. Traffic generated by “Flash crowd” and “slashdot effect” phenomena is identical to the traffic witnessed during this type of attack.

However, this analogy only goes so far, since packets are often organized in flows, and the congestion on the network can lead to these packets being destroyed, or the creation of new packets, leading to even more congestion. It is therefore much harder to remedy a denial-of-service attack on the web than it is a traffic jam.


Diagram of a denial-of-service attack. Everaldo Coelho and YellowIcon

 

This type of attack saturates the network link that connects the server to the internet. The attacker does this by sending a large number of packets to the targeted server. These packets can be sent directly if the attacker controls a large number of machines, a botnet.

Attackers can also use the amplification mechanisms built into certain network protocols, such as the domain name system (DNS) and the clock synchronization protocol (NTP). These protocols are asymmetrical: the requests are small, but the responses can be huge.

In this type of attack, an attacker contacts the DNS or NTP amplifiers while spoofing the address of the targeted server, which then receives large numbers of unsolicited replies. Therefore, even with limited connectivity, the attacker can create a significant level of traffic and saturate the victim’s network.
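A small sketch shows why amplification makes these attacks so effective. The request and response sizes below are illustrative assumptions, not measurements of any particular DNS or NTP deployment.

```python
# Rough amplification arithmetic: the attacker spends a little upstream
# bandwidth on small spoofed requests, and the amplifiers send much larger
# responses to the victim. Sizes and bandwidth figures are illustrative.

def reflected_traffic_mbps(attacker_uplink_mbps, request_bytes, response_bytes):
    """Traffic arriving at the victim, given the protocol's amplification factor."""
    amplification = response_bytes / request_bytes
    return attacker_uplink_mbps * amplification

# Example: a 10 Mbit/s uplink sending 60-byte requests that trigger
# 3,000-byte responses (a 50x amplification factor).
print(f"{reflected_traffic_mbps(10, 60, 3000):.0f} Mbit/s hitting the victim")
# -> 500 Mbit/s, enough to saturate many servers' network links
```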

There are also “services” that offer the possibility of buying denial of service attacks with varying levels of intensity and durations, as shown in an investigation Brian Krebs carried out after his own site was attacked.

What are the consequences?

For internet users, the main consequence is that the website they want to visit is unavailable.

For the victim of the attack, the main consequence is a loss of income, which can take several forms. For a commercial website, for example, this loss comes from orders not placed during that period. For other websites, it can result from lost advertising revenue; in some cases, an attacker can even substitute its own ads for another party’s and capture the revenue generated by displaying them.

There have been a few, rare institutional attacks. The most documented example is the attack against Estonia in 2007, which was attributed to the Russian government, although this has been impossible to prove.

Direct financial gain for the attacker is rare, however, and is linked to the ransom demands in exchange for ending the attack.

Is it serious?

The impact an attack has on a service depends on how popular the service is. Users therefore experience a low-level attack as a nuisance if they need to use the service in question.

Only certain large-scale occurrences, the most recent being the Mirai botnet, have impacts that are perceived by a much larger audience.

Many servers and services are located in private environments, and therefore are not accessible from the outside. Enterprise servers, for example, are rarely affected by this kind of attack. The key factor for vulnerability therefore lies in the outsourcing of IT services, which can create a dependence on the network.

Finally, an attack with a very high impact would, first of all, be detected immediately (and therefore often blocked within a few hours), and would ultimately be limited by its own activity (since the attacker’s communications would also be blocked), as shown by the old example of the SQL Slammer worm.

Ultimately, the study shows that the phenomena of denial-of-service attacks by saturation have been recurrent over the past two years. This news is significant enough to demonstrate that this phenomenon must be addressed. Yet this is not a new occurrence.

Other phenomena, such as routing manipulation, have the same consequences for users, like when Pakistan Telecom hijacked YouTube addresses.

Good IT hygiene

Unfortunately, there is no surefire form of protection against these attacks. In the end, it comes down to an issue of cost of service and the amount of resources made available for legitimate users.

The “big” service providers have so many resources that it is difficult for an attacker to catch them off guard.

Still, this is not the end of the internet, far from it. However, this phenomenon is one that should be limited. For users, good IT hygiene practices should be followed to limit the risks of their computer being compromised, and hence used to participate in this type of attack.

It is also important to review the type of protection outsourced service suppliers have established, to ensure they have sufficient capacity and means of protection.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of Department Networks and Telecommunications services, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation.

 


Cyrating: a trusted third-party for cybersecurity assessment

Cyrating, a startup incubating at ParisTech Entrepreneurs, provides organizations with a service for assessing their cybersecurity performance and efficiency. By positioning itself as a trusted third-party, it meets companies’ need for an objective analysis of their cyber risk. The service also allows companies to assess their position relative to competitors.

 

In the cybersecurity sector, Cyrating intends to play a role that organizations often ask for but which, until now, has never been provided: that of a trusted third-party. The startup, which has been incubating at ParisTech Entrepreneurs since last September, offers to assess the cybersecurity performance of public and private companies. The rating they receive allows them to position themselves relative to their competitors, as well as to define areas for improvement and determine the cybersecurity level of their subsidiaries and suppliers.

Regardless of the type of company, the startup bases its assessment on the same criteria. This results in objective ratings that are not dependent on the organization’s size or structure. “For example, we look at the level of protection for domain names, company websites, email services…” explains François Gratiolet, co-founder of Cyrating. He calls these criteria “facts” and they are supplemented by an analysis of “events” such as a data breach or the hosting of malware on the internal server.

Cyrating processes a set of observable data with the aim of uncovering these facts and events related to the organization’s cybersecurity. They are then measured against best practices in order to obtain a rating. Based on assessment algorithms, metrics and ratings are automatically calculated by category. The organizations evaluated by Cyrating therefore obtain a clear view of their efficiency in a variety of cybersecurity issues, in addition to the overall rating. This enables them to identify the measures they must immediately implement to improve their protection and optimize their allocation of financial and human resources.
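Cyrating does not disclose its assessment algorithms, so the following sketch is only a generic illustration of the idea described above: per-category scores measured against best practices are aggregated into an overall rating. The categories, weights, scores and grading scale are hypothetical.

```python
# Purely illustrative aggregation of per-category cybersecurity scores into
# an overall rating. Categories, weights and grade boundaries are
# hypothetical placeholders, not Cyrating's actual methodology.

CATEGORY_WEIGHTS = {"domain names": 0.25, "websites": 0.25,
                    "email services": 0.25, "incidents": 0.25}

def overall_rating(category_scores):
    """Weighted average of per-category scores (0-100), returned as a letter grade."""
    score = sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())
    for grade, floor in [("A", 90), ("B", 75), ("C", 60), ("D", 40)]:
        if score >= floor:
            return grade, score
    return "E", score

print(overall_rating({"domain names": 95, "websites": 80,
                      "email services": 70, "incidents": 60}))
# -> ('B', 76.25)
```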

Unlike auditing and consulting firms, Cyrating’s service does not require any intervention in the organizations’ departments or offices. There is no need to install any software or equipment. Furthermore, the service is based on a subscription system. The rating is ongoing throughout the entire subscription period. Therefore, as they track the changes in their rating, organizations can immediately observe the impact of their actions.

The startup is the first of its kind in Europe, and few startups offer this type of service at a global level. “It’s a business that is booming in the United States,” says François Gratiolet. This early entry into the European market is a serious advantage for Cyrating, whose business relies on a powerful, scalable platform: the longer the company has been assessing organizations, the more attractive its rating system becomes. The startup officially launched its business in Lille in January 2018, at the International Cybersecurity Forum (FIC)—the largest European trade show in the sector. Over the course of its development and the creation of its use cases—still very recent, since the startup is only a few months old—it has already assessed hundreds of companies. “A year from now we expect to have rated over 50,000 organizations,” the co-founder predicts.

The first businesses to be won over by Cyrating’s services were large and intermediate-sized companies. “They see the opportunity to measure the performance of their suppliers and subsidiaries, and optimize their audit cycles,” François Gratiolet explains. But insurance providers could also be interested in this service, as well as agencies that want to purchase data blocks for statistical purposes. By positioning itself as a trusted third-party, the startup could quickly become a key player in cybersecurity in France and Europe.



The brain: seeing between the fibers of white matter

The principle behind diffusion imaging and tractography is to explore how water diffuses through our brain in order to study the structure of neurons. Doctors can use this method to improve their understanding of brain disease. Pietro Gori, a researcher in image processing at Télécom ParisTech, has just launched a project called Neural Meta Tracts, funded by the Emergence program at DigiCosme. It aims to improve the modelling, visualization and manipulation of the large amounts of data produced by tractography. This may considerably improve the analysis of white matter in the brain and, in doing so, allow doctors to more easily pinpoint the morphological differences between healthy and sick patients.

 

What is the goal of the Neural Meta Tracts project?

Pietro Gori: The project stems from my past experience. I have worked in diffusion imaging, which is a non-invasive form of brain imaging, and tractography. This technique allows you to explore the architecture of the brain’s white matter, which is made up of bundles of several million neuron axons. Tractography allows us to represent these bundles in the form of curves in a 3D model of the brain. It is a very rich method which provides a great deal of information, but this information is difficult to visualize and make use of in digital calculations. Our goal with Neural Meta Tracts is to facilitate and accelerate the manipulation of these data.

Who can benefit from this type of improvement to tractography?  

PG: By making visualization easier, we are helping clinicians to interpret imaging results. This may help them to diagnose brain diseases more easily. Neurosurgeons can also gain from tractography in planning operations. If they are removing a tumor, they want to be sure that they do not cut fibers in the critical areas of the brain. The more precise the image is, the better prepared they can be. As for improvements to data manipulation and calculation, neurologists and radiologists doing research on the brain are highly interested. As they are dealing with large amounts of data, it can take time to compare sets of tractographies, for example when studying the impact of a particular structure on a particular disease.

Could this help us to understand certain diseases?

PG: Yes. In psychiatry and neurology, medical researchers want to compare healthy people with sick people. This enables them to study differences which may either be the consequence or the cause of the disease. In the case of Alzheimer’s, certain parts of the brain are atrophied. Improving mathematical modeling and visualization of tractography data can therefore help medical researchers to detect these anatomical changes in the brain. During my thesis, I also worked on Tourette syndrome. Through my work, we were able to highlight anatomical differences between healthy and sick subjects.

How do you improve the visualization and manipulation of tractography data?

PG: I am working with Jean-Marc Thiery and other lecturers and researchers at Télécom ParisTech and the École Polytechnique on applying differential geometry techniques. We analyze the geometry of bundles of neuron axons, and we try to approximate them as closely as possible without losing information. We are working on algorithms which will be able to rapidly compare two sets of tractography data. When we have similar sets of data, we try to aggregate them, again trying not to lose information. It is important to realize that if you have a database of a cohort of one thousand patients, it can take days of calculation using very powerful computers to compare their tractographies in order to find averages or main variations.
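To illustrate the kind of geometric comparison Pietro Gori describes, here is a minimal sketch (not code from the Neural Meta Tracts project) that compares two fibers, each represented as a 3D polyline, with a symmetric mean closest-point distance.

```python
import numpy as np

def mean_closest_point_distance(fiber_a, fiber_b):
    """Symmetric mean closest-point distance between two (N, 3) and (M, 3) polylines."""
    d = np.linalg.norm(fiber_a[:, None, :] - fiber_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two toy fibers: a straight 3D segment and a copy shifted by 1 unit along y.
fiber_a = np.stack([np.linspace(0, 10, 50), np.zeros(50), np.zeros(50)], axis=1)
fiber_b = fiber_a + np.array([0.0, 1.0, 0.0])
print(mean_closest_point_distance(fiber_a, fiber_b))   # -> 1.0
```

Comparing every pair of fibers in two tractograms with such a brute-force distance scales quadratically with the number of curves, which is one reason why the approximation and aggregation techniques described in the interview are needed.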

Who are you collaborating with on this project to obtain the tractography data and study the needs of practitioners?

PG: We use a high-quality freely-accessible database of healthy individuals, called the Human Connectome Project. We also collaborate with clinicians in the Pitié Salpêtrière, Sainte-Anne and Kremlin-Bicêtre hospitals in the Paris region. These are radiologists, neurologists and neurosurgeons. They provide their experience of the issues with which they are faced. We are initially focusing on three applications: Tourette syndrome, multiple sclerosis, and surgery on patients with tumors.



What are fine particles?

During peak pollution events, everyone is talking about them, and fine particles are often accused of being toxic. Unfortunately, they are not only present during episodes of high pollution. Véronique Riffault, a researcher in atmospheric sciences at IMT Lille Douai, goes back over the basics of fine particles to better understand what they are all about.

 

What does a fine particle look like?

Véronique Riffault: They are often described as spherical in shape, partly because scientists speak of diameter to describe their size. In reality, they come in a variety of shapes. When they are solid, they can indeed sometimes be spherical, but also cubic, or even made up of aggregates of smaller particles of different shapes. Some small fibers are also fine particles; this is the case with asbestos and nanotubes. Fine particles may also be liquid or semi-liquid: when their chemical nature makes them soluble, they dissolve when they meet droplets of water in the atmosphere.

How are they created?

VR: The sources of fine particles are highly varied, and depend on the location and the season. They may be generated directly by human processes, which are generally linked to combustion activities. This is true of residential heating using wood burning, road traffic, industry, etc. There are also natural sources: sea salt in the oceans or mineral dust in deserts, but these particles are usually bigger. Indirectly, they are also created by condensation of gases or by oxidation when atmospheric reactions make volatile organic compounds heavier. These “secondary” emissions are highly dependent on environmental conditions such as sunshine, temperature, etc.

Why do we hear about different sizes, and where does the term “PM” come from?

VR: Depending on their size, fine particles have different levels of toxicity. The smaller they are, the deeper they penetrate the respiratory system. Above 2.5 microns [1 micron = 1 thousandth of a millimeter], the particles are stopped quite effectively by the nose and throat. Below this, they go into the lungs. The finest particles even reach the pulmonary alveoli and pass into the bloodstream. In order to categorize them, and to establish the resulting regulations, we distinguish fine particles by specific names: PM10, PM2.5, etc. The number refers to the upper size limit in microns, and “PM” stands for Particulate Matter.
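As a small illustration of this naming convention, the sketch below assigns a particle of a given diameter to the PM fractions mentioned above; the diameters are arbitrary example values.

```python
# "PMx" counts all particles whose diameter is at most x microns.

def pm_fractions(diameter_um):
    """Return the PM fractions a particle of the given diameter falls into."""
    return [label for label, cutoff in [("PM10", 10.0), ("PM2.5", 2.5)]
            if diameter_um <= cutoff]

print(pm_fractions(0.8))    # -> ['PM10', 'PM2.5']  small enough to reach the lungs
print(pm_fractions(5.0))    # -> ['PM10']           largely stopped by the nose and throat
print(pm_fractions(20.0))   # -> []                 too coarse to count as a fine particle
```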

How can we protect ourselves from fine particles?

VR: One option is to wear a mask, but its effectiveness depends greatly on the way in which it is worn. When badly positioned, it is useless. A mask can also give the wearer a false sense of security during peak pollution events. The risk is that they feel protected and carry on doing sport, for example, which leads them to hyperventilate and increases their exposure to fine particles. The simplest measure is not to produce fine particles in the first place. Measures to reduce traffic can be effective if more than just a fraction of vehicles are immobilized. Authorities can take measures to restrict agricultural spreading: fertilizer produces ammonia, which combines with nitrogen oxides to create ammonium nitrates, which are fine particles. People also need to be made aware that they should not burn green waste, such as dead leaves and branches, in their gardens, but take it to recycling centers, and that they should reduce their use of wood-fired heating during peak pollution events.

Also read on I’MTech: Particulate matter pollution peaks: detection and prevention

Are fine particles dangerous outside of peak pollution events?

VR: Even outside of peak pollution events, there are more particles than there should be. The only European regulation set on a daily basis is for PM10 particles. For PM2.5 particles, the limit is annual: less than 20 micrograms per cubic meter on average. This poses two problems. The World Health Organization (WHO) recommends a threshold of 10 micrograms per cubic meter, and this amount is regularly exceeded at several sites in France. The only thing helping us is that we are lucky to have an oceanic climate which brings rain, and precipitation removes the particles from the atmosphere. On average over a year, we remain below the limit, but on a daily basis we could be breathing in dangerous amounts of fine particles.



Bitcoin: the economic issues at stake

Patrick Waelbroeck, Institut Mines-Télécom (IMT)

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]C[/dropcap]ryptocurrencies like Bitcoin only have value if all the participants in the monetary system view them as currency. The currency must therefore be rare, in the sense that it must not be easily copied (a problem equivalent to counterfeit banknotes for traditional currencies).

This requirement is met by the Bitcoin network, which ensures that no double-spending occurs. In addition to the value linked to the acceptance of the currency, Bitcoin owes its value to a variety of economic mechanisms linked to the analysis of the supply of and demand for Bitcoins.

Bitcoin supply

The issuance of currency in the primary market

The creation of Bitcoins is determined by the mining process: each mined block generates new Bitcoins. The protocol stipulates that the reward per mined block is halved every 210,000 blocks, so that the total amount of Bitcoins in circulation (excluding those that are lost) converges towards 21 million. This monetary rule is monitored by the Bitcoin Foundation consortium, as we will discuss later in this article. The monetary rule can therefore be modified to respond to fluctuating market conditions, which can result in a hard fork.
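The 21 million cap follows directly from this halving rule. A quick check, assuming the well-known initial reward of 50 Bitcoins per block (a protocol parameter not stated in this article):

```python
# Sum the geometric series of block rewards: 210,000 blocks per halving
# period, starting (by assumption) at 50 BTC per block and halving each time.

def total_supply(initial_reward=50.0, blocks_per_halving=210_000, halvings=64):
    total, reward = 0.0, initial_reward
    for _ in range(halvings):
        total += blocks_per_halving * reward
        reward /= 2.0
    return total

print(f"{total_supply():,.0f} BTC")   # -> 21,000,000 BTC (asymptotic total)
```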

Electricity is the main component (over 90% according to current estimates) of a mining farm’s total costs. Böhme et al. (2015) assessed the Bitcoin network’s consumption at over 173 megawatts of electricity on a continuous basis. This represented approximately 20% of a nuclear power plant’s production and amounted to 178 million dollars per year (based on residential electricity prices in the United States). This amount may seem high, but Pierre Noizat considers that it is no more than the annual electricity cost of the global network of ATMs (automatic teller machines), estimated at 400 megawatts. Once we factor in the costs involved in manufacturing and putting currency and bank cards into circulation, the Bitcoin network’s electricity cost is not as high as it seems.

However, this cost may increase significantly as the network continues to develop, due to a negative externality inherent in mining: each miner that invests in new hardware increases his or her own marginal revenue, but at the same time increases the overall mining cost, since the difficulty increases with the number of miners and their computation capacity (hash power).

The quest for bitcoin. xlowmiller/VisualHunt

Therefore, for the Bitcoin network, the difficulty of the cryptography problem that must be solved and approved by a proof-of-work consensus increases along with the network’s overall hash power. There is therefore a risk of over-investing in the mining capacity, since individual miners do not consider the negative effect on the entire network.

It is important to note that increasing the mining difficulty reduces mining incentives and increases the verification time, and thus reduces the efficiency of the blockchain itself. This mechanism brings to mind the tragedy of the commons, in which shared resources (here, hash power) are depleted and end up maintained by only a handful of farms and pools, thereby nullifying the very principle of the public blockchain, which is decentralized.
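A toy calculation makes this externality visible: a miner’s expected share of the rewards is its hash power divided by the network total, so when one miner adds hardware, its own share rises while everyone else’s falls. The numbers below are purely illustrative.

```python
# Expected share of block rewards is proportional to hash power.

def expected_share(own_hash, total_hash):
    return own_hash / total_hash

total = 100.0                           # arbitrary hash-power units
print(expected_share(10, total))        # -> 0.10 for a miner holding 10 units

total += 10                             # that miner doubles its capacity
print(expected_share(20, total))        # -> ~0.18: its own share rises...
print(expected_share(10, total))        # -> ~0.09: ...while every other miner's share falls
```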

There is therefore a risk that mining capacities will become greatly concentrated in the hands of a small group of players, thus invalidating the very principle of the blockchain. This trend is already visible today.

In the end, the supply of Bitcoins, and therefore the monetary creation on the primary market, depend on the cost of electricity and the difficulty associated with the mining process, as well as the governance rules pertaining to the Bitcoin price generated by a mined block.

The Bitcoin’s value on the secondary market

The Bitcoin can also be bought and sold on an exchange platform. In this case, the Bitcoin’s value is similar to a financial investment in which the financial players anticipate the prospect of financial gain and factors that could cause the Bitcoin to appreciate.

Bitcoin demand

The demand for cryptocurrency depends on several user concerns that are addressed below, starting with the positive factors and ending with the risks.

Financial privacy

Bitcoin accepted here. jurvetson on Visual Hunt

Governments are increasingly limiting the use of cash to demonstrate their efforts to counter money-laundering and the development of black markets. Cash is the only means of payment that is 100% anonymous. Bitcoin and other cryptocurrencies come in second, since the pseudonymous system used by Bitcoin effectively conceals the identity of the individuals making the transactions. Furthermore, other cryptocurrencies, such as Zcash, go a step further by masking all the metadata linked to a transaction.

Why do people want to use an anonymous payment method? For several reasons.

First of all, this type of payment method prevents users from leaving traces that could be used for monitoring purposes by the government, employers, and certain companies (especially banks and insurance companies). Companies and banks use price discrimination practices that can sometimes work against consumers. Payment traces also allow companies to push new commercial offers and to engage in targeted advertising, which some see as a nuisance.

Secondly, paying with an anonymous payment method limits “sousveillance” (or inverse surveillance) by close friends and family, for example when a payment is made using a joint account.

Thirdly, making payment under a pseudonym makes it possible to maintain business confidentiality.

Fourthly, like privacy protection in general, anonymity in certain transactions (for example healthcare products or hospital visits) helps build trust in society, and is therefore of economic value. By enabling pseudonymity, Bitcoin therefore brings added value in these various instances.

The Bitcoin works in times of crisis, thus avoiding capital controls

The Bitcoin emerged right after the financial crisis of 2008, a period that demonstrated the power of governments and central banks to control cash withdrawals and outstanding capital. There are very few means available for circumventing these two institutional constraints; the Bitcoin is one such means. Even if cash withdrawals are prohibited, Bitcoin owners can still pay using their private key.

The Bitcoin imposes discipline on governments

The Bitcoin (and the same is true for other cryptocurrencies) can be considered a monetary alternative that is not controlled by a central bank. Some economists, like F. Hayek, see these alternative currencies, which compete with the official currency, as a means of imposing discipline on governments that might be tempted to use inflation to finance their debt. If this happened, consumers and investors would no longer use the official currency and would instead purchase the alternative currency, creating deflationary pressure on the official currency.

Security-related network externalities

The level of security increases with the number of network nodes, since each additional node increases the computation power required to breach the blockchain’s security (through a 51% attack, double-spending, or a denial of service, DoS). Furthermore, a DoS attack is especially hard to stage, since it is difficult to determine which node to target. Positive network externalities therefore exist: Bitcoin’s value increases with the number of nodes participating in the network.

Indirect network externalities related to payment method

Bitcoin is a payment method, just like cash, debit cards and Visa/Mastercard/American Express cards. It can therefore be analyzed using the multi-sided market theory, which models situations where two groups of economic players benefit from positive cross externalities. The consumer who chooses a payment method for a purchase is happy when it is accepted by the merchant; in the same way, merchants are eager to accept a payment method that customers possess. Consequently, the dynamics of multi-sided markets result in virtuous cycles that can experience a slow inception phase, followed by a very fast deployment phase. If Bitcoin experienced this type of phase, its value would enter a period of acceleration.

A Bitcoin bubble? duncan on Visual Hunt, CC BY-NC

The risks

Among the factors that reduce the demand for Bitcoins, the most prominent are the risks related to rules and regulations. On the one hand, a State could order that the capital gains generated from buying and selling Bitcoins be declared. On the other hand, Bitcoins can be used in regulated sectors (like the insurance and bank sectors) and their use could therefore be regulated as well. Finally, there is always the risk of losing the data on the hard drive where the private key is stored, resulting in the loss of the associated Bitcoins, or a State could force access to private keys for security reasons.

However, the greatest risk involves the governance of the Bitcoin network.

In the event of a disagreement on how the communication protocol should develop, there is a risk that the network could split into several networks (hard fork) with currencies that would be incompatible with each other. The most important issue involves the choice of the consensus rule for validating new blocks. A consensus must be reached on this consensus, which the technology itself appears unable to provide.

Conclusion

The Bitcoin’s economic value depends on many positive economic factors that could propel the cryptocurrency into a period of sustained growth, which would justify the current surge in its prices in the exchange markets. However, the risks related to the network’s governance must not be overlooked, since trust in this new currency depends on it.

Patrick Waelbroeck, Professor of Economics at Télécom ParisTech, Institut Mines-Télécom (IMT)

The original version of this article (in French) was published on The Conversation.


 


A third ERC grant in 3 years at EURECOM

Getting a grant from the European Research Council is not an easy task but this is what Davide Balzarotti, Professor in the Security Department, has just accomplished. He is the third EURECOM professor to obtain an ERC grant in the past 3 years.

 


Davide, you just got an ERC Consolidator grant, one of the most prestigious research grants in Europe. What is your feeling today?

Everybody knows it is one of the most selective grants in Europe, so I’m obviously very proud of that. It is definitely a major step in my career. It is an important recognition for the efforts I have made to get this grant and for the relevance of the project I presented. Plus, I was told there are only 329 researchers across Europe – and 38 researchers in France – who got this grant this year, so I am particularly honoured to be one of them. I am also very happy for EURECOM since it has been awarded one ERC grant every year for the past 3 years… Considering there are only 24 professors, it is a real success!

 

Will this grant change your day-to-day life as a researcher at EURECOM?

I am sure it will! In different ways even. First, I won’t have to worry about getting money for the next few years. The Consolidator grant is a five-year grant that represents €2 million. This grant is not only generous, it also offers recognition and visibility. In fact, the two other ERC grantees at EURECOM – David Gesbert & Petros Elia – explained to me that I will certainly be more solicited by the research community. It will also give me a lot of independence and creative freedom to conduct the project for which I got this grant: BITCRUMBS – Towards a Reliable and Automated Analysis of Compromised Systems. I will dedicate 70% of my time to the project but I can manage it the way I want depending on the people I will work with. I actually need to hire a team of seven researchers – five Ph.D. students and two post-docs – and one engineer. On top of that, I will be involved in the EURECOM ERC committee that helps scientists benefit from the experience of those who have already received such grants. This committee actually helped me a lot in writing my proposal, so I look forward to helping my colleagues in return.

 

BITCRUMBS seems to be a ground-breaking project in the computer security area. Could you explain its main objective?

BITCRUMBS is actually a brand new way of addressing computer security issues. And this ERC grant will help me pursue very ambitious research objectives with this project, which covers a wide range of digital security areas. I hope our results will change the way digital security is managed in the future. The main objective of BITCRUMBS is to rethink what we call “incident response” (IR). It is clear that research on prevention and detection helps make devices more secure, but since a 100% secure system does not exist, improving IR can be very useful too. Incident response addresses the aftermath of a digital security breach that, if not handled properly, can lead to a data breach or a system collapse. We all know the risk of security breaches is now higher than ever. Attackers frequently break into corporate networks, government services and even critical infrastructures. Almost half of computers worldwide are infected by malware. A voting machine can be altered to rig the results of an election, a connected car can be hacked to drive off a cliff, or a security camera can be controlled over the Internet to spy on our houses and our families. The problem is that we do not have the tools to analyze these attacks and understand their causes! This has to change.

With BITCRUMBS, I want to give investigators the possibility to quickly verify the state of compromised systems and help citizens trust the result of computer forensic investigations. In the future, I believe we should design digital systems the way we design airplanes – secure against crashes but also equipped with black boxes to collect all the data required to support an incident investigation.

 

What is your strategy to reach this objective?

I want to propose a more scientific and comprehensive methodology to analyse compromised systems. This should be done in three steps. The first part of the project will focus on measuring the effectiveness and accuracy of the techniques currently used to analyse compromised systems, and on assessing the reliability of their data sources. This will help increase the theoretical and scientific foundations of IR techniques. The second part of the project will focus on the design and implementation of new automated analysis techniques able to cope with advanced threats and the analysis of IoT devices. These techniques will have to be robust, scalable and generic – capable of working on different classes of devices. Of course, results given by these new techniques will need to be reliable and based on a solid theoretical foundation. The last step will introduce a new forensics analysis by design methodology. My goal is to provide a set of guidelines for the design of future systems and software – to help developers provide the required information to support the analysis of compromised systems.

 

What about the scientific and technological impacts?

I hope research conducted in BITCRUMBS will have a long-lasting impact – not only scientific – on the area of incident response and on the way we analyze compromised systems. First, BITCRUMBS will bring a scientific foundation to IR, based on repeatable experiments and precise measurements of the reliability of data and techniques used in current investigations. It will also have a practical impact since it will produce open source tools and improve existing software that are regularly used by companies and law enforcement to deal with computer attacks. Last but not least, BITCRUMBS will have an impact on our society. Improving the IR process will increase the trust that citizens have in the result of digital investigations. In order to clearly show the impact of BITCRUMBS in different fields and scenarios, we will address our objectives using real case studies borrowed from traditional computer software and embedded systems.

 

What are the main challenges you will be facing in BITCRUMBS?

Like any very broad project, BITCRUMBS’ success depends on a lot of factors. From a scientific point of view, it mainly depends on combining very different research skills, including memory forensics, embedded systems security, malware and binary analysis, distributed systems, and operating system design and defenses. I have considerable experience in each of these research areas, but in order to reduce the risks, I have already secured key collaborations with leading universities and security companies so I can find research partners from different areas to work with. The other potential risk is the possible failure to develop some of the techniques I have envisioned. It is actually a very common risk in research projects that introduce novel solutions. For this reason, for each disruptive approach I would like to develop, I have also thought of less risky techniques for which I have experience and have already conducted some investigation to evaluate the feasibility of a few ideas. But above all, one of the main challenges will be to find motivated postdocs in digital security willing to work in Europe. Most PhD students go to the US for their postdoc or are hired by security companies offering good conditions and interesting opportunities. I hope BITCRUMBS’ challenges and potential results can attract some of them.

[divider style=”dotted” top=”15″ bottom=”15″]

The original version of this article was published on the EURECOM website.

[divider style=”dotted” top=”15″ bottom=”15″]



DessIA: Engineering of the Future with Artificial Intelligence

What is the best architecture for the gearbox of a hybrid car? If an engineer had to answer that question, he would consider a handful of possibilities based on what already exists on the market. The startup DessIA takes a whole different approach: its artificial intelligence algorithms enable it to consider billions of different architectures to find the optimum configuration. The software developed by the young company digitally builds all the possible structures from the necessary components. The performance and feasibility of the architectures built in this way are then assessed, so the design space is explored intelligently to reduce the number of architectures that have to be physically tested (see the sketch below). The automated, smart sorting keeps only the best architectures. Beyond being able to analyze considerably more models than a human could, DessIA’s advantage is that the layouts created from its components are radically different from what already exists. “When we present our approach to manufacturers, many of them say this is exactly the way they want to work, but they have no idea where to start,” say Pierre-Emmanuel Dumouchel and Steven Masfaraud, co-founders of the startup, incubated at ParisTech Entrepreneurs.
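DessIA’s software and algorithms are proprietary, so the sketch below is only a generic illustration of the enumerate, evaluate and prune loop described above, applied to a made-up gearbox example. The components, the feasibility rule and the cost model are hypothetical placeholders.

```python
# Generic design-space exploration: enumerate candidate architectures from
# component choices, discard infeasible ones, score the rest and keep the best.
from itertools import product

GEAR_COUNTS = [4, 5, 6]
SHAFT_LAYOUTS = ["parallel", "coaxial"]
CLUTCH_TYPES = ["dry", "wet"]

def feasible(arch):
    gears, layout, clutch = arch
    return not (layout == "coaxial" and gears == 6)       # placeholder rule

def cost(arch):
    gears, layout, clutch = arch                           # placeholder cost model
    return gears + (0.5 if layout == "coaxial" else 1.0) + (0.3 if clutch == "wet" else 0.0)

candidates = [a for a in product(GEAR_COUNTS, SHAFT_LAYOUTS, CLUTCH_TYPES) if feasible(a)]
best = sorted(candidates, key=cost)[:3]                    # keep the lowest-cost architectures
print(best)
```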

For now, DessIA specializes in subjects related to the transmission of mechanical power. It can work both on gearboxes for cars and on systems for transferring energy between a helicopter’s turbines and blades. The field itself is vast, and reflects the experience of its two founders, former employees of PSA. The applications can even include the mechatronic systems of complex electrically motorized mechanisms. The startup limits itself to this field because the algorithms’ work must be guided by a thorough knowledge of the sector. Still, the two founders are not ruling out the possibility of someday moving towards assisting in the design of electrical or hydraulic systems. But not until a few years from now.

By remaining focused on mechanical systems, the young company has opened up many opportunities. DessIA’s objective is to go beyond merely optimizing architectures: once the best structure has been determined, the ideal outcome would be a very simple way of obtaining a 2D industrial drawing, or even the 3D CAD model to be imported directly into computer-aided design software. The two founders intend to achieve this by the end of 2018. If they succeed, they could redefine how mechanical systems are designed at the industrial level, from the initial design thinking to the drawing of the part.

 

[divider style=”normal” top=”20″ bottom=”20″]

Pierre-Emmanuel Dumouchel worked at PSA for 10 years and supervised Steven Masfaraud’s thesis for three years; the two then decided to partner to create DessIA. They aim to simplify the design process for engineers through a breakthrough approach based on artificial intelligence.

[divider style=”normal” top=”20″ bottom=”20″]