
Trust, a tool for reducing technological risks?

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom brochure: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

A sociologist with IMT Atlantique, Sophie Bretesché specializes in the risks associated with technology. She has worked extensively on the issue of redeveloping former uranium mines in mainland France, and believes that trust between the stakeholders at each mine site is essential to better risk prevention. This emblematic case study is presented at a time when technological development seems out of control.

“Private property.” Those who enjoy walking in the woods are very familiar with this sign. Hanging there on a metal fence, it dashes any hopes of continuing further in search of mushrooms. Faced with this adversity, the walker will turn around, perhaps imagining that a magnificent forgotten castle lies hidden behind the fence. However, the reality is sometimes quite different. Although privately owned forest land does exist, some forests are home to former industrial sites with a heavy past, under the responsibility of operating companies. In Western France, for example, “private property” signs have replaced some signs that once read “radioactive waste storage.”

Inconvenient witnesses to the uranium mines that flourished in the territory from 1950 to the 1990s, these signs were removed by the private stakeholders in charge of operating and rehabilitating the sites. Forgetting is often the tool of choice for burying the debate on the future of this local heritage. Yet seeking to hide the traces of uranium mining is not necessarily the best choice. “Any break in activities must be followed by historical continuity,” warns Sophie Bretesché, sociologist with IMT Atlantique and director of the Emerging Risks and Technology chair. In the case of the uranium mines, the stakeholders did not take the traces left behind by mining into account: they simply wanted them to be forgotten, at a time when nuclear energy was controversial. The monitoring system, based on measuring instruments, was implemented on the sites without any continuity with the land’s history and past. According to the researcher, the problem with this type of risk management is that it can lead to “society mistrusting the decisions that are taken. Without taking the past and history into account, the risk measurements are meaningless and the future is uncertain.”

This mistrust is only reinforced by the fact that these former mines are not without risks. On certain sites, water seepage has been observed in nearby farming fields. There have also been incidents of roads collapsing, destroying trucks. When such events take place, citizens voice their anger, take action and launch counter-investigations that inevitably bring the land’s uranium-linked past to light. Development projects for the sites are initiated without taking the presence of the former uranium mines into account. Locally, relations between public authorities, citizens and managing companies deteriorate in the absence of an agreement on the nature of the heritage and impacts left behind by uranium mining. The projects are challenged, and some disputes take several years to be resolved.

 

Trust for better risk prevention

There are, however, instances in which the mining heritage was taken into account from the start. Sophie Bretesché takes the example of a former site located 30 km from Nantes. When the question came up concerning “sterile” or waste rock—rocks that were extracted from the mines but contained amounts of uranium too low for processing—citizens from the surrounding areas were consulted. At their request, a map was created of the locations linked to the mining industry, explaining the industry’s history and identifying the places where waste rock is still stored. “New residents receive a brochure with this information,” the sociologist explains. “Though they could have tried to sweep this past under the rug, they chose transparency, clearly informing newcomers of the locations linked to the region’s mining history.”

This example is emblematic of the role local culture can play in taking these risks into account. In the case of this mine, research was initiated to report on the site’s past. This initiative, carried out with former miners, elected officials and environmental organizations, made it possible to write the site’s history based on the citizens’ knowledge. In order to prevent “history from stuttering”, the researcher traced the mining operations from the initial exploration phase to the present day. “It’s another way of addressing the risks,” Sophie Bretesché explains. “It allows citizens to get involved by recognizing their local knowledge of the sites, and raises the issue of heritage in general. It’s a different way of conducting research, by developing participatory science programs.”

From an institutional standpoint, trust is established when the various economic stakeholders involved in a mine site agree to work together. Sophie Bretesché cites the emblematic example of a former uranium mine bordering a quarry. The Local Information and Monitoring Commission (CLIS), chaired by the mayor of the municipality where the site was located, brought together the former site operator and the quarry operator. “The two industrial operators take turns presenting the findings related to their activities. More than an institutional monitoring program, this initiative results in vigilance at the local level. This continuity with the industrial past, maintained by the presence of the quarry, is what enables this,” she explains. “The debate becomes more positive and results in better regulation throughout the territory.”

 

Unpredictable technology

The trust factor is all the more crucial given the unpredictable nature of the risks related to science and technology. “The mines are emblematic of the difference in time scales between the use of a technology and the management of its consequences,” Sophie Bretesché observes. Over a 40-year period, uranium mines sprang up across mainland France and then shut down, yet the consequences and risks of these mines will continue for hundreds of years. Transparency and the transmission of information, already important in ensuring unity among the population at a given time, will become even more important in ensuring the population’s future resilience in the face of risks spanning several generations.

In this regard, the mines are far from being an isolated example. The work Sophie Bretesché’s chair has conducted at IMT Atlantique is full of them. “Email is a form of technology that is integrated into companies, yet it took society 30 years to realize it could have negative impacts on organizations,” she points out in passing. While uranium mines and email inboxes are very different things, the same approach to risk prevention applies to both. In both cases, “the communities that are directly exposed to the risks must be taken into account. Local culture is vital in reducing risks; it must not be left out.”

 


Is blockchain the ultimate technology of trust?

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom booklet: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

Due to its decentralized nature, blockchain technology is fueling hopes of building a robust system of trust between economic stakeholders. But although it offers unique advantages, it is not perfect. As with any technology, the human factor must be considered. That factor alone warrants caution toward the hype surrounding the blockchain, and vigilance regarding the trade-offs being made by policymakers.

 

“The blockchain is a tool of distrust toward governments.” According to Patrick Waelbroeck, an economist with Télécom ParisTech, it is no coincidence that this technology was first introduced in 2008. In the context of the global financial crisis, the release of Bitcoin, the first blockchain, testified to citizens’ loss of trust in state monetary management. It must be said that the past decade has been particularly hard on centralized financial systems, in which transactions are controlled by institutions. In Greece, residents faced daily ATM withdrawal limits of a few dozen euros. In India, bartering is now preferred over an unstable currency.

In light of these risks, the blockchain solution is viewed as a more trustworthy alternative. Based on a decentralized framework, it theoretically offers greater security and transparency (read our article What is a Blockchain?). Furthermore, blockchains generating cryptocurrency offer parallel solutions for financial transactions. “Friedman and Hayek, both winners of the Nobel Prize in Economics, were supportive of alternative currencies such as this, because they offered the only means of preventing governments from financing their debts via inflation,” Patrick Waelbroeck points out. In the event of inflation, i.e. a drop in the value of the state currency, citizens would turn to alternative forms of currency.

In addition to this safeguard aspect, blockchain technology, by its very nature, prevents falsification and theft. It therefore naturally appears to be a solution for restoring trust among economic stakeholders, and resolving one of the major problems currently facing the markets: information asymmetry. “We are living in the age of the ‘black box’ society, in which information is difficult to access,” Patrick Waelbroeck observes. “Visibility has been lost regarding market realities. We have an example of this in the digital economy: users do not know what is done with their personal data.” An inquiry led by the Personal Data Values and Policy Chair in cooperation with Médiamétrie shows an increase in distrust among internet users. “The risk is that consumers could withdraw from these markets, resulting in their disappearance,” the economist explains.
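This tamper-evidence comes from chaining each record to its predecessor through cryptographic hashes. The minimal Python sketch below is a toy illustration of that property only: it is not how Bitcoin or Ethereum are actually implemented, and it includes no consensus mechanism.

```python
import hashlib
import json

def block_hash(index, prev_hash, data):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "prev": prev_hash, "data": data}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain each record to its predecessor through its hash."""
    chain, prev = [], "0" * 64  # arbitrary genesis value
    for i, data in enumerate(records):
        h = block_hash(i, prev, data)
        chain.append({"index": i, "prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Return True only if every block still matches its recorded hash and link."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["index"], prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(verify(chain))             # True
chain[0]["data"] = "Alice pays Bob 500"
print(verify(chain))             # False: the tampering is immediately visible
```

In a distributed ledger, many participants hold copies of the chain and recompute these hashes, which is why altering a past transaction without being noticed is so difficult.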

To top it all off, the most commonly used public blockchains, such as Ethereum and Bitcoin, have what economists refer to as a positive network externality: the more users a blockchain has, the more robust it is. Scaling up therefore provides greater security and increases users’ trust in the technology.

 

Humans behind the blockchain

The hopes for the blockchain are therefore great, and rightly so. Yet they must not cause us to forget that, as with any form of technology, it is developed and overseen by humans. This is a key point in understanding the technology’s limits, and a reason for vigilance that must be kept in mind. Private blockchains have the advantage of allowing several tens of thousands of transactions to take place per second, because they are not as strict in verifying each transaction as the large public blockchains, like Bitcoin, are. But they are managed by consortiums that define the rules, often in small groups. “These groups can, overnight, decide to change the rules,” Patrick Waelbroeck warns. “Therefore, the question is whether people can make a credible commitment to the rules of governance.”

Public blockchains are not without faults either, faults inherent in the human character of their users. Ethereum owes its growing success to the “smart contracts” it allows users to anchor in its blockchain and have certified by the network. Recently, an instance of smart-contract abuse was reported in which a hacker exploited errors in a contract to receive the equivalent of several tens of thousands of euros. The blockchain’s architecture is not at issue: a human mistake made when the contract was drafted is clearly what allowed this malicious use to occur.

Ethereum users see smart contracts as an option for standardizing the uses of this blockchain, by including ethical constraints, for example. “The idea being that if a smart contract is used for illegal activity, the payment could be blocked,” Patrick Waelbroeck summarizes. But this perspective gives rise to many questions. First of all, how can ethics be translated into an algorithm? How can it be defined, and must it be “frozen” in code? Given that we do not all share the same vision of society, how can we reach a consensus? Even the very method for defining ethical rules appears to spark debate. Not to mention that defining ethical rules implies making the technology accessible to some people and not others. From an ethical standpoint, is it possible to grant or deny access to this technology based on the values the user upholds?

 

Trust and punishment

The operation of large public blockchains, though it may look simple, is built on complex human factors. “Little by little, communities are realizing that despite decentralization, they need trusted human third parties, called ‘oracles,’” the researcher explains. The oracles represent an additional type of actor in the blockchain organization, which increases the structure’s complexity. Theirs is not the only role born out of necessity: the difficulty beginners face in accessing this technology has led to the creation of intermediaries. “Teller” roles have emerged, for example, to make it easier for novices and their assets to access the various ledgers.

The emergence of these new roles shows just how imponderable the human factor is in the development of blockchains, regardless of their size or how robust they are. Yet the more roles there are, the more fragile the trust in the system becomes. “For technology to be just and fair, everyone involved must play the part they are supposed to play,” Patrick Waelbroeck points out. “In a centralized system, the manager ensures that things are going well. But in a decentralized technological system, the technology itself must guarantee these aspects, and include a punitive aspect if the tasks are not carried out correctly.”

From a legal standpoint, private blockchains can be regulated, because the responsible parties are identifiable. A punitive system ensuring users are not taken advantage of is therefore possible. The problem is very different for public blockchains, however. “Due to their international nature, it is unclear who is responsible for these blockchains,” the economist points out. And without any established responsibility, what kind of assurances can users have regarding the strategic decisions made by the foundations in charge of the blockchains? Far from the myths and naive hopes, the issue of trust is far from resolved when it comes to the blockchain. One thing is sure: in the future, this question will give rise to many debates at various levels of government in an attempt to better supervise this technology.


Trust and identity in the digital world

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom booklet: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

What does “trust” mean today? Technology is redefining the concept of trust by placing the issue of identity transparency in the forefront. But according to Armen Khatchatourov, a philosopher with Télécom École de Management and member of the IMT chair Values and policy of personal information, the question is much deeper, and this approach is problematic. It reveals how trust is reduced to mere risk management in our social interactions. This new perception is worth challenging, since it disregards the complexity of what trust truly is.

 

In our digital societies, the issue of trust seems closely linked to identity transparency. In order for us to trust something, must we know who is behind the screen?

Armen Khatchatourov: It seems to me that reducing trust to an issue of identity is very simplistic. This association is increasingly present in our society, and bears witness to a shift in meaning. Trust would mean being able to verify the identity of those we interact with on digital networks—or at least their legitimacy—through institutional validation. This could mean validation by a government system or via technology. Yet this offers a view of trust that is solely an assessment of the risk taken in an interaction, and suggests that increasing trust is, above all, a matter of reducing risks. In my opinion, this approach is very simplistic, in that trust is not such a uniform concept; it includes many other aspects.

 

What are the other aspects involved in trust? 

AK: Work by sociologist Niklas Luhmann shows that there is another form of trust that is rooted in the interactions with those who are close to us. This typically means trust in the social system as a whole, or the trust we build with our friends or family. A child trusts his or her parents for reasons that are unrelated to any risk calculation. Luhmann used two different words in English—“trust” and “confidence”—which are both translated by one word in French: “confiance”. According to Luhmann, this nuance represented a reality: “trust” can be used to describe the type of assurance that, in its most extreme form, can be described as pure risk management, whereas “confidence” is more related to social interactions with those close to us, a type of attachment to society. However, things do not seem as consistent when we consider that both terms can apply to the same relationship. I would tend to use “confidence” in describing the relationship with my friends. But if I decide to create a startup with them, what I experience would be more appropriately described as “trust”. The opposite can also be true, of course, when repeated interactions lead to an attachment to a social system.

 

Does the idea of “trust” take precedence over the concept of “confidence”?

AK: Unfortunately, the difference between these two terms related to trust tends to be lost in our society, and there is a shift towards one standardized concept. We increasingly define trust as recommendations that are combined to form a rating on an application or service, or as a certification label. Economic theory has addressed this through the concept of reducing information asymmetry. Here we see the underlying conceptual framework and the primarily economic notion of risk it is associated with. Incidentally, this form of “trust” (as opposed to “confidence”) is based on opaque mechanisms. Today there is an algorithmic aspect to the recommendations we receive that we are not aware of. The mechanism for establishing this trust is therefore completely different from the way we learn to trust a friend.

 

So, is identity transparency a non-issue in the discussion on trust?

AK: Some people are reluctant to embrace pseudonymity. Yet a pseudonym is not a false identity. It is simply an identity that is separate from our civil identity, as defined by our identity card. In a sense, you have a sort of pseudonym in all traditional social relationships. When you meet someone in a bar and you develop a friendly or romantic relationship, you do not define yourself according to your civil identity, and you do not show your ID. Why should this be different for digital uses?

 

Aren’t there instances where it remains necessary to verify the identity of the individuals we are interacting with?

AK: Yes, of course. When you buy or sell a house you go through a notary, who is a trusted third party. But this is not the issue. The real problem is that we increasingly have a natural tendency to react with an attitude of distrust. Wondering about the identity of the person offering a ride on the Blablacar carpooling website illustrates this shift: no hitchhiker would ask the driver for his or her ID. What didn’t seem to pose a problem a few years ago has now become problematic. And today it has become almost unthinkable to say that transparency is not necessarily a sign of trust, yet this is precisely the kind of issue we should be discussing.

 

Why should this shift be challenged?

AK: Here we need the kind of analysis at the heart of philosopher Michel Foucault’s work: examining what seemed to go without saying at a given time in history, from underlying mechanisms to representations accepted as self-evident. He particularly examined the transition from one construction to another, the historical evolution. We are likewise in the midst of a new system, in which something like a “society” becomes attainable through social interactions. This shift in the themes of identity and trust bears witness to the changes taking place in society as a whole and in social connections. And this is not simply a matter of risk management, security, or economic efficiency.

 

Isn’t this identity-focused trust crisis contradictory in a context of personal data protection, which is increasingly necessary for new digital services?

AK: Yes, it is, and it’s a contradiction that illustrates the strains on the notion of identity. On the one hand, we are required to provide data to optimize services and demonstrate to other users that we are trustworthy. On the other hand, there is an urge to protect ourselves, and even to become withdrawn. These two movements are contradictory. This is the complexity of this issue: there is no one-way, once-and-for-all trend. We are torn between, on the one side, a requirement and desire to share our personal data—desire because Facebook users enjoy sharing data on their profiles—and, on the other side, a desire and requirement to protect it—requirement because we are also driven by institutional discourse. Of course, my position is not against this institutional discourse. GDPR comes to mind here, and it is most welcome, as it provides a certain level of protection for personal data. However, it is important to understand the broader social trends, among which the institutional discourse represents only one element. These tensions surrounding identity inevitably impact the way we represent trust.

 

How does this affect trust?

AK: The chair Values and policy of personal information that I am a part of led an extensive inquiry with Médiamétrie on these issues of trust and personal data. We separately assessed users’ desire to protect their data and their sense of powerlessness in doing so. The results show a sense of resignation among approximately 43% of those questioned. This part of the inquiry replicates a study carried out in 2015 in the United States by Joseph Turow and his team, which found a sense of resignation among 58% of respondents. This resignation means that individuals provide personal information not to gain an economic advantage, but because they feel it is unavoidable. These results inevitably raise the question of trust in this relationship. This attitude clearly contradicts the assumption made by some economists that the act of providing personal data is motivated solely by a cost-benefit calculation from which the individual stands to gain. This resignation reveals the tension that also surrounds the concept of trust. In a way, these users are experiencing neither trust nor confidence.

 


Three IMT projects receive Celtic-Plus Awards

Three projects involving IMT schools were featured among the winners at the 2017 Celtic-Plus Awards. The Celtic-Plus program is committed to promoting innovation and research in the areas of telecommunications and information technology. The program is overseen by the European initiative Eureka, which seeks to strengthen the competitiveness of industries as a whole.

 

[box type=”shadow” align=”” class=”” width=””]

SASER (Safe and Secure European Routing):
Celtic-Plus Innovation Award

The SASER research program brings together operators, equipment manufacturers and research institutes from France, Germany and Finland. The goal of this program is to develop new concepts for strengthening the security of data transport networks in Europe. To achieve this goal, the SASER project is working on new architectures, notably by designing networks that integrate, or are distributed through, the latest technological advances in cloud computing and virtualization. Télécom ParisTech, IMT Atlantique and Télécom SudParis are partners in this project led by the Nokia group.

[/box]

[box type=”shadow” align=”” class=”” width=””]

NOTTS (Next Generation Over-The-Top Multimedia Services):
Excellence Award for Services and Applications

NOTTS seeks to resolve the new problems created by over-the-top multimedia services. These services, such as Netflix and Spotify, are not controlled by the operators and put a strain on the internet network. The project proposes to study the technical problems facing the operators and to seek solutions for creating new business models acceptable to all parties involved. It brings together public and private partners from six countries: Spain, Portugal, Finland, Sweden, Poland and France, where Télécom SudParis is based.

[/box]

[box type=”shadow” align=”” class=”” width=””]

H2B2VS (HEVC Hybrid Broadcast Broadband Video Services):
Excellence Award for Multimedia

New video formats such as ultra-HD and 3D test the limits of broadcasting networks and high-speed networks. Both networks have limited bandwidths. The H2B2VS project aims to resolve this bandwidth problem by combining the two networks. The broadcasting network would transmit the main information, while the high-speed network would transmit additional information. H2B2VS includes industrialists and public research institutes in France, Spain, Turkey, Finland and Switzerland. Télécom ParisTech is part of this consortium.

[/box]

Two IMT projects also received awards at the 2016 Celtic-Plus Awards.

Fine particulate pollution: can we trust microsensor readings?

Nathalie Redon, IMT Lille Douai – Institut Mines-Télécom

Last May, Paris City Hall launched “Pollutrack”: a fleet of microsensors placed on the roofs of vehicles traveling throughout the capital to measure the amount of fine particles present in the air in real time. A year earlier, Rennes had invited its residents to help assess air quality using individual sensors.

For several years, high concentrations of fine particles have been regularly observed in France, and air pollution has become a major health concern. Each year, 48,000 premature deaths in France are linked to air pollution.

The winter of 2017 was a prime example of this phenomenon, with daily levels reaching up to 100 µg/m3 in certain areas and conditions stagnating for several days due to cold, anticyclonic weather patterns.

 

A police sketch of the fine particle

A fine particle (particulate matter, abbreviated PM) is characterized by three main factors: its size, nature and concentration.

Its size, or rather its diameter, is one of the factors that affects our health: PM10 have diameters ranging from 2.5 to 10 μm, while PM2.5 have diameters of less than 2.5 μm. By way of comparison, these particles are approximately 10 to 100 times finer than a human hair. And this is the problem: the smaller the particles we inhale, the more deeply they penetrate the lungs, causing inflammation of the lung alveoli as well as of the cardiovascular system.

The nature of these fine particles is also problematic. They are made up of a mixture of organic and mineral substances of varying degrees of danger: water and carbon form a base around which sulfates, nitrates, allergens, heavy metals and other hydrocarbons with proven carcinogenic properties condense.

As for their concentration, the greater it is in terms of mass, the greater the health risk. The World Health Organization recommends that personal exposure not exceed 25 μg/m3 for PM2.5 as a 24-hour average and 50 μg/m3 for PM10. In recent years, these thresholds have been constantly exceeded, especially in large cities.

 


The website for the BreatheLife campaign, created by WHO, where you can enter the name of a city and find out its air quality. Here, the example of Grenoble is given.

 

Humans are not the only ones affected by the danger of these fine particles: when they are deposited, they contribute to the enrichment of natural environments, which can lead to eutrophication, a phenomenon in which excess nutrients, such as the nitrogen carried by the particles, build up in soil or water. This can lead, for example, to algal blooms that suffocate local ecosystems. In addition, through the chemical reaction of nitrogen with the surrounding environment, eutrophication generally leads to soil acidification. More acidic soil becomes drastically less fertile: vegetation becomes depleted and, slowly but inexorably, species die off.

 

Where do they come from?

Fine particle emissions primarily originate from human activities: 60% of PM10 and 40% of PM2.5 are generated by wood combustion, especially fireplace and stove heating, while 20% to 30% originate from automotive fuel (diesel first and foremost). Finally, nearly 19% of national PM10 emissions and 10% of PM2.5 emissions result from agricultural activities.

To help public authorities limit and control these emissions, the scientific community must improve the identification and quantification of these sources of emissions, and must gain a better understanding of their spatial and temporal variability.

 

Complex and costly readings

Today, fine particle readings are primarily based on two techniques.

First, samples are collected on filters over an entire day and then analyzed in a laboratory. Aside from the fact that the data is delayed, the analytical equipment used is costly and complicated to use, and a certain level of expertise is required to interpret the results.

The other technique involves making measurements in real time, using tools like the Multi-wavelength Aethalometer AE33, a device that is relatively expensive, at over €30,000, but has the advantage of providing measurements every minute or even under a minute. It is also able to monitor black carbon (BC): it can identify the particles that originate specifically from combustion reactions. The aerosol chemical speciation monitor (ACSM) is also worth mentioning, as it makes it possible to identify the nature of the particles, and takes measurements every 30 minutes. However, its cost of €150,000 means that access to this type of tool is limited to laboratory experts.

Given their cost and level of sophistication, only a limited number of sites in France are equipped with these tools. Combined with simulations, the analysis of daily averages makes it possible to create maps with a 50 km by 50 km grid.

Since these means of measurement do not make it possible to establish a real-time map on finer spatio-temporal scales—on the order of a square kilometer and of minutes—scientists have recently begun looking to new tools: particle microsensors.

 

How do microsensors work?

Small, light, portable, inexpensive, easy to use, connected… microsensors appear to offer many advantages that complement the range of heavy analytical techniques mentioned above.

But how credible are these new devices? To answer this question, we need to look at their physical and metrological characteristics.

At present, several manufacturers are competing for the microsensor market: the British Alphasense, the Chinese Shinyei and the American Honeywell. They all use the same measurement method: optical detection using a laser diode.

The principle is simple: the air, drawn in by the fan, flows through the detection chamber, which is configured to remove the larger particles and retain only the fine ones. The particle-laden air then passes through the optical beam emitted by the laser diode, which is diffracted by a lens.

A photodetector placed opposite the emitted beam records the decreases in luminosity caused by passing particles and counts them by size range. The electrical signal from the photodiode is then transmitted to a microcontroller that processes the data in real time: if the air flow rate is known, the number concentration can be determined, and then the mass concentration by size range, as shown in the figure and the sketch below.

 

An example of a particle sensor (brand: Honeywell, HPM series)
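The count-to-mass conversion described above can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions (spherical particles, a typical density, invented bin midpoints, counts and flow rate); it is not the firmware of any particular sensor.

```python
import math

# Illustrative size bins (midpoint diameter in µm) and particle counts for one sample.
# The bin midpoints, counts, density and flow rate are assumptions for the example only.
bin_midpoints_um = [0.5, 1.0, 2.5, 5.0, 10.0]
counts_per_bin   = [12000, 6500, 800, 150, 40]   # particles counted during the sample
flow_rate_l_min  = 0.6                            # assumed fan/pump flow rate
sample_time_min  = 1.0
particle_density = 1.65e3                         # kg/m3, a common assumption for ambient PM

sampled_volume_m3 = flow_rate_l_min * sample_time_min / 1000.0  # litres -> m3

def mass_concentration(max_diameter_um):
    """Sum particle masses (sphere assumption) per m3 of sampled air, up to a size cut."""
    total_mass_kg = 0.0
    for d_um, n in zip(bin_midpoints_um, counts_per_bin):
        if d_um <= max_diameter_um:
            d_m = d_um * 1e-6
            volume = math.pi / 6.0 * d_m ** 3        # volume of one spherical particle
            total_mass_kg += n * volume * particle_density
    return total_mass_kg / sampled_volume_m3 * 1e9  # kg/m3 -> µg/m3

print(f"PM2.5 ~ {mass_concentration(2.5):.1f} µg/m3")
print(f"PM10  ~ {mass_concentration(10.0):.1f} µg/m3")
```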

 

From the most basic to the fully integrated version (including acquisition and data processing software, and measurement transmission via cloud computing), the price can range from €20 to €1,000 for the most elaborate systems. This is very affordable, compared to the techniques mentioned above.

 

Can we trust microsensors?

First, it should be noted that these microsensors do not provide any information on the fine particles’ chemical composition; only the techniques described above can do that. Yet it is precisely the particles’ nature that provides information about their source.

Furthermore, the system microsensors use to separate particles by size is often rudimentary: field tests have shown that while the finest particles (PM2.5) are monitored fairly well, it is often difficult to extract the PM10 fraction alone. Since the finest particles are precisely the ones that affect our health the most, however, this shortcoming is not a major problem.

In terms of detection and quantification limits, when the sensors are new, it is possible to reach reasonable thresholds of approximately 10 µg/m3. They also offer sensitivity of 2 to 3 µg/m3 (with an uncertainty of approximately 25%), which is more than sufficient for monitoring the dynamics of particle concentrations in the range of up to 200 µg/m3.

However, over time, the fluidics and optical detectors of these systems tend to become clogged, leading to erroneous results. Microsensors must therefore be calibrated regularly against reference data, such as the data released by air pollution control agencies.
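In practice, such a recalibration can be as simple as fitting a linear correction against co-located reference measurements. The sketch below uses hypothetical paired readings; a real deployment would use data from an accredited monitoring station.

```python
import numpy as np

# Hypothetical paired readings: microsensor co-located with a reference station (µg/m3).
sensor_pm25    = np.array([ 8.0, 14.5, 22.0, 31.0, 46.5, 58.0])
reference_pm25 = np.array([10.0, 17.0, 25.0, 37.0, 55.0, 70.0])

# Fit a simple linear correction (slope, intercept) by least squares.
slope, intercept = np.polyfit(sensor_pm25, reference_pm25, 1)

def calibrate(raw_value):
    """Apply the linear correction learned against the reference instrument."""
    return slope * raw_value + intercept

print(f"corrected 40 µg/m3 reading -> {calibrate(40.0):.1f} µg/m3")
```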

This type of tool is therefore ideally suited for an instantaneous and semi-quantitative diagnosis. The idea is not to provide an extremely precise measurement, but rather to report on the dynamic changes in particulate air pollution on a scale with low/medium/high levels. Due to the low cost of these tools, they can be distributed in large numbers in the field, and therefore help provide a better understanding of particulate matter emissions.

 

Nathalie Redon, Assistant Professor, Co-Director of the “Sensors” Laboratory, IMT Lille Douai – Institut Mines-Télécom

This article was originally published in French on The Conversation.

 


Air quality: several approaches to modeling the invisible

The theme day on air quality modeling, organized by FIMEA and IMT Lille Douai on June 8, provided an opportunity for researchers in the field to discuss existing methods. Modeling makes it possible to identify the links between pollution sources and receptors. These models help provide an understanding of atmospheric processes and support air pollution prevention.

 

What will the pollution be like tomorrow? Only one tool can provide an answer: modeling. But what is modeling? It all depends on the area of expertise. In the field of air quality, this method involves creating computer simulations to represent different scenarios. For example, it enables pollutant emissions to be simulated before building a new highway. Just as meteorological models predict rain, an air quality model predicts pollutant concentrations. Modeling also provides a better understanding of the physical and chemical reactions that take place in the atmosphere. “There are models that cover smaller and larger areas, which make it possible to study the air quality of a continent, a region, or even a single street,” explains Stéphane Sauvage, a researcher with the Atmospheric Sciences and Environmental Engineering Department (SAGE) at IMT Lille Douai. How are these models developed?

 

Models, going back to the source

The first approach involves identifying the sources that emit the pollutants via field observations, an area of expertise at IMT Lille Douai. Sensors located near the receptors (individuals, ecosystems) measure the compounds in the form of gases or particles (aerosols). The researchers refer to certain detected compounds as tracers, because they are representative of a known source of emissions. “Several VOCs (volatile organic compounds) are emitted by plants, whereas others are typical of road traffic. We can also identify an aerosol’s origin (natural, wood combustion…) by analyzing its chemical composition,” Stéphane Sauvage explains.

The researchers study the hourly, daily and seasonal variability of the tracers through statistical analysis. These variations are combined with models that trace the path air masses followed before reaching the observation site. “Through this temporal and spatial approach, we can identify the potential areas of origin. We observe ‘primary’ pollutants, which are directly emitted by the sources and measured at the receptors. But secondary pollutants also exist, the result of chemical reactions that take place in the atmosphere,” the researcher adds. To identify the sources of this second category of pollutants, researchers identify the reactions that could possibly take place between chemical compounds. This is a complex process, since the atmosphere is truly a reactor within which different species are constantly being transformed. The researchers therefore formulate hypotheses to help them trace the sources. Once these models are functional, they are used as decision-making tools.
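As a rough illustration of this source-oriented reasoning, tracer measurements at a receptor can be decomposed over known source profiles, for instance with a non-negative least-squares fit. The profiles and concentrations below are invented for the example, and this toy decomposition is far simpler than the statistical models actually used by the researchers.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical chemical profiles of two known sources (fraction of each tracer
# per unit of emitted mass). Rows: tracers measured at the receptor. Invented numbers.
source_profiles = np.array([
    #  traffic  wood burning
    [0.30,     0.05],   # tracer typical of road traffic
    [0.05,     0.40],   # tracer typical of wood combustion
    [0.10,     0.10],   # non-specific tracer
])

measured_at_receptor = np.array([0.95, 1.35, 0.55])  # observed tracer concentrations

# Non-negative least squares: how much mass from each source best explains the observations?
contributions, residual = nnls(source_profiles, measured_at_receptor)
for name, c in zip(["traffic", "wood burning"], contributions):
    print(f"{name}: estimated contribution ~ {c:.2f} (arbitrary mass units)")
```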

 

Models focused on receptors

A second approach, referred to as “deterministic” modeling, is focused on the receptors. Based on what they know about the sources (emission levels from industrial sites, road traffic, etc.), the researchers use models of air mass diffusion and movement to visualize the impact these emissions have on the receptor. To accomplish this, the models integrate meteorological data (wind, temperature, pressure…) and the equations of the chemical reactions taking place in the atmosphere. These complex tools require a comprehensive knowledge of atmospheric processes and high levels of computing power.
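By way of illustration only, the simplest member of this family of dispersion models is the classic Gaussian plume for a single point source. The chemistry-transport models used by researchers and agencies are far more elaborate, and the dispersion coefficients below are crude placeholder values, not a validated parameterization.

```python
import numpy as np

def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
    """
    Classic Gaussian plume estimate of concentration (g/m3) downwind of a point source.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
    The dispersion widths sigma_y = a*x and sigma_z = b*x are crude illustrative power
    laws; real models use stability-class parameterizations and full meteorology.
    """
    sigma_y, sigma_z = a * x, b * x
    lateral  = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))   # ground-reflection term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration 2 km downwind, on the plume centreline (hypothetical source).
print(gaussian_plume(x=2000.0, y=0.0, z=0.0, Q=50.0, u=4.0, H=30.0))
```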

These models are used for forecasting purposes. “Air pollution control agencies use them to inform the public of pollutant levels in a given area. If necessary, the prefecture can impose driving restrictions based on the forecasts these models provide,” explains Stéphane Sauvage. This modeling approach also makes it possible to simulate environmental impact assessments for industrial sites.

 

Complementary methods

Both methods have their limits and involve uncertainties. The models based on observations are not exhaustive. “We do not know how to observe all the species. In addition, this statistical approach requires a large number of observations before a reliable and robust model can be developed. The hypotheses used in this approach are simplistic compared to the receptor-focused models,” Stéphane Sauvage adds. The other type of model also relies on estimations: it uses data that can be uncertain, such as estimates of the sources’ emissions and the weather forecasts.

“We can combine these two methods to obtain tools that are more effective. The observation-based approaches make it possible to assess information about the sources, which is useful for the deterministic models. The deterministic models are validated by comparing the predictions with the observations. But we can also integrate the observed data into the models to correct them,” the researcher adds. This combination limits the uncertainties involved and supports the identification of links between the sources and receptors. The long-term objective is to propose decision-making tools for policies aimed at effectively reducing pollutants.

 


Patrick Waelbroeck

Télécom Paris | #blockchain #PersonalData #chairVPIP

[toggle title=”Find all his articles on I’MTech” state=”open”]

[/toggle]


Iris recognition: towards a biometric system for smartphones

Smartphones provide a wide range of opportunities for biometrics. Jean-Luc Dugelay and Chiara Galdi, researchers at Eurecom, are working on a simple, rapid iris recognition algorithm for mobile phones, which could be used as an authentication system for operations such as bank transactions.

 

Last name, first name, e-mail address, social media, photographs — your smartphone is a complete summary of your personal information. In the near future, this highly personal device could even take on the role of a digital passport. A number of biometric systems are being explored to secure access to these devices. Facial, fingerprint and iris recognition have the advantage of being recognized by the authorities, which makes them more popular options, including for research. Jean-Luc Dugelay is a researcher specialized in image processing at Eurecom. He is working with Chiara Galdi to develop an algorithm designed especially for iris recognition on smartphones. The initial results of the study were published in May 2017 in Pattern Recognition Letters. Their objective? Develop an instant, easy-to-use system for mobile devices.

 

The eye: three components for differentiating between individuals

Biometric iris recognition generally uses infrared light, which allows for greater visibility of the characteristics which differentiate one eye from another. “To create a system for the general public, we have to consider the type of technology people have. We have therefore adopted a technique using visible light so as to ensure compatibility with mobile phones,” explains Jean-Luc Dugelay.

 


Examples of color spots

 

The result is the FIRE (Fast Iris REcognition) algorithm, which is based on an evaluation of three parameters of the eye: color, texture and spots. In everyday life, eye color is approximated by generic shades like blue, green or brown; in FIRE, it is defined by a colorimetric composition diagram. Eye texture corresponds to the ridges and ligaments that form the patterns of the iris. Finally, spots are the small dots of color within the iris. Together, these three parameters make the eyes of one individual distinct from those of all others.

 

FIRE methodology and validation

When images of irises from databases were used to test the FIRE algorithm, variations in lighting conditions between photographs created difficulties. To remove variations in brightness, the researchers applied a technique to standardize the colors. “The best-known color space is red-green-blue (RGB), but other systems exist, such as LAB. This is a space where color is expressed according to the lightness ‘L’ and two chromatic components, A and B. We focus on these last two aspects rather than the overall definition of color, which allows us to exclude lighting conditions,” explains Jean-Luc Dugelay.
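A minimal sketch of this color-normalization idea: convert the image to CIELAB and keep only the two chromatic channels. The 2-D histogram used as a descriptor here is an illustrative choice, not necessarily the exact representation used in FIRE.

```python
import numpy as np
from skimage.color import rgb2lab

def chromatic_descriptor(rgb_image):
    """
    Convert an RGB eye image to CIELAB and keep only the chromatic channels (a, b),
    discarding lightness L so the descriptor is less sensitive to illumination.
    """
    lab = rgb2lab(rgb_image.astype(np.float64) / 255.0)   # rgb2lab expects floats in [0, 1]
    a, b = lab[..., 1].ravel(), lab[..., 2].ravel()
    hist, _, _ = np.histogram2d(a, b, bins=16, range=[[-128, 127], [-128, 127]], density=True)
    return hist

# Usage sketch with a random "image" standing in for a cropped iris region.
iris_crop = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(chromatic_descriptor(iris_crop).shape)   # (16, 16)
```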

An algorithm then carries out an in-depth analysis of each of the three parameters of the eye. In order to compare two irises, each parameter is studied twice: once on the eye being tested, and once on the reference eye. Distance calculations are then performed to represent the degree of similarity between the two irises. These three calculations result in scores, which are then merged by a single algorithm. However, the three parameters are not equally reliable in distinguishing between two irises: texture is a defining element, while color is less discriminating. This is why, when the scores are merged to produce the final result, each parameter is weighted according to how effective it is compared to the others.
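Score-level fusion of this kind can be sketched in a few lines. The weights below are illustrative placeholders, not the values tuned for FIRE; they simply encode the idea that texture counts more than color or spots, with smaller distances meaning more similar irises.

```python
def fused_distance(d_color, d_texture, d_spots,
                   w_color=0.2, w_texture=0.6, w_spots=0.2):
    """Combine the three per-feature distances, giving texture the largest weight."""
    return w_color * d_color + w_texture * d_texture + w_spots * d_spots

print(fused_distance(d_color=0.40, d_texture=0.15, d_spots=0.30))  # 0.23
```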

 

Authentication, identification, security and protocol

This algorithm can be used in two possible configurations, which determine its execution time. For authentication, it is used to compare the captured iris with that of the person who owns the phone. This procedure could be used to unlock a smartphone or confirm bank transactions; the algorithm gives a result in one second. When used for identification, however, the issue is not knowing whether the iris is your own, but rather to whom it corresponds. The algorithm could therefore be used for identity verification purposes. This is the basis for the H2020 PROTECT project in which Eurecom is taking part: individuals would no longer be required to get out of their vehicles when crossing a border, for example, since they could identify themselves at the checkpoint from their mobile phones.
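The difference between the two configurations boils down to a 1:1 comparison against a decision threshold versus a 1:N search for the closest match. A hedged sketch, with a hypothetical threshold and distances:

```python
# Sketch of the two configurations described above, using fused distances.
VERIFICATION_THRESHOLD = 0.25   # illustrative decision threshold, to be tuned on real data

def verify(probe_distance_to_owner):
    """Authentication (1:1): is this iris close enough to the enrolled owner's template?"""
    return probe_distance_to_owner <= VERIFICATION_THRESHOLD

def identify(probe_distances_to_gallery):
    """Identification (1:N): which enrolled identity is the closest match?"""
    best_id = min(probe_distances_to_gallery, key=probe_distances_to_gallery.get)
    return best_id, probe_distances_to_gallery[best_id]

print(verify(0.18))                                           # True: unlock the phone
print(identify({"alice": 0.42, "bob": 0.17, "carol": 0.55}))  # ('bob', 0.17)
```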

Although FIRE has successfully demonstrated that iris recognition can be adapted to visible light and mobile devices, protocol issues must still be studied before the system can be made available to the general public. “Even if the algorithm never made mistakes in visible light, it would still have to prove reliable in terms of performance and strong enough to withstand attacks. This use of biometrics also raises questions about privacy: what information is transmitted, to whom, who could store it, etc.,” adds Jean-Luc Dugelay.

Several avenues are possible for the future. First of all, the security of the system could be increased. “Each sensor associated with a camera has a specific noise which distinguishes it from all other sensors. It’s like digital ballistics. The system could then verify that it is indeed your iris and, in addition, verify that it is your mobile phone based on the noise in the image. This would make the protocol more difficult to pirate,” explains Jean-Luc Dugelay. Other solutions may emerge in the future, but in the meantime, the algorithmic portion developed by the Eurecom team is well on its way to becoming operational.