
Intelligence booklet: the new balances of trust, between algorithms and social contract

The 9th Fondation Mines-Télécom booklet addresses the major cross-cutting issue of trust in the digital age. The booklet was put together over the course of a series of events held in partnership with NUMA Paris, with support from the Foundation’s major corporate partners. It examines the changing concepts of trust, while highlighting existing research in this area from IMT schools.

 

Cybersecurity, blockchain, digital identities… In 27 pages, the booklet offers perspective on these topics that are so widely covered by the media, examining the foundation of our interactions and practices: trust, a concept that is currently undergoing tremendous changes.

The first section examines the concept of trust and provides a multidimensional definition. The booklet differentiates the term “confidence”, a form of trust related to the social context we live in, from “trust” that is decided at the individual level. This second form of trust is becoming predominant in the digital economy: trust tends to be reduced to a calculation of risk, to the detriment of our capacity to learn how to trust.

In the second section, the booklet examines the transformation of trust in the digital age. It presents the blockchain by introducing the related concepts of protocol, consensus, and proof. In addition to describing different types of uses in the areas of health, personal data, and privacy, it provides economic insight and raises the question: how can we make trust a new common good?

The booklet ends with a third section focused on the human factor, exploring issues of governance, trust transitivity, and networks of trust. New balances are being created, and the relationship between social consensus and algorithmic consensus is constantly evolving.

 

This booklet, written by Aymeric Poulain-Maubant, an independent expert, benefited from contributions from research professors at IMT schools: Claire Levallois-Barth (Télécom ParisTech), Patrick Waelbroeck (Télécom ParisTech), Maryline Laurent (Télécom SudParis), Armen Khatchatourov (Télécom École de Management) and Bruno Salgues (Mines Saint-Étienne). The Foundation’s corporate partners, specifically Accenture, Orange, La Poste Group, and La Caisse des Dépôts, also lent their expertise to this project.

[box type=”shadow” align=”” class=”” width=””]

Download the booklet here (in French)
The new balances of trust, between algorithms and social contract


[/box]

[box type=”shadow” align=”” class=”” width=””]

On the occasion of the release of the Fondation Mines-Télécom booklet, I’MTech is publishing a series devoted to the link between technology and trust.

[/box]


Trust, a tool for reducing technological risks?

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom booklet: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

A sociologist at IMT Atlantique, Sophie Bretesché specializes in the risks associated with technology. She has worked extensively on the issue of redeveloping former uranium mines in mainland France. She believes that trust between the stakeholders at each mine site is essential for better risk prevention. This emblematic case study comes at a time when technological development seems out of control.

“Private property.” Those who enjoy walking in the woods are very familiar with this sign. Hanging there on a metal fence, it dashes any hopes of continuing further in search of mushrooms. Faced with this adversity, the walker will turn around, perhaps imagining that a magnificent forgotten castle lies hidden behind the fence. However, the reality is sometimes quite different. Although privately owned forest land does exist, some forests are home to former industrial sites with a heavy past, under the responsibility of operating companies. In Western France, for example, “private property” signs have replaced some signs that once read “radioactive waste storage.”

Inconvenient witnesses to the uranium mines that flourished in the territory from the 1950s to the 1990s, these signs were removed by the private stakeholders in charge of operating and rehabilitating the sites. Forgetting is often the tool of choice for burying the debate on the future of this local heritage. Yet seeking to hide the traces of the uranium operations is not necessarily the best choice. "Any break in activities must be followed by historical continuity," warns Sophie Bretesché, a sociologist with IMT Atlantique and head of the Emerging Risks and Technology chair. In the case of the uranium mines, the stakeholders did not take the traces left behind by the operations into account: they simply wanted them to be forgotten, at a time when nuclear energy was controversial. Monitoring based on measuring instruments was implemented on the sites without any continuity with the land's history and past. According to the researcher, the problem with this type of risk management is that it can lead to "society mistrusting the decisions that are taken. Without taking the past and history into account, the risk measurements are meaningless and the future is uncertain."

This mistrust is only reinforced by the fact that these former mines are not without risks. On certain sites, water seepage has been observed in nearby farming fields. There have also been incidents of roads collapsing, destroying trucks. When such events take place, citizens voice their anger, begin to take action, and initiate counter-inquiries that inevitably bring to light the land's uranium-linked past. Development projects for the sites are launched without taking the presence of these former uranium mines into account. Locally, relations between public authorities, citizens and managing companies deteriorate in the absence of an agreement on the nature of the heritage and impacts left behind by uranium mining. The projects are challenged, and some disputes take several years to be resolved.

 

Trust for better risk prevention

There are, however, instances in which the mining heritage was taken into account from the start. Sophie Bretesché takes the example of a former site located 30 km from Nantes. When the question came up concerning "sterile" or waste rock—rocks that were extracted from the mines but contained amounts of uranium that were too low for processing—citizens from the surrounding areas were consulted. At their request, a map was created of the locations linked to the mining industry, explaining the history of the industry, and identifying the locations where waste rock is still stored. "New residents receive a brochure with this information," the sociologist explains. "Though they could have tried to sweep this past under the rug, they chose transparency, clearly informing newcomers of the locations linked to the region's mining history."

This example is emblematic of the role local culture can play in taking these risks into account. In the case of this mine, research was initiated to report on the site's past. This initiative, carried out with former miners, elected officials, and environmental organizations, made it possible to write the site's history based on the citizens' knowledge. In order to prevent "history from stuttering", the researcher traced the mining operations from the initial exploration phase to the present day. "It's another way of addressing the risks," Sophie Bretesché explains. "It allows citizens to get involved by drawing on their local knowledge of the sites, and raises the issue of heritage in general. It's a different way of conducting research, by developing participatory science programs."

From an institutional standpoint, trust is established when the various economic stakeholders involved in a mine site agree to work together. Sophie Bretesché cites the emblematic example of a former uranium mine bordering a quarry. The Local Information and Monitoring Commission (CLIS), chaired by the mayor of the municipality where the site was located, brought together the former site operator and the quarry operator. "The two industrialists take turns presenting the findings related to their activities. More than an institutional monitoring program, this initiative results in vigilance at the local level. This continuity with the industrial past, maintained by the presence of the quarry, is what enables this," she explains. "The debate becomes more positive and results in better regulation throughout the territory."

 

Unpredictable technology

The trust factor is all the more crucial given the unpredictable nature of the risks related to science and technology. "The mines are emblematic of the difference in time-scales between the use of technology and the management of its consequences," Sophie Bretesché observes. Over a 40-year period, uranium mines sprang up across mainland France and then shut down. Yet the consequences and risks from the creation of these mines will continue for hundreds of years. Transparency and the transmission of information, already important in ensuring unity among the population at a given time, will become even more important in ensuring the population's future resilience in light of risks spanning several generations.

In this regard, the mines are far from being an isolated example. The work conducted by Sophie Bretesché's chair at IMT Atlantique is full of examples. "Email is a form of technology that is integrated into companies, yet it took society 30 years to realize it could have negative impacts on organizations," she points out in passing. While uranium mines and email inboxes are very different things, the same approach applies to both when it comes to preventing risks more effectively. In both cases, "the communities that are directly exposed to the risks must be taken into account. Local culture is vital in reducing risks; it must not be left out."

 


Is blockchain the ultimate technology of trust?

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom booklet: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

Due to its decentralized nature, blockchain technology is fueling hopes of building a robust system of trust between economic stakeholders. But although it offers unique advantages, it is not perfect. As with any technology, the human factor must be considered. That factor alone warrants caution about the hype surrounding the blockchain, and vigilance regarding the trade-offs policymakers are currently weighing.

 

“The blockchain is a tool of distrust towards governments.” According to Patrick Waelbroeck, an economist with Télécom ParisTech, it is no coincidence that this technology was first introduced in 2008. In the context of the global financial crisis, the release of the first Bitcoin blockchain testified to citizens’ loss of trust in a state-run monetary management system. It must be said that the past decade has been particularly hard for centralized financial organizations, in which transactions are controlled by institutions. In Greece, residents had daily ATM withdrawal limits of a few dozen euros. In India, bartering is now preferred over an unstable currency.

In light of these risks, the blockchain solution is viewed as a more trustworthy alternative. Based on a decentralized framework, it theoretically offers greater security and transparency (read our article What is a Blockchain?). Furthermore, blockchains generating cryptocurrency offer parallel solutions for financial transactions. “Friedman and Hayek, both winners of the Nobel Prize in Economics, were supportive of alternative currencies such as this, because they offered the only means of preventing governments from financing their debts via inflation,” Patrick Waelbroeck points out. In the event of inflation, i.e. a drop in the value of the state currency, citizens would turn to alternative forms of currency.

In addition to this safeguard aspect, blockchain technology, by its very nature, prevents falsification and theft. It therefore naturally appears to be a solution for restoring trust among economic stakeholders and resolving one of the major problems currently facing the markets: information asymmetry. “We are living in the age of the ‘black box’ society, in which information is difficult to access,” Patrick Waelbroeck observes. “Visibility has been lost regarding market realities. We have an example of this in the digital economy: users do not know what is done with their personal data.” An inquiry led by the Values and Policy of Personal Information chair in cooperation with Médiamétrie shows an increase in distrust among internet users. “The risk is that consumers could withdraw from these markets, resulting in their disappearance,” the economist explains.

And, as the icing on the cake, the most commonly used public blockchains, such as Ethereum and Bitcoin, have what economists refer to as a positive network externality. This means that the more users a blockchain has, the more robust it is. Scaling up therefore provides greater security and increases users’ trust in the technology.

 

Humans behind the blockchain

The hopes placed in the blockchain are therefore great, and rightly so. Yet they must not cause us to forget that, as with any form of technology, it is developed and overseen by humans. This is a key point in understanding the technology’s limits, and a reason for vigilance that must be kept in mind. Private blockchains have the advantage of allowing several tens of thousands of transactions per second, because they are not as strict in verifying each transaction as the large public blockchains, like Bitcoin, are. But they are managed by consortiums that define the rules, often in small groups. “These groups can, overnight, decide to change the rules,” Patrick Waelbroeck warns. “Therefore, the question is whether people can make a credible commitment to the rules of governance.”

Public blockchains, too, are not without faults, inherent in the human character of their users. Ethereum owes its growing success to the “smart contracts” it allows to be anchored in its blockchain and certified by the network of users. Recently, an instance of smart contract abuse was reported, in which a hacker exploited errors in a contract to receive the equivalent of several tens of thousands of euros. The blockchain’s architecture is not at issue: a human error made when the contract was drafted is clearly what allowed this malicious use to occur.

Ethereum users see smart contracts as a way of standardizing the uses of this blockchain, for example by including ethical constraints. “The idea being that if a smart contract is used for illegal activity, the payment could be blocked,” Patrick Waelbroeck summarizes. But this prospect gives rise to many questions. First of all, how can ethics be translated into an algorithm? How should it be defined, and must it be “frozen” in code? Given that we do not all share the same vision of society, how can we reach a consensus? Even the method for defining ethical rules appears to spark debate. Not to mention that defining ethical rules implies making the technology accessible to some people and not others. From an ethical standpoint, is it possible to grant or deny access to this technology based on the values the user upholds?

 

Trust and punishment

The operation of large public blockchains, though it may look simple, is built on complex human factors. “Little by little, communities are realizing that despite decentralization, they need trusted human third parties, called ‘oracles,’” the researcher explains. Oracles represent an additional type of actor in the blockchain organization, which increases the structure’s complexity. Their role is not the only one that has been born out of necessity. The difficulty beginners face in accessing this technology has led to the creation of intermediaries. “Teller” roles have emerged, for example, to give novices and their assets easier access to the various ledgers.

The emergence of these new roles shows just how imponderable the human factor is in the development of blockchains, regardless of their size or how robust they are. Yet the more roles there are, the more fragile the trust in the system becomes. “For technology to be just and fair, everyone involved must play the part they are supposed to play,” Patrick Waelbroeck points out. “In a centralized system, the manager ensures that things are going well. But in a decentralized technological system, the technology itself must guarantee these aspects, and include a punitive aspect if the tasks are not carried out correctly.”

From a legal standpoint, private blockchains can be regulated, because the responsible parties are identifiable. A punitive system ensuring users are not taken advantage of is therefore possible. On the other hand, the problem is very different for public blockchains. “Due to their international nature, it is unclear who is responsible for these blockchains,” the economist points out. And without any established responsibility, what kind of assurances can users have regarding the strategic decisions made by the foundations in charge of the blockchains? Far from the myths and naive hopes, the issue of trust in the blockchain is far from resolved. One thing is certain: in the future, this question will give rise to many debates at various levels of government in an attempt to better supervise this technology.


Trust and identity in the digital world

This article is part of our series on trust, published on the occasion of the release of the Fondation Mines-Télécom booklet: “The new balances of trust: between algorithms and social contract.”

[divider style=”normal” top=”20″ bottom=”20″]

What does “trust” mean today? Technology is redefining the concept of trust by placing the issue of identity transparency in the forefront. But according to Armen Khatchatourov, a philosopher with Télécom École de Management and member of the IMT chair Values and policy of personal information, the question is much deeper, and this approach is problematic. It reveals how trust is reduced to mere risk management in our social interactions. This new perception is worth challenging, since it disregards the complexity of what trust truly is.

 

In our digital societies, the issue of trust seems closely linked to identity transparency. In order for us to trust something, must we know who is behind the screen?

Armen Khatchatourov: It seems to me that reducing trust to an issue of identity is very simplistic. This association is increasingly present in our society, and bears witness to a shift in meaning. Trust would mean being able to verify the identity of those we interact with on digital networks—or at least their legitimacy—through institutional validation. This could mean validation by a government system or via technology. Yet this offers a view of trust that is solely an assessment of the risk taken in the context of interaction, and suggests that increasing trust is, above all, a matter of reducing risks. Yet in my opinion, this approach is very simplistic in that trust is not such a uniform concept; it includes many other aspects.

 

What are the other aspects involved in trust? 

AK: Work by sociologist Niklas Luhmann shows that there is another form of trust that is rooted in the interactions with those who are close to us. This typically means trust in the social system as a whole, or the trust we build with our friends or family. A child trusts his or her parents for reasons that are unrelated to any risk calculation. Luhmann used two different words in English—“trust” and “confidence”—which are both translated by one word in French: “confiance”. According to Luhmann, this nuance represented a reality: “trust” can be used to describe the type of assurance that, in its most extreme form, can be described as pure risk management, whereas “confidence” is more related to social interactions with those close to us, a type of attachment to society. However, things do not seem as consistent when we consider that both terms can apply to the same relationship. I would tend to use “confidence” in describing the relationship with my friends. But if I decide to create a startup with them, what I experience would be more appropriately described as “trust”. The opposite can also be true, of course, when repeated interactions lead to an attachment to a social system.

 

Does the idea of “trust” take precedence over the concept of “confidence”?

AK: Unfortunately, the difference between these two terms related to trust tends to be lost in our society, and there is a shift towards one standardized concept. We increasingly define trust as recommendations that are combined to form a rating on an application or service, or as a certification label. Economic theory has framed this as the reduction of information asymmetry. Here we see the underlying conceptual framework and the primarily economic notion of risk it is associated with. Incidentally, this form of “trust” (as opposed to “confidence”) is based on opaque mechanisms. Today there is an algorithmic dimension to the recommendations we receive that we are not aware of. The mechanism for establishing this trust is therefore completely different from the way we learn to trust a friend.

 

So, is identity transparency a non-issue in the discussion on trust?

AK: Some people are reluctant to embrace pseudonymity. Yet a pseudonym is not a false identity. It is simply an identity that is separate from our civil identity, as defined by our identity card. In a sense, you have a sort of pseudonym in all traditional social relationships. When you meet someone in a bar and you develop a friendly or romantic relationship, you do not define yourself according to your civil identity, and you do not show your ID. Why should this be different for digital uses?

 

Aren’t there instances where it remains necessary to verify the identity of the individuals we are interacting with?

AK: Yes, of course. When you buy or sell a house you go through a notary, who is a trusted third party. But this is not the issue. The real problem is that we increasingly have a natural tendency to react with an attitude of distrust. Wondering about the identity of the person offering a ride on the Blablacar carpooling website illustrates this shift: no hitchhiker would ask the driver for his or her ID. What didn’t seem to pose a problem a few years ago has now become problematic. And today, saying that transparency is not necessarily a sign of trust is almost unheard of, yet this is precisely the kind of issue we should be discussing.

 

Why should this shift be challenged?

AK: Here we need to take the approach at the heart of philosopher Michel Foucault’s work: analyzing what seemed to go without saying at a given time in history, from underlying mechanisms to representations accepted as essential components. He particularly examined the transition from one construction to another, the historical evolution. We are likewise in the midst of a new system, in which something like a “society” is achieved through social interactions. This shift in the themes of identity and trust bears witness to the changes taking place in society as a whole, and the changes in social connections. And this is not simply a matter of risk management, security, or economic efficiency.

 

Isn’t this identity-focused trust crisis contradictory in a context of personal data protection, which is increasingly necessary for new digital services?

AK: Yes, it is, and it’s a contradiction that illustrates the strains on the notion of identity. On the one hand, we are required to provide data to optimize services and demonstrate to other users that we are trustworthy users. On the other hand, there is an urge to protect ourselves, and even to become withdrawn. These two movements are contradictory. This is the complexity of this issue: there is no one-way, once-and-for-all trend. We are torn between, on the one side, a requirement and desire to share our personal data—desire because Facebook users enjoy sharing data on their profiles—and, on the other side, a desire and requirement to protect it—requirement because we are also driven by institutional discourse. Of course, my position is not against this institutional discourse. GDPR comes to mind here, and it is, as we speak, most welcome, as it provides a certain level of protection for personal data. However, it is important to understand the broader social trends, among which the institutional discourse represents only one element. These tensions surrounding identity inevitably impact the way we represent trust.

 

How does this affect trust?

AK: The chair Values and policy of personal information, which I am a part of, led an extensive inquiry with Médiamétrie on these issues of trust and personal data. We separately assessed users’ desire to protect their data and their sense of powerlessness in doing so. The results show a sense of resignation among approximately 43% of those questioned. This part of the inquiry replicates a study carried out in 2015 in the United States by Joseph Turow and his team, which found a sense of resignation among 58% of respondents. This resignation leads individuals to provide personal information not to gain an economic advantage, but because they feel it is unavoidable. These results inevitably raise the question of trust in this relationship. This attitude clearly contradicts the assumption made by some economists that the act of providing personal data is solely motivated by a cost-benefit calculation from which the individual can gain. This resignation reveals the tension that also surrounds the concept of trust. In a way, these users are experiencing neither trust nor confidence.

 

Fine particulate pollution: can we trust microsensor readings?

Nathalie Redon, IMT Lille Douai – Institut Mines-Télécom

Last May, Paris City Hall launched “Pollutrack”: a fleet of microsensors placed on the roofs of vehicles traveling throughout the capital to measure the amount of fine particles present in the air in real time. A year earlier, Rennes proposed that residents participate in assessing air quality via individual sensors.

In France, high concentrations of fine particles have been observed regularly for several years, and air pollution has become a major health concern. Each year, 48,000 premature deaths in France are linked to air pollution.

The winter of 2017 was a prime example of this phenomenon, with daily levels reaching up to 100 µg/m³ in certain areas, and with conditions stagnating for several days due to the cold and anticyclonic weather patterns.

 

A police sketch of the fine particle

A fine particle (particulate matter, abbreviated PM) is characterized by three main factors: its size, nature and concentration.

Its size, or rather its diameter, is one of the factors that affects our health: PM10 have a diameter ranging from 2.5 to 10 μm; PM2.5, a diameter of less than 2.5 μm. By way of comparison, these particles are approximately 10 to 100 times finer than a hair. And this is the problem: the smaller the particles we inhale, the more deeply they penetrate the lungs, leading to inflammation of the pulmonary alveoli as well as of the cardiovascular system.

The nature of these fine particles is also problematic. They are made up of a mixture of organic and mineral substances with varying degrees of danger: water and carbon form the base around which sulfates, nitrates, allergens, heavy metals and other hydrocarbons with proven carcinogenic properties condense.

As for their concentration, the greater it is in terms of mass, the greater the health risk. The World Health Organization recommends that personal exposure not exceed 25 µg/m³ for PM2.5 as a 24-hour average, and 50 µg/m³ for PM10. In recent years, these thresholds have been exceeded constantly, especially in large cities.

 


The website for the BreatheLife campaign, created by WHO, where you can enter the name of a city and find out its air quality. Here, the example of Grenoble is given.

 

Humans are not the only ones affected by the danger of these fine particles: when they are deposited, they contribute to the enrichment of natural environments, which can lead to eutrophication, a phenomenon in which excess nutrients, such as the nitrogen carried by the particles, are deposited in the soil or water. This leads, for example, to algal blooms that can suffocate local ecosystems. In addition, due to the chemical reaction of the nitrogen with the surrounding environment, eutrophication generally leads to soil acidification. More acidic soil becomes drastically less fertile: vegetation becomes depleted and, slowly but inexorably, species die off.

 

Where do they come from?

Fine particle emissions primarily originate from human activities: 60% of PM10 and 40% of PM2.5 are generated by wood combustion, especially fireplace and stove heating, while 20% to 30% originate from automotive fuel (with diesel the leading contributor). Finally, nearly 19% of national PM10 emissions and 10% of PM2.5 emissions result from agricultural activities.

To help public authorities limit and control these emissions, the scientific community must improve the identification and quantification of these sources of emissions, and must gain a better understanding of their spatial and temporal variability.

 

Complex and costly readings

Today, fine particle readings are primarily based on two techniques.

First, samples are collected on filters over an entire day and then analyzed in a laboratory. Aside from the fact that the data is delayed, the analytical equipment used is costly and complicated to use; a certain level of expertise is required to interpret the results.

The other technique involves making measurements in real time, using tools like the Multi-wavelength Aethalometer AE33, a device that is relatively expensive, at over €30,000, but has the advantage of providing measurements every minute or even under a minute. It is also able to monitor black carbon (BC): it can identify the particles that originate specifically from combustion reactions. The aerosol chemical speciation monitor (ACSM) is also worth mentioning, as it makes it possible to identify the nature of the particles, and takes measurements every 30 minutes. However, its cost of €150,000 means that access to this type of tool is limited to laboratory experts.

Given their cost and level of sophistication, only a limited number of sites in France are equipped with these tools. Combined with simulations, the analysis of daily averages makes it possible to create maps with a 50 km by 50 km grid.

Since these means of measurement do not make it possible to establish a real-time map on finer spatio-temporal scales—on the order of a square kilometer and of minutes—scientists have recently begun looking to new tools: particle microsensors.

 

How do microsensors work?

Small, light, portable, inexpensive, easy to use, connected… microsensors appear to offer many advantages that complement the range of heavy analytical techniques mentioned above.

But how credible are these new devices? To answer this question, we need to look at their physical and metrological characteristics.

At present, several manufacturers are competing for the microsensor market: the British firm Alphasense, the Chinese firm Shinyei and the American manufacturer Honeywell. They all use the same measurement method: optical detection using a laser diode.

The principle is simple: the air, drawn in by a fan, flows through the detection chamber, which is configured to remove the larger particles and retain only the fine ones. The particle-laden air then passes through the optical signal emitted by the laser diode, whose beam is diffracted by a lens.

A photodetector placed opposite the emitted beam records the decreases in luminosity caused by the passing particles and counts them by size range. The electrical signal from the photodiode is then transmitted to a microcontroller that processes the data in real time: if the air flow rate is known, the number concentration can be determined, and then the mass concentration, based on the size ranges, as seen in the figure below; a rough sketch of this conversion is given after the figure.

 

An example of a particle sensor (brand: Honeywell, HPM series)
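To make the count-to-mass step more concrete, here is a minimal sketch of the conversion, assuming spherical particles, a uniform particle density, a fixed sample flow rate and hand-picked size bins; none of these values come from a specific sensor.

```python
import math

# Hypothetical per-bin particle counts over one sampling interval (illustrative).
bin_edges_um = [0.3, 0.5, 1.0, 2.5]   # bin boundaries (diameter, µm)
counts = [1200, 450, 80]              # particles counted in each bin

flow_rate_l_per_min = 1.0             # assumed sample flow rate
sample_time_min = 1.0                 # counting interval
particle_density_kg_m3 = 1650.0       # assumed mean particle density

sampled_volume_m3 = flow_rate_l_per_min * sample_time_min * 1e-3  # litres -> m³

mass_ug = 0.0
for (d_low, d_high), n in zip(zip(bin_edges_um, bin_edges_um[1:]), counts):
    d_mean_m = 0.5 * (d_low + d_high) * 1e-6            # representative diameter (m)
    particle_volume_m3 = math.pi / 6.0 * d_mean_m ** 3  # volume of one sphere
    mass_ug += n * particle_volume_m3 * particle_density_kg_m3 * 1e9  # kg -> µg

print(f"Estimated PM2.5 mass concentration: {mass_ug / sampled_volume_m3:.2f} µg/m³")
```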

 

From the most basic to the fully integrated version (including acquisition and data processing software, and measurement transmission via cloud computing), the price can range from €20 to €1,000 for the most elaborate systems. This is very affordable, compared to the techniques mentioned above.

 

Can we trust microsensors?

First, it should be noted that these microsensors do not provide any information on the fine particles’ chemical composition. Only the techniques described above can do that. Yet it is knowledge of the particles’ nature that provides information about their source.

Furthermore, the microsensor system used to separate particles by size is often rudimentary; field tests have shown that while the finest particles (PM2.5) are monitored fairly well, it is often difficult to extract the PM10 fraction alone. However, the finest particles are precisely what affect our health the most, so this shortcoming is not problematic.

In terms of detection and quantification limits, when the sensors are new it is possible to reach reasonable thresholds of approximately 10 µg/m³. They also have sensitivity levels of between 2 and 3 µg/m³ (with an uncertainty of approximately 25%), which is more than sufficient for monitoring the dynamics of particle concentrations in a range of up to 200 µg/m³.

However, over time, the fluidics and optical detectors of these systems tend to become clogged, leading to errors in the results. Microsensors must therefore be recalibrated regularly against reference data, such as the data released by air pollution control agencies.
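As an illustration of that calibration step, the sketch below fits a simple linear correction of raw microsensor readings against co-located reference measurements; the values and the choice of a purely linear model are assumptions made for the example, not a prescribed procedure.

```python
import numpy as np

# Hypothetical co-located readings (µg/m³): a drifting microsensor vs. a
# reference monitor operated by an air quality agency. Values are illustrative.
sensor_pm25 = np.array([12.0, 18.5, 25.0, 40.2, 55.1, 80.3])
reference_pm25 = np.array([10.0, 15.0, 21.0, 35.0, 48.0, 70.0])

# Least-squares fit of reference = a * sensor + b
a, b = np.polyfit(sensor_pm25, reference_pm25, 1)

def calibrate(raw_value: float) -> float:
    """Apply the linear correction derived from the co-location period."""
    return a * raw_value + b

print(f"Correction: ref ~ {a:.2f} * sensor + {b:.2f}")
print(f"Raw 60.0 µg/m³ -> corrected {calibrate(60.0):.1f} µg/m³")
```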

This type of tool is therefore ideally suited for an instantaneous and semi-quantitative diagnosis. The idea is not to provide an extremely precise measurement, but rather to report on the dynamic changes in particulate air pollution on a scale with low/medium/high levels. Due to the low cost of these tools, they can be distributed in large numbers in the field, and therefore help provide a better understanding of particulate matter emissions.

 

Nathalie Redon, Assistant Professor, Co-Director of the “Sensors” Laboratory, IMT Lille Douai – Institut Mines-Télécom

This article was originally published in French on The Conversation.

 


Air quality: several approaches to modeling the invisible

The theme day on air quality modeling (organized by FIMEA and IMT Lille Douai) on June 8 provided an opportunity for researchers in this field to discuss existing methods. Modeling makes it possible to identify the link between pollution sources and receptors. These models help provide an understanding of atmospheric processes and support air pollution prevention.

 

What will the pollution be like tomorrow? Only one tool can provide an answer: modeling. But what is modeling? It all depends on the area of expertise. In the field of air quality, this method involves creating computer simulations to represent different scenarios. For example, it enables pollutant emissions to be simulated before building a new highway. Just as meteorological models predict rain, an air quality model predicts pollutant concentrations. Modeling also provides a better understanding of the physical and chemical reactions that take place in the atmosphere. “There are models that cover smaller and larger areas, which make it possible to study the air quality for a continent, a region, or even a single street,” explains Stéphane Sauvage, a researcher with the Atmospheric Sciences and Environmental Engineering Department (SAGE) at IMT Lille Douai. How are these models developed?

 

Models, going back to the source

The first approach involves identifying the sources that emit the pollutants via field observations, an area of expertise at IMT Lille Douai. Sensors located near the receptors (individuals, ecosystems) measure the compounds in the form of gases or particles (aerosols). The researchers refer to certain detected compounds as tracers, because they are representative of a known source of emissions. “Several VOCs (volatile organic compounds) are emitted by plants, whereas other kinds are typical of road traffic. We can also identify an aerosol’s origin (natural, wood combustion…) by analyzing its chemical composition,” Stéphane Sauvage explains.

The researchers study the hourly, daily, and seasonal variability of the tracers through statistical analysis. These variations are combined with models that trace the path air masses followed before reaching the observation site. “Through this temporal and spatial approach, we can reconstruct the potential areas of origin. We observe ‘primary’ pollutants, which are directly emitted by the sources and measured at the receptors. But secondary pollutants also exist, which are the result of chemical reactions that take place in the atmosphere,” the researcher adds. To identify the sources of this second category of pollutants, researchers identify the reactions that could possibly take place between chemical compounds. This is a complex process, since the atmosphere is truly a reactor, within which different species are constantly being transformed. The researchers therefore come up with hypotheses to enable them to find the sources. Once these models are functional, they are used as decision-making tools.
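In practice, this kind of observation-based source apportionment is often carried out with factor-analysis methods such as positive matrix factorization (PMF). The sketch below uses scikit-learn’s plain non-negative matrix factorization as a simplified stand-in, applied to a synthetic matrix of hourly tracer concentrations; the data, the number of sources and the use of NMF rather than PMF are all assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic data set: 200 hourly samples x 6 measured species (e.g. VOC tracers),
# generated from 3 hidden "sources" with fixed chemical profiles.
true_profiles = rng.random((3, 6))                 # source chemical fingerprints
true_contributions = rng.random((200, 3)) * 10.0   # hourly source strengths
X = true_contributions @ true_profiles + 0.05 * rng.random((200, 6))

# Factor the observations into non-negative contributions and profiles,
# a simplified stand-in for the PMF models used in source apportionment.
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
contributions = model.fit_transform(X)   # (200, 3) time series, one per source
profiles = model.components_             # (3, 6) species profile per source

print("Recovered source profiles (rows = sources, columns = species):")
print(np.round(profiles, 2))
```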

 

Models focused on receptors

A second approach, referred to as “deterministic” modeling, focuses on the receptors. Based on what is known about the sources (emissions from industrial discharges and road traffic…), the researchers use air mass diffusion and movement models to visualize the impact these emissions have on the receptors. To accomplish this, the models integrate meteorological data (wind, temperature, pressure…) and the equations of the chemical reactions taking place in the atmosphere. These complex tools require comprehensive knowledge of atmospheric processes and high levels of computing power.
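At the very simplest end of this deterministic family sits the steady-state Gaussian plume model. The sketch below computes the ground-level concentration downwind of a single stack; the emission rate, wind speed, stack height and dispersion coefficients are illustrative assumptions, and operational chemistry-transport models are of course far richer than this.

```python
import math

def ground_level_concentration(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level concentration (g/m³) of a steady-state Gaussian plume.

    Q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
    H: effective stack height (m). sigma_y and sigma_z are dispersion
    coefficients, which in practice grow with downwind distance and depend
    on atmospheric stability.
    """
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    # The factor 2 comes from the usual "reflection" of the plume on the ground.
    vertical = 2 * math.exp(-H ** 2 / (2 * sigma_z ** 2))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative values: 5 g/s source, 3 m/s wind, receptor on the plume axis
# roughly 500 m downwind, 40 m stack, hand-picked dispersion coefficients.
c = ground_level_concentration(Q=5.0, u=3.0, y=0.0, H=40.0, sigma_y=36.0, sigma_z=19.0)
print(f"Ground-level concentration: {c * 1e6:.0f} µg/m³")
```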

These models are used for forecasting purposes. “Air pollution control agencies use them to inform the public of the levels of pollutants in a given area. If necessary, the prefecture can impose driving restrictions based on the forecasts these models provide,” explains Stéphane Sauvage. This modeling approach also makes it possible to simulate environmental impact assessments for industrial sites.

 

Complementary methods

Both methods have their limits and involve uncertainties. The models based on observations are not comprehensive. “We do not know how to observe all the species. In addition, this statistical approach requires a large number of observations before a reliable and robust model can be developed. The hypotheses used in this approach are simplistic compared to the receptor-focused models,” Stéphane Sauvage adds. The other type of model also relies on estimations. It uses data that can be uncertain, such as estimates of the sources’ emissions and the weather forecasts.

“We can combine these two methods to obtain tools that are more effective. The observation-based approaches make it possible to provide information about the sources, which is useful for the deterministic models. The deterministic models are validated by comparing the predictions with the observations. But we can also integrate the observed data into the models to correct them,” the researcher adds. This combination limits the uncertainties involved and supports the identification of links between the sources and receptors. The long-term objective is to propose decision-making tools for policies aimed at effectively reducing pollutants.

 


Iris recognition: towards a biometric system for smartphones

Smartphones provide a wide range of opportunities for biometrics. Jean-Luc Dugelay and Chiara Galdi, researchers at Eurecom, are working on a simple, rapid iris recognition algorithm for mobile phones, which could be used as an authentication system for operations such as bank transactions.

 

Last name, first name, e-mail address, social media, photographs — your smartphone is a complete summary of your personal information. In the near future, this extremely personal device could even take on the role of digital passport. A number of biometric systems are being explored in order to secure access to these devices. Facial, fingerprint, and iris recognition have the advantage of being recognized by the authorities, making them more popular options, including for research. Jean-Luc Dugelay is a researcher specialized in image processing at Eurecom. He is working with Chiara Galdi to develop an algorithm designed especially for iris recognition on smartphones. The initial results of the study were published in May 2017 in Pattern Recognition Letters. Their objective? To develop an instant, easy-to-use system for mobile devices.

 

The eye: three components for differentiating between individuals

Biometric iris recognition generally uses infrared light, which allows for greater visibility of the characteristics which differentiate one eye from another. “To create a system for the general public, we have to consider the type of technology people have. We have therefore adopted a technique using visible light so as to ensure compatibility with mobile phones,” explains Jean-Luc Dugelay.

 


Examples of color spots

 

The result is the FIRE (Fast Iris REcognition) algorithm, which is based on an evaluation of three parameters of the eye: color, texture, and spots. In everyday life, eye color is approximated by generic shades like blue, green or brown. In FIRE, it is defined by a colorimetric composition diagram. Eye texture corresponds to the ridges and ligaments that form the patterns of the iris. Finally, spots are the small dots of color within the iris. Together, these three parameters are what make the eyes of one individual unique from all others.

 

FIRE methodology and validation

When images of irises from databases were used to test the FIRE algorithm, variations in lighting conditions between photographs created difficulties. To remove variations in brightness, the researchers applied a technique to standardize the colors. “The best-known color space is red-green-blue (RGB), but other systems exist, such as LAB. This is a space in which color is expressed according to the lightness ‘L’ and two chromatic components, ‘A’ and ‘B’. We focus on these last two components rather than the overall definition of color, which allows us to exclude lighting conditions,” explains Jean-Luc Dugelay.
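A rough illustration of this normalization step, assuming scikit-image and a synthetic image in place of a real iris crop (this is not the FIRE implementation itself):

```python
import numpy as np
from skimage import color

# Synthetic stand-in for an iris crop: random RGB values in [0, 1].
rgb_crop = np.random.default_rng(1).random((64, 64, 3))

lab = color.rgb2lab(rgb_crop)           # shape (64, 64, 3): L, A, B channels
a_channel, b_channel = lab[..., 1], lab[..., 2]

# Discarding L removes most of the dependence on scene brightness; a simple
# chromatic descriptor is then the 2D histogram of (A, B) values.
hist, _, _ = np.histogram2d(a_channel.ravel(), b_channel.ravel(),
                            bins=16, range=[[-128, 127], [-128, 127]])
chromatic_descriptor = hist / hist.sum()
print(chromatic_descriptor.shape)       # (16, 16) color signature
```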

An algorithm then carries out an in-depth analysis of each of the three parameters of the eye. In order to compare two irises, each parameter is studied twice: once on the eye being tested, and once on the reference eye. Distance calculations are then established to represent the degree of similarity between the two irises. These three calculations result in scores which are then merged together by a single algorithm. However, the three parameters do not have the same degree of reliability in terms of distinguishing between two irises. Texture is a defining element, while color is a less discriminating characteristic. This is why, in merging the scores to produce the final calculation, each parameter is weighted according to how effective it is in comparison to the others.
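The weighted fusion of the three scores can be sketched as follows; the weights, the conversion of distances into similarities and the decision threshold are assumptions chosen for the example, not the coefficients of the published FIRE algorithm.

```python
def fuse_scores(color_dist, texture_dist, spot_dist, weights=(0.2, 0.5, 0.3)):
    """Fuse three distance measures into a single similarity score in [0, 1].

    Smaller distances mean more similar irises; each distance is assumed to be
    pre-normalized to [0, 1]. Texture receives the largest weight because it is
    the most discriminating of the three parameters.
    """
    similarities = [1.0 - d for d in (color_dist, texture_dist, spot_dist)]
    w_color, w_texture, w_spot = weights
    return (w_color * similarities[0]
            + w_texture * similarities[1]
            + w_spot * similarities[2])

# Verification (1:1) decision: compare the fused score against a threshold.
score = fuse_scores(color_dist=0.15, texture_dist=0.10, spot_dist=0.30)
print(f"Fused similarity: {score:.2f} -> {'accept' if score > 0.7 else 'reject'}")
```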

 

Authentication, identification, security and protocol

This algorithm can be used in two possible configurations, which determine its execution time. In the case of authentication, it is used to compare the characteristics of the iris with those of the person who owns the phone. This procedure could be used to unlock a smartphone or confirm bank transactions. The algorithm gives a result in one second. When used for identification purposes, however, the issue is not knowing whether the iris is your own, but rather to whom it corresponds. The algorithm could therefore be used for identity verification purposes. This is the basis for the H2020 PROTECT project in which Eurecom is taking part. Individuals would no longer be required to get out of their vehicles when crossing a border, for example, since they could identify themselves at the checkpoint from their mobile phones.

Although FIRE has successfully demonstrated that the iris recognition technique can be adapted for visible light and mobile devices, protocol issues must still be studied before making this system available to the general public. “Even if the algorithm never made mistakes in visible light, it would also have to be proven to be reliable in terms of performance and strong enough to withstand attacks. This use of biometrics also raises questions about privacy: what information is transmitted, to whom, who could store it etc.,” adds Jean-Luc Dugelay.

Several prospects are possible for the future. First of all, the security of the system could be increased. “Each sensor associated with a camera has a specific noise which distinguishes it from all other sensors. It’s like digital ballistics. The system could then verify that it is indeed your iris and, in addition, verify that it is your mobile phone based on the noise in the image. This would make the protocol more difficult to hack,” explains Jean-Luc Dugelay. Other solutions may emerge in the future, but in the meantime, the algorithmic portion developed by the Eurecom team is well on its way to becoming operational.
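The “digital ballistics” idea corresponds to sensor-pattern-noise techniques. A very rough sketch, assuming a Gaussian denoising filter and a plain normalized correlation (not Eurecom’s method), is given below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image):
    """Estimate the sensor noise residual as the image minus a denoised version."""
    return image - gaussian_filter(image, sigma=2.0)

def correlation(a, b):
    """Normalized correlation between two residuals of the same shape."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# A camera "fingerprint" is typically the average residual of many images taken
# with the enrolled phone; here a single synthetic image stands in for each side.
rng = np.random.default_rng(2)
fingerprint = noise_residual(rng.random((128, 128)))
query_residual = noise_residual(rng.random((128, 128)))

print(f"Correlation with enrolled fingerprint: {correlation(fingerprint, query_residual):.3f}")
```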

 


Improving heating network performance through modeling

At the IMT “Energy in the Digital Revolution” conference held on April 28, Bruno Lacarrière, an energetics researcher with IMT Atlantique, presented modeling approaches for improving the management of heating networks. Combined with digital technology, these approaches support heating distribution networks in the transition towards smart management solutions.

The building sector accounts for 40% of European energy consumption. As a result, renovating this energy-intensive sector is an important measure in the law on the energy transition for green growth. This law aims to improve energy efficiency and reduce greenhouse gas emissions. In this context, heating networks currently account for approximately 10% of the heat distributed in Europe. These systems deliver heating and domestic hot water from all energy sources, although today the majority are fossil-fueled. “The heating networks use old technology that, for the most part, is not managed in an optimal manner. Just like smart grids for electrical networks, they must benefit from new technology to ensure better environmental and energy management,” explains Bruno Lacarrière, a researcher with IMT Atlantique.

 

Understanding the structure of an urban heating network

A heating network is made up of a set of pipes that run through the city, connecting energy sources to buildings. Its purpose is to transport heat over long distances while limiting losses. In a given network, there may be several sources (or production units). The heat may come from a waste incineration plant, a gas or biomass heating plant, surplus industrial waste heat, or a geothermal plant. These units are connected by pipelines carrying heat in the form of liquid water (or occasionally steam) to substations. These substations then redistribute the heat to the different buildings.

These seemingly simple networks are in fact becoming increasingly complex. As cities are transformed, new energy sources and new consumers are added. “We now have configurations that are more or less centralized, and at times intermittent. What is the best way to manage the overall system? The complexity of these networks is similar to the configuration of electrical networks, on a smaller scale. This is why we are looking at whether a ‘smart’ approach could be used for heating networks,” explains Bruno Lacarrière.

 

Modeling and simulation for the a priori and a posteriori assessment of heating networks

To deal with this issue, researchers are working on a modeling approach for heating networks and for demand. In order to develop the most reliable model possible, a minimum amount of information is required. For demand, the consumption history of the buildings can be used to anticipate future needs. However, this data is not always available. The researchers can also develop physical models based on a minimal knowledge of the buildings’ characteristics. Yet some information remains unavailable. “We do not have access to all the buildings’ technical information. We also lack information on how the inhabitants occupy the buildings,” Bruno Lacarrière points out. “We rely on simplified approaches, based on hypotheses or external references.”
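One classical example of such a simplified approach, when consumption histories are missing, is a degree-hour model driven by a lumped heat-loss coefficient; the coefficient, set point and outdoor temperatures below are assumptions chosen for illustration only.

```python
def hourly_heat_demand_kw(heat_loss_coeff_kw_per_k, t_indoor_c, t_outdoor_c):
    """Space-heating demand (kW) from a lumped heat-loss coefficient.

    heat_loss_coeff_kw_per_k lumps envelope and ventilation losses (kW/K);
    demand is zero when the outdoor temperature exceeds the set point.
    """
    return max(0.0, heat_loss_coeff_kw_per_k * (t_indoor_c - t_outdoor_c))

# Illustrative building: 1.8 kW/K overall loss coefficient, 19 °C set point,
# one day of hourly outdoor temperatures (°C).
outdoor = [2, 1, 1, 0, 0, 1, 3, 5, 7, 9, 10, 11, 12, 12, 11, 10, 8, 6, 5, 4, 3, 3, 2, 2]
demand = [hourly_heat_demand_kw(1.8, 19.0, t) for t in outdoor]
print(f"Daily heating need: {sum(demand):.0f} kWh, peak load: {max(demand):.1f} kW")
```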

For the heating networks, researchers assess the best use of the heat sources (fossil, renewable, intermittent, storage, excess heat…). This is done based on a priori knowledge of the production units the networks are connected to. The network as a whole must provide the energy service of heat distribution, and it must do so in a cost-effective and environmentally friendly manner. “Our models allow us to simulate the entire system, taking into account the constraints and characteristics of the sources and the distribution. The demand then becomes a constraint.”
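The "best use of the heat sources" under cost and capacity constraints can be illustrated by a minimal merit-order dispatch, in which the cheapest available units are loaded first; the unit names, costs and capacities are invented for the example, and real network optimization also handles storage, losses and intermittency.

```python
def merit_order_dispatch(demand_mw, units):
    """Load units from cheapest to most expensive until demand is met.

    `units` is a list of (name, capacity_mw, cost_eur_per_mwh) tuples.
    Returns the dispatch per unit; raises if the demand cannot be covered.
    """
    dispatch = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(units, key=lambda u: u[2]):
        take = min(capacity, remaining)
        dispatch[name] = take
        remaining -= take
        if remaining <= 0:
            return dispatch
    raise ValueError(f"Demand exceeds total capacity by {remaining:.1f} MW")

# Illustrative heat sources of a small network (capacity in MW, cost in €/MWh).
units = [("waste incinerator", 8.0, 5.0),
         ("biomass plant", 6.0, 25.0),
         ("gas boiler", 12.0, 60.0)]
print(merit_order_dispatch(demand_mw=11.5, units=units))
```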

The models that are developed are then used in various applications. The demand simulation is used to measure the direct and indirect impacts climate change will have on a neighborhood. It makes it possible to assess the heat networks in a mild climate scenario and with high performance buildings. The heat network models are used to improve the management and operation strategies for the existing networks. Together, both types of models help determine the potential effectiveness of deploying information and communication technology for smart heating networks.

 

The move towards smart heating networks

Heating networks are entering their fourth generation. This means that they are operating at lower temperatures. “We are therefore looking at the idea of networks with different temperature levels, while examining how this would affect the overall operation,” the researcher adds.

In addition to the modelling approach, the use of information and communication technology allows for an increase in the networks’ efficiency, as was the case for electricity (smart monitoring, smart control). “We are assessing this potential based on the technology’s capacity to better meet demand at the right cost,” Bruno Lacarrière explains.

Deploying this technology in the substations, and the information provided by the simulation tools, go hand in hand with the prospect of deploying more decentralized production or storage units, turning consumers into consumer-producers [1], and even connecting to networks of other energy forms (e.g. electricity networks), thus reinforcing the concept of smart networks and the need for related research.

 

[1] The consumers become energy consumers and/or producers in an intermittent manner. This is due to the deployment of decentralized production systems (e.g. solar panels).

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

Also read on I’MTech

How biomechanics can impact medicine – Interview with Jay Humphrey

[dropcap]I[/dropcap]t is a love for mechanics and mathematics, as well as an intense interest in biology and health, that led Jay Humphrey towards the field of biomechanics. Right after completing his PhD in Engineering Science and Mechanics at the Georgia Institute of Technology in 1985, he pursued post-doctoral training in cardiovascular research at The Johns Hopkins University School of Medicine. Now a researcher at Yale University, his pioneering research in the field helps predict and understand aortic aneurysms and dissections. On 29 June this year, IMT awarded him the title of doctor honoris causa for this groundbreaking work and the projects he has undertaken with Mines Saint-Étienne. On this occasion, I’MTech asked him a few questions about his vision of biomechanics, his perception of his work, and how he thinks his community impacts medicine.

 

How would you define the field of biomechanics?

Jay Humphrey: A general definition of biomechanics is the development, extension, and application of mechanics to study living things and the materials or structures with which they interact. For the general public, however, it is also good to point out that biomechanics is important from the level of the whole body down to organs, tissues, cells, and even proteins! Mechanics helps us to understand how proteins fold or how they interact, as well as how cells and tissues respond to applied loads. There is even a new area of application we call mechanochemistry, where scientists study how molecular mechanics influences the rate of reactions in the body. Biomechanics is thus a very broad field.

 

With protein and tissue studies, biomechanics seems to have a very modern dimension, doesn’t it?

JH: In a way, biomechanics dates back to antiquity. When humans first picked up a stick and used it to straighten up, it was biomechanics. But the field as we know it today emerged in the mid 1960’s, with early studies on cells — the red blood cells were the first to be studied. So, we have been interested in detailed tissue and cells mechanics for only about 50 years. Protein mechanics is even newer.

 

Why do you think the merger between mechanics and biology occurred in the 60’s? Has there been a catalyst for it?

JH: There are probably five reasons why biomechanics emerged in the mid 60’s. First, the post-World War II era included a renaissance in mechanics; scientists gained a more complete understanding of the nonlinear field theories. At the same time, computers emerged, which were needed to solve mathematically complex problems in biology and mechanics — this is a second reason. Third, numerical methods, in particular finite element methods, appeared and helped in understanding system dynamics. Another important reason was the space race and the question, ‘How will people respond in outer space, in a zero-gravity environment?’, which is fundamentally a biomechanical question. And finally, this was also the period in which key molecular structures were discovered, like that for DNA (double helix) or collagen (triple helix), the most abundant protein in our bodies. This raised questions about their biomechanical properties.

 

So technological breakthroughs definitely have played an important part in the development of biomechanics. Are recent advances still giving you new perspectives for the field?

JH: Today, technological advances give us the possibility to perform high-resolution measurements and imaging. At the same time, there have been great advances in understanding the genome, and how mechanics influence gene expression. I have been a strong advocate of relating biomechanics – which relies on theoretical principles and concepts of mechanics – to what we call today mechanobiology — how cells respond to mechanical stimuli. There has been interest in this relationship since the mid 70’s, but we have only understood better the way a cell responds to its mechanical environment by changing the genes it expresses since the 90’s.

 

Is interdisciplinary research an important aspect of biomechanics?

JH: Yes, biomechanics benefited tremendously from interdisciplinarity. Many fields and professions must work together: engineers, mathematicians, biochemists, clinicians, and material scientists to name a few. And again, this has been improved through technology: the internet allowed better international collaborations, including web-based exchanges of data and computer programs, both of which increase knowledge.

 

Would you say your partnership with Stéphane Avril and Mines Saint-Étienne is a typical example of such a collaboration?

JH: Yes, and it is interesting how it came about. I have a colleague in Italy, Katia Genovese, who is an expert in optical engineering. She developed an experimental device to improve imaging capabilities during mechanical testing of arteries. We worked with her to increase its applicability to vascular mechanics studies, but we also needed someone with expertise in numerical methods to analyse and interpret the data. Hence we partnered with Stéphane Avril, who had these skills. The three of us could then describe vascular mechanics in a way that was not possible before; none of us could have done it alone, we needed to collaborate. Working together, we developed a new methodology, and we now use it to better understand artery dissections and aneurysms. For me, the honoris causa title I have been awarded recognizes, in some way, the importance of this international collaboration, and I am very pleased about that.

Considering this research on aneurysms and the description of the cardiovascular system, as well as everything else you have worked on, which scientific contribution are you proudest of today?

JH: I am proud of a new theory that we proposed, called a ‘constrained mixture theory’ of soft tissue growth and remodelling. It allows one not only to describe a cell or tissue at its current time and place, but also to predict how it will evolve and change when subjected to mechanical loads. The word ‘mixture’ is important because tissues consist of multiple constituents mixed together: for example, smooth muscle cells, collagen fibers, and elastic fibers in arteries. This is what we call a mixture. It is through interactions among these constituents, as well as through the individual properties of each, that tissue function is achieved. Based on a precise description of these properties, we can describe, for instance, how an artery will be affected by a disease, or how it will be altered by changes in blood circulation. I think this type of predictive capability will help us design better medical devices and therapeutic interventions.
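To give a rough idea of what such a theory looks like formally, here is a minimal sketch in the spirit of the constrained mixture literature; the notation is illustrative rather than the exact formulation used by Humphrey and his colleagues. The stress borne by the tissue is the sum of the stresses carried by its constituents, and the mass of each constituent evolves as old material is degraded and new material is deposited:

```latex
% Total Cauchy stress as the sum of constituent contributions
\sigma(t) = \sum_{\alpha} \sigma^{\alpha}(t)

% Evolution of the apparent mass density of constituent \alpha:
%   Q^{\alpha}(t)      fraction of material present at time 0 that survives to t
%   m^{\alpha}(\tau)   rate of production of new material at time \tau
%   q^{\alpha}(t-\tau) fraction of that new material still present at time t
\rho^{\alpha}(t) = \rho^{\alpha}(0)\, Q^{\alpha}(t)
  + \int_{0}^{t} m^{\alpha}(\tau)\, q^{\alpha}(t-\tau)\, \mathrm{d}\tau
```

Each constituent (elastin, collagen, smooth muscle) can thus be produced, loaded and removed on its own schedule, while all constituents are constrained to deform together with the tissue as a whole.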

 

Could it be a game changer in medicine?

JH: ‘Game changer’ is a strong term, but our research advances definitely have clear clinical applications. The method we developed to predict where and when a blood clot will form in an aneurysm has the potential to improve the understanding and prediction of patient outcomes. The constrained mixture theory could also have real applications in the emerging area of tissue engineering. For example, we are working with Dr. Chris Breuer, a paediatric expert at Nationwide Children’s Hospital in Columbus, Ohio, on using our theory to design polymeric scaffolds for replacing blood vessels in children with congenital defects. The idea is that the synthetic component will slowly degrade and be replaced by the body’s own cells and tissues. Clinical trials are in progress in the US, and we are very excited about this project.

 

These are very concrete examples of how biomechanics research can lead to significant changes in medical practices and procedures. Is this a goal you have always had in mind throughout your career?

JH: Until about seven years ago, my work was still fundamental science. I then decided to move to Yale University to interact more closely with medical colleagues, so my interest in clinical application is recent. But since we are talking about how biomechanics could be a game changer, I can give you two major breakthroughs made by colleagues at Stanford University that show how medicine is being impacted. Dr. Alison Marsden uses computational biomechanical models to improve surgical planning, including for the same tissue-engineered artery that we are working on. And Dr. Charles Taylor has started a company called HeartFlow, which he hopes will allow computational biomechanics to replace a very invasive diagnostic approach with a non-invasive one. There is great promise in this idea and others like it.

 

What are your projects for the upcoming years?

JH: I plan to focus on three main areas in the future. The first is designing better vascular grafts using computational methods. I also hope to deepen our understanding of aneurysms, from both a biomechanical and a genetic standpoint. And the third is understanding the role of blood clots in thrombosis. These are my goals for the years to come.

 

Energy transition, Digital technology transition

Digital technology and energy: inseparable transitions

[dropcap]W[/dropcap]hat if one transition were inextricably linked with another? Faced with environmental challenges, population growth and the emergence of new uses, a transformation is underway in the energy sector. Renewable resources are playing a larger role in the energy mix, advances in housing have helped reduce heat loss, and modes of transportation are changing their models to limit the use of fossil fuels. But beyond these major metamorphoses, the energy transition in progress is intertwined with that of digital technology. Electrical grids, like heat networks, are becoming “smart.” Modeling is now seen as indispensable from the earliest stages of the design or renovation of buildings.

The line dividing these two transitions is indeed so fine that it is becoming difficult to say to which category the changes taking place in the world of connected devices and telecommunications belong. For mobile phone operators, power supply management for mobile networks is a crucial issue: the proportion of renewable energy must be increased, but doing so can lower the quality of service. How can the right balance be determined? Telephones themselves also pose challenges for improving energy autonomy, in terms of both hardware and software.
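To make the nature of this trade-off concrete, here is a purely illustrative toy model, not drawn from operators’ actual methods or from the research presented in this report: the renewable share is rewarded, the expected service degradation it causes is penalised, and the best compromise is simply the share that maximises the resulting score. Every name and number below is a hypothetical assumption.

```python
# Toy illustration of the renewable-share / quality-of-service trade-off
# for a mobile network site. The penalty model and all numbers are
# hypothetical assumptions, chosen only to make the trade-off concrete.

def site_score(renewable_share: float,
               outage_penalty: float = 5.0,
               carbon_weight: float = 1.0) -> float:
    """Higher is better: reward renewable use, penalise expected outages."""
    # Assume expected service degradation grows quadratically with
    # reliance on intermittent sources (an assumption, not a measured law).
    expected_outage = 0.2 * renewable_share ** 2
    return carbon_weight * renewable_share - outage_penalty * expected_outage

# Scan candidate shares from 0% to 100% and keep the best compromise.
best_share = max((s / 100 for s in range(101)), key=site_score)
print(f"Best renewable share under this toy model: {best_share:.0%}")
```

In this toy setting the optimum lands at 50%; the research presented later in this report relies on far richer models of traffic, storage and network behaviour.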

This interconnection illustrates the complexity of the changes taking shape in contemporary societies. In this report we seek to present issues situated at the interface between energy and digital technology. Through research carried out in the laboratories of IMT graduate schools, we outline some of the major challenges currently facing civil society, economic stakeholders and public policymakers.

For consumers, the forces at play in the energy sector may appear complex. Often reduced to a sort of technological optimism that takes little account of scientific reality, they are shaped by significant political and economic stakes. The first part of this report helps reframe the debate by providing an overview of the energy transition through an interview with Bernard Bourges, a researcher who specializes in this subject. The European SEAS project is then presented as a concrete example of the transformations underway, offering a look at the reality behind the promises of smart grids.


The second part of the report focuses on heat networks, which, like electricity networks, can also be improved with the help of algorithms. Heat networks deliver 9% of the heat distributed in Europe and can therefore serve as a lever for reducing energy costs in buildings. Bruno Lacarrière’s research illustrates the importance of digital modeling in optimizing these networks (article to come). And because reducing heat loss also matters at the level of individual buildings, we take a closer look at Sanda Lefteriu’s research on improving the energy performance of homes.


The report concludes with a third section dedicated to the telecommunications sector. An overview of Loutfi Nuaymi’s work highlights the role played by operators in optimizing the energy efficiency of their networks and shows how important algorithms are becoming for them. We also examine how electricity consumption can be regulated by reducing the demand for computation on our mobile phones, with a look at research by Maurice Gagnaire. Finally, since connected devices require ever more powerful batteries, the last article explores a new generation of lithium batteries and the high hopes for the technologies being developed in Thierry Djenizian’s laboratory.


 

[divider style=”normal” top=”20″ bottom=”20″]

To further explore this topic:

To learn more about how the digital and energy transitions are intertwined, we suggest these related articles from the I’MTech archives:

5G will also consume less energy

In Nantes the smart city becomes a reality with mySMARTLife

Data centers: taking up the energy challenge

The bitcoin and blockchain: energy hogs

[divider style=”normal” top=”20″ bottom=”20″]