Fake News

A real way to look at fake news

The SARS-CoV-2 virus is not the only thing that has spread during the Covid-19 pandemic: fake news has also made its way around the world. Although fake news existed well before, the unprecedented crisis has paved the way for its explosion. Anuragini Shirish, a researcher at Institut Mines-Télécom Business School, explains the factors at play in this trend and how it could be limited in the future.

Why has the pandemic been conducive to the emergence of fake news?

Anuragini Shirish: At the individual level, fear and uncertainty are psychological factors that have played an important role. People fear for their own safety and that of their families, their jobs and their resources, which creates deep uncertainty about both the present and the future. In response, people try to make sense of the situation and understand what is going to happen in order to reassure themselves, from both a health and an economic point of view. To do so, they look for information, regardless of how truthful it is.

How do individuals seek guidance in an unpredictable situation?

AS: The main sources of guidance are institutional resources. One important resource is freedom of the media. In countries like India, the media can be influenced by politicians and people tend not to trust it entirely. In Nordic countries, on the other hand, the media focuses on being as objective as possible and people are taught to expect that objectivity. When trust in the traditional media is low, as may be the case in France, individuals tend to seek out alternative sources of information. Freedom of the media therefore acts as an institutional resource: if people have confidence in the strength and impartiality of their media, it tends to lower their level of fear and uncertainty.

Another important resource is the measures a government takes to increase perceptions of economic freedom. If individuals believe that the government can maintain job security and/or their sources of income throughout the pandemic, including periods of lockdown, this also helps reduce their fear and uncertainty. In countries such as Brazil, India and the United States, this has not been the case.

Lastly, there is freedom of political expression, which gives individuals the opportunity to express and share their doubts publicly. This freedom, however, also tends to foster the emergence of fake news. This is one of the findings of a study we conducted with Shirish Srivastava and Shalini Chandra, from HEC Paris and the SP Jain School of Global Management.

How is the lack of confidence in institutions conducive to the emergence and dissemination of fake news?

AS: When people trust institutions, they are less likely to seek information from alternative sources. Conversely, when there is a low level of trust in institutions, people tend to react by seeking out all kinds of information on the internet.

Why and how has fake news spread to such an extent?

AS: In order to verify the information they obtain, people tend to share it with acquaintances and close friends to get their feedback on its validity. And due to their cognitive biases, people tend to consume and share ideas and beliefs they like, even when they're aware that the information may be false. Fake news is generally structured to evoke strong emotions such as anger, fear or sadness, which also helps it spread more easily than information presented in a more rational or neutral way.

Each country has its own characteristics when it comes to the emergence and dissemination of fake news, which is why an understanding of institutional resources helps identify the factors behind these national differences. The emergence and dissemination of fake news vary widely from country to country: the inhabitants of a country are far more concerned about what's happening in their own country. Fake news is therefore highly context-specific.

Where is most fake news found?

AS: The majority of fake news is found on social media. That's where it spreads the quickest, since it is extremely easy to share. Social media algorithms also display the information that people like the most, reinforcing their cognitive biases and their desire to share this information. And social media is the number-one medium consumed by individuals, thanks to the ease of mobile access and connectivity available in many countries around the world.
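To make this mechanism concrete, here is a deliberately simplified sketch (in Python, and in no way any platform's actual code) of how ranking a feed purely by predicted engagement mechanically favors the most emotionally charged items, whether or not they are true:

```python
# Toy model of an engagement-optimized feed (illustrative only).
# Items that trigger strong reactions get higher predicted engagement,
# so they rise to the top regardless of their accuracy.

posts = [
    {"title": "Measured official update",  "predicted_engagement": 0.02},
    {"title": "Outrage-inducing rumor",    "predicted_engagement": 0.11},
    {"title": "Heartwarming (fake) story", "predicted_engagement": 0.08},
]

# Sort by predicted engagement, most clickable first.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
for post in feed:
    print(post["title"])
```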

Who creates fake news?

AS: It’s hard to understand the deeper motivations of each individual who creates fake news, since they don’t typically brag about it! Some may do so for economic reasons, by generating “clicks” and the revenue that comes with them. Almost half of fake news is generated for political reasons, to destabilize opposing parties. And sometimes it comes directly from political parties. Uncertain situations like pandemics polarize individuals in society, which facilitates this process. And then there are individuals who may just want to create general confusion, for no apparent economic or political motives.

How can we as individuals contribute to limiting the spread of fake news?

AS: When we aren't sure about the validity of information, we must not act on it or share it with others before finding out more. It's a human tendency to try to verify the legitimacy of information by sharing it, but at a larger scale that's a bad strategy.

How can we tell if information may be false?

AS: First of all, we must learn to think critically and not accept everything we see. We must critically examine the source or website that has posted the information and ask why it was posted. There is an especially high level of critical thinking in countries such as Finland or the Netherlands, since these skills are taught in high schools and universities, in particular through media studies classes. But in countries where people are not taught to think critically to the same extent, and trust in the media is low, people are paradoxically more critical of information that comes from the institutional media than of information that comes from social media. Tools like Disinformation Index or Factcheck.org may be used to verify sources in order to check whether or not information is authentic.

Is fake news dangerous?

AS: It depends on the news. During the pandemic, certain light-hearted fake news was spread. It didn't help people solve their problems, but it provided entertainment for those who needed it. For example, a tweet that appeared in March 2020, amid the recommendations for social distancing, claimed that a group of elephants in China's Yunnan province had drunk corn wine and fallen asleep. This tweet was shared 264,000 times and got 915,500 likes and 5,000 comments. It was later "debunked" (proven to be false) in an article that appeared in National Geographic. This kind of fake news does not have any harmful consequences.

But other kinds of fake news have had far more serious consequences. First, political fake news generally reduces trust in institutional resources. It doesn't offer any solutions and creates more confusion. Paradoxically, this increases fear and uncertainty in individuals and facilitates the dissemination of more fake news, creating a vicious circle! Since it reduces institutional trust, government programs have less of an impact, which also has economic implications. During the pandemic, this has had a major impact on health: not only have vaccination campaigns had less of an effect, but people have self-medicated based on fake news and died as a result. People's mental health has also suffered through prolonged exposure to uncertainty, at times leading to mental illness or even suicide. This is also why the term "infodemic" has appeared.

Is social media trying to fight the spread of fake news?  

AS: During the pandemic, content regulation by the platforms has increased, in particular through UN injunctions and the gradual implementation of the Digital Services Act. For example, Twitter, Facebook and Instagram are trying to provide tools to inform their users which information may be inauthentic. The platforms were not prepared for this kind of regulation, and they generated a lot of revenue from the large volume of information being shared, whether or not it was true. This is changing – let's hope that it continues over time!

Read more on I’MTech: Digital Service Act: Regulating the content of digital platforms Act 1

What are the levels of institutional control over fake news?

AS: Control over information must be exercised through various approaches, since it affects many aspects of society. The government can increase its presence in the media and social media, and improve internet security. There are two ways of doing this: through the law, by punishing the perpetrators of fake news, but also by increasing collective awareness and providing programs to teach people how to verify information. It's important to put this in place ahead of time, in order to anticipate potential crises that may occur in the future and to monitor levels of collective awareness. The goal, however, is not to control the freedom of the media; on the contrary, this freedom increases the contribution of independent media and signals to citizens that the government seeks to be impartial.

How can we improve people’s relationship with information and institutions in general?

AS: Individuals' behavior is difficult to change in the long term: new regulations are ultimately violated when people see them as meaningless. So we must also help citizens find value in the rules of society that the government may put in place, so that they adhere to them.

By Antonin Counillon

Easier access to research infrastructure for the European atmospheric science community

Improving access to large facilities for research on climate and air quality and optimizing their use are the objectives of the European ATMO-ACCESS project. Véronique Riffault and Stéphane Sauvage, researchers at IMT Nord Europe, one of the project's 38 partner institutions, explain the issues involved.

What was the context for developing the ATMO-ACCESS project?

Stéphane Sauvage – The ATMO-ACCESS project responds to an H2020-INFRAIA call for pilot projects, specifically opened to certain research infrastructures (RI), to facilitate access for a wide community of users and develop innovative access services that are harmonized at the European level.

IMT Nord Europe's participation in this project is connected to its significant involvement in the ACTRIS (Aerosol, Clouds, and Trace Gases Research InfraStructure) RI. ACTRIS is a distributed RI bringing together laboratories of excellence and observation and exploration platforms to support research on climate and air quality. It helps improve understanding of past, present and future changes in atmospheric composition and of the physico-chemical processes that contribute to regional climate variability.

What is the goal of ATMO-ACCESS?

S.S. – ATMO-ACCESS is intended for the extended atmospheric science community. It involves three RI: ACTRIS, ICOS and IAGOS, combining stationary and mobile observation and exploration platforms, calibration centers and data centers. It's a pilot project aimed at developing a new model of integrating activities for this infrastructure, in particular by providing a series of recommendations for harmonized, innovative access procedures to help establish a sustainable overall framework.

What resources will be used to reach this goal?

S.S. – The project has received €15 million in funding, including €100K for IMT Nord Europe, where four research professors and a research engineer are involved. ATMO-ACCESS will provide scientific and industrial users with physical and remote access to 43 operational European atmospheric research facilities, including ground observation stations and simulation chambers as well as mobile facilities and calibration centers, which are essential components of RI.

Why is it important to provide sustainable access to research facilities in the field of atmospheric science?

Véronique Riffault – The goal is to optimize the use of large research facilities, pool efforts and avoid duplication, for streamlining and environmental transition purposes, while promoting scientific excellence and maintaining a high level of knowledge and expertise transfer, international collaboration, training for young scientists, and contribution of RI to innovative technologies and economic development.

What role do IMT Nord Europe researchers play in this consortium?

V.R. – IMT Nord Europe researchers are responsible for developing virtual training tools for the users of these research facilities and their products. Within this scientific community, IMT Nord Europe has recognized expertise in developing innovative learning resources (Massive Open Online Courses (MOOCs), serious games), based on the resources the school has already created in collaboration with its Educational Engineering center: in particular, a first MOOC in English on the causes and impacts of air pollution, and a serious game, which should be incorporated into a second module of this MOOC currently in development.

As part of ATMO-ACCESS, a pilot SPOC (Small Private Online Course) will present the benefits of and issues related to this infrastructure, and a serious game will draw on the data provided by observatories and stored in data centers, while video tutorials for certain instruments or methodologies will help disseminate good practices.

Who are your partners and how will you collaborate scientifically?

V.R. – The project is coordinated by CNRS and brings together 38 partner institutions from 19 European countries. We'll be working with scientific colleagues from a variety of backgrounds: calibration centers responsible for ensuring measurement quality, data centers for the technical development of resources, and of course the community as a whole, to best respond to expectations and engage in a continuous improvement process. In addition to the academic world, other users will be able to benefit from the tools developed through the ATMO-ACCESS project: major international stakeholders and public authorities (ESA, EEA, EUMETSAT, EPA, governments, etc.) as well as the private sector.

The project launch meeting has just been held. What are the next important steps?

V.R. – That’s right, the project was launched in mid-May. The first meeting for the working group in which IMT Nord Europe is primarily involved is scheduled for after the summer break. Our first deliverable will be the interdisciplinary SPOC for atmospheric science, planned for less than two years from now. The project will also launch its first call for access to RI intended for atmosphere communities and beyond.

Interview by Véronique Charlet


Nuclear fission reveals new secrets

Almost 80 years after the discovery of nuclear fission, it continues to unveil its mysteries. The latest to date: an international collaboration has discovered what makes the fragments of nuclei spin after fission. This offers insights into how atomic nuclei work and into improving our future nuclear power plants.

Take the nuclei of uranium-238 (the ones used in nuclear power plants), bombard them with neutrons, and watch how they break down into two nuclei of different sizes. Or, more precisely, observe how these fragments spin. This is, in short, the experiment conducted by researchers from 37 institutes in 16 countries, led by the Irène Joliot-Curie Laboratory in Orsay, in the Essonne department. Their findings, which offer insights into nuclear fission, have been published in the journal Nature. Several French teams took part in this discovery.  

The mystery of spinning nuclei

But why is there a need to conduct this kind of experiment? Don't we understand fission perfectly, since the phenomenon was discovered in the late 1930s by German chemists Otto Hahn and Fritz Strassmann and Austrian physicist Lise Meitner? Aren't there hundreds of nuclear fission reactors around the world that allow us to understand everything? In a word – no. Some mysteries still remain, and among them is the spin of nucleus fragments. Spin is the quantum equivalent of angular momentum: it describes, more or less, how the nucleus spins like a top.

Even when the original nucleus is not spinning, the nuclei resulting from fission still spin. How do they acquire this angular momentum? What generates this rotation? Up to now, there had been two competing hypotheses. The first, supported by the majority of physicists, was that this spin is created before fission. In this case, there must be a correlation between the spins of the two fragments. The second was that the spin of the fragments is acquired after fission, and that these spins are therefore independent of each other. The findings of the 37 teams are decisive: the second hypothesis is correct.

184 detectors and 1,200 hours of radiation

“We have to think of the nucleus like a liquid drop,” explains Muriel Fallot, a researcher at Subatech (a joint laboratory affiliated with IMT Atlantique, CNRS and the University of Nantes), who took part in the experiment. “When it is struck by the neutron, it splits and each fragment is deformed, like a drop that has received an impact. It is when the fragment attempts to return to its spherical shape to acquire greater stability that the energy released is converted into heat and rotational energy.”

To achieve these results, the teams irradiated not only uranium-238 but also thorium-232, two nuclei that can split when they collide with a neutron (these are referred to as fissile nuclei). And this was carried out over 1,200 hours, between February and June 2018. The fragments dissipate the energy they have accumulated in the form of gamma radiation, which is detected using 184 detectors placed around the bombarded nuclei. Depending on the fragments' spin, the photons do not arrive at the same angle, so an analysis of the radiation makes it possible to trace the fragments' spin. These experiments were conducted at the ALTO accelerator located in Orsay.

Better understanding the strong interaction

These findings, which offer important insights into the fundamental physics of nuclear fission, will now be analyzed by theoretical physicists from around the world. Certain theoretical models will have to be abandoned, while others will incorporate this data to explain fission quantitatively. They should help physicists better predict the stability of radioactive nuclei.

“Today, we are able to predict the lifetime of some heavy nuclei, but not all of them,” says Muriel Fallot. “The more unstable they are, the less we are able to predict them. This research will help us better understand the strong interaction, which binds the protons and neutrons within nuclei. Because this strong interaction depends on the spin.”

Applications for reactors of the future

This new knowledge will help researchers working on producing nuclei that are “exotic”: very heavy, or with a large excess of protons compared to neutrons (or the reverse). Will these findings lead to the production of new, even heavier nuclei? They would provide food for thought for theorists seeking to further understand nuclear interactions within nuclei.

In addition to being of interest at the fundamental level, these findings have important applications for the nuclear industry. In a nuclear power plant, a nucleus obtained from fission that “spins quickly” gives off a lot of energy in the form of gamma radiation, which can damage certain materials such as fuel cladding. Yet, “We don't know how to accurately predict this energy dissipation. There is up to a 30% gap between the calculations and the experiments,” says Muriel Fallot. “That has an impact on the design of these materials.” While current reactors are managed well based on the experience acquired, these findings will be especially useful for more innovative future reactors.

Cécile Michaut


Learning to incorporate values in collective design

Designing projects implies that individuals or groups must pool their values to collaborate effectively. But the various parties involved may be guided by diverging value systems, making it difficult to find compromises and common solutions. Françoise Détienne and Michael Baker, researchers at Télécom Paris, explain how the role of values in collective design can be understood.

How is a value defined in the field of collective design?

Françoise Détienne: In general, the concept of values refers to principles or beliefs that guide individuals’ actions and choices. Put that way, any preference might be seen as a value, so we must limit the definition to the ethical dimension in choices, connected to social and human aspects. The notions of inclusion or privacy protection are examples of these kinds of values.

Michael Baker: Certain notions may be considered absolute values in broad terms – like freedom, for example – but they can be divided into different nuances, such as freedom of expression or freedom of choice. And some terms or expressions are subject to implicit value judgments. For example, the word “business” may, in certain contexts, express a negative value judgment, although it refers to something neutral from a values perspective. In order to identify the underlying values in interactions produced in collective design situations, we must therefore go beyond language by taking into account the context in which statements are made.

How can we understand the role of values in the design process?

FD: Most current approaches are based on the concepts of Value Sensitive Design (VSD), which treat values as discrete and independent criteria that must simply be added to the other types of design criteria. Most of the time, however, individual and collective values are organized into systems that we refer to as ideologies. Here, ideologies mean the sets of values underlying individual and collective viewpoints. We have proposed a new approach called Ideologically Embedded Design (IED), which differentiates between several levels at which value systems operate: the form of participation and its underlying principles, the evolution of the design and decision-making process, and the group or community involved in the process and its production. This approach also emphasizes the interactions and the possible co-evolution between these levels.

How has the understanding of the role of values in design evolved?

MB: Up to now, values in design have been analyzed based on the objects or physical infrastructure resulting from projects, which reflect certain political and social choices. Analyses of these objects made it possible to extract values through ex-post deconstruction. But the current design ergonomics movement seeks instead to analyze how values come into play during the design process and how to deal with value conflicts.

What are some organizations where thinking about values in advance is a priority?

FD: In general, the design of collaborative organizations is rooted in strong values. Participatory housing, which aims to implement shared governance systems, is a good example. The considerations of the individuals involved focus primarily on how they must be organized, based on values that are in line with sharing, such as respect, tolerance and equity in decision-making. In communities like these, the stakes of such values are high, since the goal is to live together successfully.

MB: Many online communities give significant thought to values. One of the best examples is Wikipedia. The Wikipedia community is based on values such as open access to knowledge, free participation of contributors and neutrality of point of view. Should disagreements rooted in opposing value systems arise, there is no real way to “resolve” the conflict. In this case, to represent the diversity of viewpoints, the conflict may be handled by dividing the text into different sections, each of which reflects a different viewpoint. For example, an article on Freud may be divided into sections that represent the topic from the viewpoint of behavioral psychologists, neuropsychologists, psychoanalysts, etc.

Are there discrepancies at times between the values promoted or upheld by an organization and the way they are applied on a concrete level?

MB: There is, indeed, a disconnect at times between the values advanced by an organization and the way they are actually implemented. For example, the notion of “collaboration” may be put forth as a positive value, with various rhetorical uses. For the last decade or so, this term has had a positive connotation and is sometimes used for image and marketing purposes, along the same lines as greenwashing. Research is also being carried out on the possible differences and tensions between an organization's institutional discourse and how groups actually work within the organization.

Are there conflicting values within the same organization at times?

FD: At a certain level of definition of values, this is often the case. An important issue is clarity in the definition of values during discussions and debates, since each individual may have a different interpretation. So it's important to support the co-construction of the meaning of values through dialogue, and to identify whether or not there are truly competing values.

MB: In discussions about a design, viewpoints must evolve in order to reach a compromise, but that does not mean that each individual's ideologies will change drastically over time. Almost by definition, it seems, values are stable and typically change only very slowly (except through a radical “conversion”). So we must understand each individual's underlying ideologies and frame discussions about the decision-making process by taking them into account. For example, it's helpful to set out in advance the ways in which the process is collaborative or participative, and whether there must be equitable participation between the various stakeholders. The organizational framework is also very value-oriented.

What are some concrete methods that can help improve collaboration?

FD: Various methods can be applied to improve the alignment of values and compromises within a group. While approaches such as VSD help identify values, ensuring that debates are constructive is not easy. We propose methods from constructive ergonomics such as role playing, organizational simulation and imagining use situations, as well as reflective methods. For example, self-confrontation techniques can be put in place by filming a working group and then having the group members watch the video. This gives them the opportunity to think in a structured way about the respective underlying values that guided their collective activity. Visualization tools can also help resolve such debates.

How can conflicts be resolved in the event of disagreements about values?

FD: In order to resolve conflicts that may arise, the use of a debate moderator who has been trained in advance for this role can prove to be very helpful. What are referred to as “avoidance” strategies may also be used, such as momentarily redirecting the discussion toward more practical questions, to avoid crystallizing conflicts and opposing viewpoints.

MB: It’s also important to redirect discussion toward compromises that allow different values to coexist. To do so, it can be helpful to bring the debate back to a level focusing on more general values. Sometimes, the more individuals specify what they mean by a value, the more viewpoints may oppose and lead to conflict. 

FD: And last but not least, this leads us to rethink the timeframe for design activity: to allow time for the co-construction and evolution of values (which will in all likelihood be slow) and for negotiation, and possibly to leave conflict resolution open. The emphasis is then not on producing a solution but on the process itself.

By Antonin Counillon

Bitcoin crash: cybercrime and over-consumption of electricity, the hidden face of cryptocurrency

Donia Trabelsi, Institut Mines-Télécom Business School; Michel Berne, Institut Mines-Télécom Business School; and Sondes Mbarek, Institut Mines-Télécom Business School

Wednesday 19 May will be remembered as the day of a major cryptocurrency crash: -20% for dogecoin, -19% for ethereum, -22% for dfinity, the supposedly infinite blockchain that was recently launched with a bang. The best-known of these currencies, bitcoin, limited the damage to -8.5% (US $39,587) after being down by as much as 30% over the course of the day. It is already down 39% from the record value it reached in April.

Elon Musk has gone from being seen as an idol to a traitor in the cryptocurrency market. Commons.wikimedia.org

Very few of the 5,000 cryptocurrencies recorded today have experienced growth. The latest ones to be launched, “FuckElon” and “StopElon”, say a lot about the identity of the individual considered to be responsible for this drop in prices set off over a week ago.

The former idol of the cryptocurrency world and iconic leader of Tesla Motors, Elon Musk, now seems to be seen as a new Judas by these markets. The founders of “StopElon” have even stated that their aim is to drive up the price of their new cryptocurrency in order to buy shares in Tesla and oust its top executive. However, bitcoin's relatively smaller drop seems attributable to reassuring signals.

Elon Musk sent shockwaves rippling through the crypto world last week when he announced that it would no longer be possible to pay for his cars in bitcoin, reversing the stance he had taken in March. He even hinted that Tesla might sell all of its bitcoins. As the guest host of the Saturday Night Live comedy show in early May, he had already caused dogecoin to plummet by referring to it as a “hustle” during a sketch, even though he had appeared on the show to support it.

 

The reason for his change of heart? The fact that it is harmful to the planet, as transactions using this currency require high electricity consumption. “Cryptocurrency is a good idea on many levels and we believe it has a promising future, but this cannot come at great cost to the environment,” stated Musk, who also heads the SpaceX space projects.

China also appears to have played a role in Wednesday's events. As the country is getting ready to launch a digital yuan, its leaders announced that financial institutions would be prohibited from using cryptocurrency. “After Tesla's about-face, China twisted the knife by declaring that virtual currencies should not and cannot be used in the market because they are not real currencies,” Fawad Razaqzada, analyst at Thinkmarkets, told AFP yesterday.

While a single man's impact on the price of these assets – which have seen a dramatic rise over the course of a year – may be questioned, his recent moves and about-face urge us to at least examine the ethical issues they raise. Our research has shown that there are at least two categories of issues.

The darknet and ransomware

The ethical issues surrounding cryptocurrencies remain closely related to the nature and very functioning of these assets. Virtual currencies are not associated with any governmental authority or institution. The bitcoin system was even specifically designed to avoid relying on conventional trusted intermediaries, such as banks, and escape the supervision of central banks. The value of a virtual currency therefore relies entirely, in theory, on the trust and honesty of its users, and on the security of an algorithm that can track all of the transactions.

Yet, due to their anonymity, lack of strict regulation and gaps in infrastructure, cryptocurrencies also appear to be likely to attract groups of individuals who seek to use them in a fraudulent way. Regulatory concerns focus on their use in illegal trade (drugs, hacking and theft, illegal pornography), cyberattacks and their potential for funding terrorism, laundering money and evading taxes.

Illegal activities accounted for no less than 46% of bitcoin transactions from 2009 to 2017, amounting to US $76 billion per year over this period, which is equivalent to the scale of US and European markets for illegal drugs. In April 2017, approximately 27 million bitcoin market participants were using bitcoin primarily for illegal purposes.

One of the best-known examples of cybercrime involving cryptocurrency is still the “Silk Road.” On this online black marketplace dedicated to selling drugs on the darknet – the part of the internet that can only be accessed with specific protocols – payments were made exclusively in cryptocurrencies.

In 2014, at a time when the price of bitcoin was around US $150, the FBI's seizure of over US $4 million in bitcoins on the Silk Road gave an idea of the magnitude of the problem facing regulators. At the time, the FBI estimated that this sum accounted for nearly 5% of the total bitcoin economy.

Cryptocurrencies have also facilitated the spread of attacks using ransomware, malware that blocks companies’ access to their own data, and will only unblock it in exchange for a cryptocurrency ransom payment. A study carried out by researchers at Google revealed that victims paid over US $25 million in ransom between 2015 and 2016. In France, according to a Senate report submitted in July 2020, such ransomware attacks represent 8% of requests for assistance from professionals on the cybermalveillance.gouv.fr website and 3% of requests from private individuals.

Energy-intensive assets

The main cryptocurrencies use a large quantity of electricity for mining, meaning the computing operations that create new units and verify transactions. The two main virtual currencies, bitcoin and ethereum, require complicated calculations that are extremely energy-intensive.
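To give a sense of why these calculations consume so much energy, here is a minimal, deliberately simplified sketch of the proof-of-work principle behind mining (a toy version only, far removed from bitcoin's actual protocol): the only way to find a valid result is brute-force trial and error.

```python
# Toy proof-of-work "mining" (illustrative sketch, not bitcoin's real protocol).
# A nonce is valid if the SHA-256 hash of (block data + nonce) falls below a
# target; the only known strategy is brute force, hence the energy cost.

import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # found after ~2**difficulty_bits attempts on average
        nonce += 1

# Each extra difficulty bit doubles the expected number of hashes, and
# therefore the electricity spent per block.
print(mine("example block", difficulty_bits=20))
```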

According to Digiconomist, bitcoin's energy consumption peaked at between 60 and 73 TWh in October 2018. On an annualized basis, in mid-April 2021, this figure is somewhere between 50 and 120 TWh, which is higher than the energy consumption of a country such as Kazakhstan. These figures are even more staggering when given per transaction: on 6 May 2019, the figure was 432 kWh per transaction, and over 1,000 kWh in mid-April 2021, which is equivalent to the annual consumption of a 30 m² studio apartment in France.

A comparison is often made with the Visa electronic payment system, which requires roughly 300,000 times less energy than bitcoin for each transaction. The figures cannot be strictly compared, but they clearly show that bitcoin transactions are extremely energy-intensive compared to routine electronic transactions.
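As a rough back-of-the-envelope check (a sketch using only the approximate figures quoted above, which vary by source and date):

```python
# Orders of magnitude implied by the figures quoted in this article.

btc_kwh_per_tx = 1_000        # bitcoin, mid-April 2021 estimate (kWh/transaction)
btc_to_visa_ratio = 300_000   # bitcoin-to-Visa energy ratio cited above

visa_wh_per_tx = btc_kwh_per_tx * 1_000 / btc_to_visa_ratio
print(f"Visa: ~{visa_wh_per_tx:.1f} Wh per transaction")   # ~3.3 Wh

# One bitcoin transaction (~1,000 kWh) thus uses about as much electricity
# as a year of consumption for a 30 m² studio apartment in France.
```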

How can we find a balance?

There are solutions to reduce the cost and energy impact of bitcoin, such as using green energy or increasing the energy efficiency of mining computers.

However, computer technology must still be improved to make this possible. Most importantly, the miners’ reward for mining new bitcoins and verifying transactions is expected to decrease in the future, forcing them to consume more energy to ensure the same level of income.

The initiators of this technology consider that the innovation offered by bitcoin promotes a free world market and connects the world financially. However, it remains a challenge to find the right balance between promoting an innovative technology and deterring the crime and reducing the ecological impact associated with it.

Donia Trabelsi, associate professor of finance, Institut Mines-Télécom Business School; Michel Berne, economist, director of training (retired), Institut Mines-Télécom Business School; and Sondes Mbarek, associate professor of finance, Institut Mines-Télécom Business School

This article was republished from The Conversation under a Creative Commons license. Read the original article (in French).


ThermiUp: a new heat recovery device

ThermiUp helps meet the challenge of energy saving in buildings. This start-up, incubated at IMT Atlantique, is set to market a device that transfers heat from grey water to fresh water. Its director, Philippe Barbry, gives us an overview of the system.

What challenges does the start-up ThermiUp help meet?

Philippe Barbry: Saving energy is an important challenge from a societal point of view, but also in terms of regulations. In the building industry, there are increasingly strict thermal regulations. The previous regulations were established in 2012, while the next ones will come into effect in 2022 and will include CO2 emissions related to energy consumption. New buildings must meet current regulations. Our device reduces energy needs for heating domestic water, and therefore helps real estate developers and social housing authorities comply with regulations.

What is the principle behind ThermiUp?

PB: It's a device that exchanges energy between grey water, meaning lightly polluted waste water from domestic use, and fresh water. The exchanger is placed as close as possible to the domestic water outlet so that this water loses a minimum of heat energy. The exchanger connects the water outlet pipe with the fresh water supply pipe.

On average, water from a shower is at 37°C and cools down slightly at the outlet: it is around 32°C when it arrives in our device. Cold water is at 14°C on average. Our exchanger preheats it to 25°C. Showers represent approximately 80% of the demand for domestic hot water and the exchanger makes it possible to save a third of the energy required for the total domestic hot water production.
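A quick back-of-the-envelope check shows how these figures combine (a sketch, assuming heat demand scales linearly with temperature differences and a 37°C draw-off temperature; these assumptions are ours, not ThermiUp's):

```python
# Rough consistency check of the figures quoted above (illustrative only).

t_cold, t_grey_in, t_preheat, t_shower = 14.0, 32.0, 25.0, 37.0

# Exchanger effectiveness: share of the recoverable grey-water heat captured.
effectiveness = (t_preheat - t_cold) / (t_grey_in - t_cold)
print(f"effectiveness ≈ {effectiveness:.0%}")        # ≈ 61%

# Shower water now starts at 25°C instead of 14°C, so less heating is needed.
shower_saving = (t_preheat - t_cold) / (t_shower - t_cold)

# Showers account for ~80% of domestic hot water demand.
total_saving = 0.80 * shower_saving
print(f"total saving ≈ {total_saving:.0%}")          # ≈ 38%, i.e. about a third
```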

Is grey water heat recovery an important energy issue in the building sector?

PB: Historically, most efforts have focused on heating and insulation for buildings. But great strides have been made in this sector, and these aspects now account for only 30% of energy consumption in new housing units. As a result, domestic hot water now accounts for 50% of these buildings' energy consumption.

What is the device’s life expectancy?

PB: That's one of the advantages of our exchanger: its life expectancy is equivalent to that of a building, which is considered to be 50 years. It's a passive system, which requires no electronics, moving parts or motor. It is based simply on the laws of gravity and energy transfer. It can't break down, which represents a significant advantage for real estate developers. ThermiUp reduces energy demand and can also be combined with other systems, such as solar water heating.

How does your exchanger work?

PB: It is not a traditional plate heat exchanger, since that would get dirty too quickly. Our research and development was based on other types of exchangers. It is a device made of copper, which is an easily recycled material. We spent two years at IMT Atlantique optimizing the prototype's heat exchange, geometry and industrial manufacturing technique. But I can't say more about that until it becomes available on the market in the next few months.

Do you plan to implement this device in other types of housing than new buildings?

PB: For now, we only plan to target the new-build market, which is a big market: approximately 250,000 housing units in multiple-dwelling buildings are built each year in France. In the future, we'll work on prototypes for individual houses as well as for the renovation sector.

Learn more about ThermiUp

By Antonin Counillon

Graphene, or the expected revolution in electronics: coming soon

Thibaut Lalire, IMT Mines Alès – Institut Mines-Télécom

“Material of the 21st century,” a “revolutionary material”: these are some of the ways graphene has been described since it was discovered in 2004 by Konstantin Novoselov and Andre Geim. The two scientists’ research on graphene won them the Nobel Prize in Physics in 2010. But how do things stand today – seventeen years after its discovery?

Graphene is known worldwide for its remarkable properties, whether mechanical, thermal or electrical. Its perfect honeycomb structure composed of carbon atoms is the reason why graphene is a high-performance material that can be used in numerous fields. Its morphology, in the form of a sheet just one atom thick, makes it part of the family of 2D materials. Manufacturers have stepped up research on this material since its discovery, and a wide range of applications have been developed, in particular by taking advantage of graphene’s electrical performance. Many sectors are targeted, such as aeronautics, the automotive industry and telecommunications.

Is there graphene in airplanes?

Graphene is used for its status as a champion of electrical conductivity, as well as for its low density and its flexibility. These properties allow it to join the highly exclusive club of materials used in aeronautics.

Lightning and ice buildup are problems frequently encountered by airplanes at high altitudes. The impact of a lightning strike on a non-conductive surface causes severe damage that can even include the aircraft catching fire. The addition of graphene, with its high electrical conductivity, makes it possible to dissipate this high-energy current. Airplanes are designed in such a way as to route the current as far as possible from risk areas – fuel tanks and control cables – and therefore prevent loss of control of the aircraft, or even explosion.

The history of graphene starts here. Umberto/Unsplash, CC BY

A coating composed of a resin reinforced with graphene, referred to as a “nanocomposite,” is used as an alternative to metal coating, since its low density makes it possible to obtain materials lighter than the original ones – limiting the aircraft's mass, and therefore its fuel consumption. But the electrically conductive materials required to dissipate the energy of the lightning strike have the drawback of reflecting electromagnetic waves, meaning that this kind of material cannot be used for stealth military applications.

To overcome this shortcoming, different forms of graphene have been developed that conserve its electrical conductivity while improving stealth. “Graphene foam” is one of these new structures. The wave penetrates the material and is reflected in all directions inside it, trapping it and gradually attenuating it. The wave cannot return to the radar, so the aircraft becomes stealthy. This is referred to as electromagnetic shielding.

Graphene for energy storage

Graphene has also become widely used in the field of electrical energy storage.

Graphene is an ideal candidate as an electrode for Li-ion batteries and supercapacitors. Its high electrical conductivity and high specific surface area (corresponding to the surface available on the graphene to accommodate ions and facilitate the exchange of electrons between the graphene electrode and the lithium) make it possible to obtain a large storage capacity. A large number of ions can easily insert themselves between the graphene sheets, which allows more electrons to be exchanged with the current, increasing the battery's electricity storage capacity and therefore battery life. The ease with which ions can insert themselves into the graphene electrode and the high electrical conductivity of this material (for rapid electron transfer) result in a battery with a much shorter discharge/charge cycle. Graphene's high conductivity makes it possible to deliver a great quantity of energy in a very short time, resulting in more powerful supercapacitors. Graphene is also a good thermal conductor, which limits temperature rise in batteries by dissipating the heat.

Electric batteries are increasingly pervasive in our lives. Graphene could help improve their performance. Markus Spiske/Unsplash, CC BY

At the industry level, Real Graphene has already developed an external battery that can completely recharge a mobile phone in 17 minutes. In an entirely different industry, Mercedes is working on a prototype for a car with a battery composed of graphene electrodes, proclaimed to have a range of 700 kilometers on a 15-minute recharge – values that are, at first glance, quite surprising at present, especially for electric vehicles, which require batteries with high storage capacity.

Making its way into the field of electronics

One area where graphene has struggled to set itself apart from semi-conductors is the field of electronics. Its electronic properties – due to its “band structure” – make it impossible to control electrons, and graphene therefore behaves like a semi-metal. This means that the use of graphene for binary – digital – electronics remains challenging, especially for transistors, which are instead composed of semi-conductors.

In order for graphene to be used in transistors, its band structure must be modified, which usually means degrading its honeycomb structure and other electrical properties. If we want to conserve this 2D structure, the chemical nature of the atoms that make up the material must be modified, for example by using boron nitride or transition metal dichalcogenides, which are also part of the family of 2D materials.

Microscopy of the interface between graphene and boron nitride (h-BN). Oak Ridge National Laboratory, Flickr, CC BY

If, however, we wish to use graphene, we must target applications in which mechanical properties (flexibility) are also sought, such as for sensors, electrodes and certain transistors reserved for analog electronics, like graphene field-effect transistors. The leading mobile phone companies are also working on developing flexible mobile phone screens for better ergonomics.

The manufacture of coming quantum computers may well rely on materials known as “topological insulators”: materials that are electrical conductors on their surface but insulators at their core. Research is now focusing on the topological phase of graphene, with electrical conduction only at the edges.

The wide variety of applications for graphene demonstrates the material’s vast potential and makes it possible to explore new horizons in a wide range of fields such as optoelectronics and spintronics.

This material has already proved itself in industry, but has not revolutionized it so far. However, ongoing research allows new fields of application to be discovered every year. At the same time, synthesis methods are continually being developed to reduce the price of graphene per kilogram and obtain a higher-quality material.

Thibaut Lalire, PhD student in materials science, IMT Mines Alès – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Data governance: trust it (or not?)

The original version of this article (in French) was published in the quarterly newsletter no. 20 (March 2021) of the Values and Policies of Personal Information (VP-IP) Chair.

On 25 November 2020, the European Commission published its proposal for a European data governance regulation, the Data Governance Act (DGA), which aims to “unlock the economic and societal potential of data and technologies like artificial intelligence”. The proposed measures seek to facilitate access to and use of an ever-increasing volume of data. As such, the text seeks to contribute to the movement of data between member states of the European Union (as well as with states located outside the EU) by promoting the development of “trustworthy” systems for sharing data within and across sectors.

Part of a European strategy for data

This proposal is the first of a set of measures announced as part of the European strategy for data presented by the European Commission in February 2020. It is intended to dovetail with two other proposed regulations dated 15 December 2020: the Digital Services Act (which aims to regulate the provision of online services, while maintaining the principle of the prohibition of a surveillance obligation) and the Digital Markets Act (which organizes the fight against unfair practices by big platforms toward the companies that offer services through them). A legislative proposal for the European Health Data Space is expected by the end of 2021, and possibly a “data law.”

The European Commission also plans to create nine shared European data spaces in strategic economic sectors and public interest areas, from manufacturing to energy, mobility, health, financial data and Green Deal data. The first challenge to overcome in this new data ecosystem will be to transcend national self-interests and those of the market.

The Data Governance Act proposal does not therefore regulate online services, content or market access conditions: it organizes “data governance,” meaning the conditions for sharing data, with the market implicitly presumed to be the paradigm for sharing. This is shown in particular by an analysis carried out through the lens of trust (which could be confirmed in many other ways).

The central role of trust

Trust plays a central and strategic role in all of this legislation since the DGA “aims to foster the availability of data for use, by increasing trust in data intermediaries and by strengthening data-sharing mechanisms across the EU.” “Increasing trust”, “building trust”, ensuring a “higher level of trust”, “creating trust”, “taking advantage of a trustworthy environment”, “bringing trust” – these expressions appearing throughout the text point to its fundamental aim.

However, despite the fact that the proposal takes great care to define the essential terms on which it is based (“data”, “reuse”, “non-personal data”, “data holder”, “data user”, “data altruism”, etc.), the term “trust”, along with the conditions for ensuring it, is exempt from such semantic clarification – even though “trust” is mentioned some fifteen times.

As in the past with the concept of dignity – which was part of the sweeping declarations of rights and freedoms in the aftermath of the Second World War but was nevertheless left undefined, despite being the cornerstone of all bioethical texts – the concept of trust is never made explicit. Lawmakers, and those to whom the obligations established by the legal texts are addressed, are expected to know enough about what dignity and trust are to implicitly share the same understanding. As with the notion of time for Saint Augustine, everyone is supposed to understand what it is, even though they are unable to explain it to someone else.

While some see this as allowing for a certain degree of “flexibility” to adapt the concept of trust to a wide range of situations and a changing society, like the notion of privacy, others see this vagueness – whether intentional or not – at best, as a lack of necessary precision, and at worst, as an undeclared intention.

The implicit understanding of trust

In absolute terms, it is not very difficult to understand the concept of trust underlying the DGA (as in the Digital Services Act, in which the European Commission proposes, among other things, a mysterious new category of “trusted flaggers”). To make it explicit, the main objectives of the text must simply be examined more closely.

The DGA represents an essential step for open data. The aim is clearly stated: to set out the conditions for the development of the digital economy by creating a single data market. The goal therefore focuses on introducing a fifth freedom: the free movement of data, after the free movement of goods, services, capital and people.  

While the GDPR created a framework for personal data protection, the DGA proposal intends to facilitate its exchange, in compliance with all the rules set out by the GDPR (in particular data subjects’ rights and consent when appropriate).

The scope of the proposal is broad.

The term data refers to both personal and non-personal data, whether generated by public bodies, companies or citizens. As a result, interaction with personal data legislation is particularly significant. Moreover, the DGA proposal is guided by principles for data management and re-use that were developed for research data. The “FAIR” data principles stipulate that data must be findable, accessible, interoperable and re-usable, while providing for exceptions that are, at this stage, neither listed nor specified.

To ensure trust in the sharing of this data, the category of “data intermediary” is created, which is the precise focus of all the political and legal discourse on trust. In the new “data spaces” which will be created (meaning beyond those designated by the European Commission), data sharing service providers will play a strategic role, since they are the ones who will ensure interconnections between data holders/producers and data users.

The “trust” which the text seeks to increase works on three levels:

  1. Trust among data producers (companies, public bodies, data subjects), so that they share their data
  2. Trust among data users regarding the quality of this data
  3. Trust in the trustworthy intermediaries of the various data spaces

Data intermediaries

These intermediaries emerge as the organizers of data exchange between companies (B2B) or between individuals and companies (C2B). They are the facilitators of the single data market: without them, it cannot be built technically or function. This intermediary position gives them access to the data they make available; it must therefore be ensured that they are impartial.

The DGA proposal differentiates between two types of intermediaries: “data sharing service providers,” meaning those who work “against remuneration in any form” with regard to both personal and non-personal data (Chapter III), and “data altruism organisations” that act “without seeking a reward…for purposes of general interest such as scientific research or improving public services” (Chapter VI).

For the first category, the traditional principle of neutrality is applied.

To ensure this neutrality, which “is a key element to bring trust, it is therefore necessary that data sharing service providers act only as intermediaries in the transactions, and do not use the data exchanged for any other purpose”. This is why data sharing services must be set up as legal entities that are separate from other activities carried out by the service provider in order to avoid conflicts of interest. In the division of digital labor, intermediation becomes a specialization in its own right. To create a single market, we fragment the technical bodies that make it possible, and establish a legal framework for their activities.

In this light, the real meaning of “trust” is “security” – security for data storage and transmission, nothing more, nothing less. Personal data security is ensured by the GDPR; the security of the market here relates to that of the intermediaries (meaning their trustworthiness, which must be legally guaranteed) and the transactions they oversee, which embody the effective functioning of the market.

From the perspective of a philosophical theory of trust, all of the provisions outlined in the DGA are therefore meant to act on the motivation of the various stakeholders, so that they feel a high enough level of trust to share data. The hope is that a secure legal and technical environment will allow them to transition from simply trusting in an abstract way to having trust in data sharing in a concrete, unequivocal way.

It should be noted, however, that when there is a conflict of values between economic or entrepreneurial freedom and the obligations intended to create conditions of trust, the market wins. 

In the impact assessment carried out for the DGA proposal, the Commission declared that it would choose neither a high-intensity regulatory intervention option (compulsory certification for sharing services or compulsory authorization for altruism organizations), nor a low-intensity regulatory intervention option (optional labeling for sharing services or voluntary certification for altruism organizations). It opted instead for a solution it describes as “alternative” but which is in reality very low-intensity (lower even, for example, than optional labeling in terms of guarantees of trust). In the end, a notification obligation with ex post monitoring of compliance was chosen for sharing services, along with the simple possibility of registering as an “organisation engaging in data altruism.”

It is rather surprising that the strategic option selected includes so few safeguards to ensure the security and trust championed so frequently by the European Commission in its official communication.

An intention based on European “values”

Margrethe Vestager, Executive Vice President of the European Commission, strongly affirmed this: “We want to give business and citizens the tools to stay in control of data. And to build trust that data is handled in line with European values and fundamental rights.”

But in reality, the text's entire reasoning shows that the values underlying the DGA are ultimately those of the market – a market that admittedly respects fundamental European values, but that is meant to entirely shape the European data governance model. This amounts to taking a position on the data processing business model used by the major tech platforms. These platforms, whether developed in the Silicon Valley ecosystem or in another part of the world with a desire to dominate, have continued to gain disproportionate power thanks to their business model. Their modus operandi is inherently based on the continuous extraction and complete control of staggering quantities of data.

The text is thus based on a set of implicit reductions that are presented as indisputable policy choices. The guiding principle, trust, is equated with security, meaning the security of transactions. Likewise, the European values upheld in Article 2 of the Treaty on European Union, which makes no mention of the market, are implicitly related to those that make the market work. Lastly, governance, the term that gives the DGA its title and in principle has a strong democratic basis, is equated only with the principles of fair market-based sharing, with the purported aim, among other things, of feeding the insatiable appetite of “artificial intelligence”.

As for “data altruism,” it is addressed in terms of savings in transaction costs (in this case, costs related to obtaining consent), and the fact that altruism can be carried out “without asking for remuneration” does not change the market paradigm: a market exchange is a market exchange, even when it’s free.

By choosing a particular model of governance implicitly presented as self-evident, the Commission fails to recognize other possible models that could be adopted to oversee the movement of data. A few examples that could be explored, and which highlight the many aspects the text overlooks, are:

  1. The creation of a European public data service
  2. Interconnecting the public services of each European state (based on the eIDAS or Schengen Information System (SIS) model; see also France’s public data service, which presently applies to data created as part of public services by public bodies)
  3. An alternative to a public service: public officials, like notaries or bailiffs, acting under powers delegated by a level of public authority
  4. A market-based alternative: pooling of private and/or public data, initiated and built by private companies.

What kind of data governance for what kind of society?

This text, however, highlights an interesting concept in the age of the “reign of data”: sharing. While data is commonly described as the black gold of the 21st century, the comparison overlooks an unprecedented and essential aspect: unlike water, oil or rare metals, which are finite resources, data is an infinite resource, constantly being created and ever-expanding.

How should data be pooled in order to be shared?

Should data from the public sector be made available in order to transfer its value creation to the private sector? Or should public and private data be pooled to move toward a new sharing equation? Will we see the emergence of hybrid systems of values that are evenly distributed or a pooling of values by individuals and companies? Will we see the appearance of a “private data commons”? And what control mechanisms will it include?

Will individuals or companies be motivated to share their data? This would call for quite a radical change in economic culture.

The stakes clearly transcend the simple technical and legal questions of data governance. Since the conditions are those of an infinite production of data, these questions make us rethink the traditional economic model.

It is truly a new model of society that must be discussed. Sharing and trust are good candidates for rethinking the society to come, as long as they are not reduced solely to a market rationale.

The text, in its current form, certainly offers points to consider, taking into account our changing societies and digital practices. Its terms, however, while attesting to worthwhile efforts at categorization adapted to these practices, require further attention and greater conceptual and operational precision.

While there is undoubtedly a risk of systematic commodification of data, including personal data, despite the manifest wish for sharing, it must also be recognized that the text includes possible advances. The terms of this collaborative writing are up to us – provided, of course, that all of the stakeholders are consulted, including citizens, who are the subjects and producers of this data.


Claire Levallois-Barth, lecturer in Law at Télécom Paris, coordinator and co-founder of the VP-IP Chair.

Mark Hunyadi, professor of moral and political philosophy at the Catholic University of Louvain (Belgium), member of the VP-IP Chair.

Ivan Meseguer, European Affairs, Institut Mines-Télécom, co-founder of the VP-IP Chair.