3D printing, a revolution for the construction industry?

Estelle Hynek, IMT Nord Europe – Institut Mines-Télécom

A two-story office building was “printed” in Dubai in 2019, becoming the largest 3D-printed building in the world by surface area: 640 square meters. In France, XtreeE plans to build five homes for rent by the end of 2021 as part of the Viliaprint project. Constructions 3D, with whom I am collaborating for my thesis, printed the walls of the pavilion for its future headquarters in only 28 hours.

Today, it is possible to print buildings. Thanks to its speed and the variety of architectural forms that it is capable of producing, 3D printing enables us to envisage a more economical and environmentally friendly construction sector.

3D printing consists in reproducing an object modeled on a computer by superimposing layers of material. Also known as “additive manufacturing”, this technique is developing worldwide in all fields, from plastics to medicine, and from food to construction.

For the 3D printing of buildings, the mortar – composed of cement, water and sand – flows through a nozzle connected to a pump via a hose. The sizes and types of printers vary from one manufacturer to another. “Cartesian” printers (up/down, left/right, front/back) are usually installed in a cage system, which entirely determines the size of the elements that can be printed. Other printers, such as the “maxi printer”, are equipped with a robotic arm and can be moved to any construction site to print different structural components directly in situ, over a wider range of object sizes.

Pavilion printed by Constructions 3D in Bruay-sur-l’Escaut. Constructions 3D, provided by the author

Today, concrete 3D printing specialists are operating all over the world, including COBOD in Denmark, Apis Cor in Russia, XtreeE in France and Sika in Switzerland. All these companies share a common goal: promoting the widespread adoption of additive manufacturing for the construction of buildings.

From the laboratory to full scale

3D printing requires mortars with very specific characteristics that enable them to undergo rapid changes.

In fact, these materials are complex and their characterization is still under development: the mortars must be sufficiently fluid to be “pumpable” without clogging the pipe, and sufficiently “extrudable” to emerge from the printing nozzle without blocking it. Once deposited in the form of a bead, the behavior of the mortar must change very quickly to ensure that it can support its own weight as well as the weight of the layers that will be superimposed on it. No spreading or “structural buckling” of the material is permitted, as it could destroy the object. For example, a simple square shape is susceptible to buckling, which could cause the object to collapse, because there is no material to provide lateral support for the structure’s walls. Shapes composed of spirals and curves increase the stability of the object and thus reduce the risk of buckling.

These four criteria (pumpability, extrudability, constructability and aesthetics) define the specifications for cement-based 3D-printing “inks”. The printing process must also not be detrimental to the service-related characteristics of the object: compared with traditional mortar application methods, it must not alter the material’s performance in terms of either strength (under bending and compression) or durability.

In addition, the particle size and overall composition of the mortar must be adapted to the printing system. Some systems, such as that used for the “Maxi printer”, require all components of the mortar except for water to be in solid form. This means that the right additives (chemicals used to modify the behavior of the material) must then be found. Full-scale printing tests require the use of very large amounts of material.

Initially, small-scale tests of the mortars – also called inks – are carried out in the laboratory in order to reduce the quantities of materials used. A silicone sealant gun can be used to simulate the printing and enable the validation of several criteria. Less subjective tests can then be carried out to measure the “constructable” nature of the inks. These include the “fall cone” test, which is used to observe changes in the behavior of the mortar over time, using a cone that is sunk into the material at regular intervals.
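By way of illustration only – this is not the protocol used at IMT Nord Europe – one common way to turn fall cone readings into a measure of structural build-up borrows Hansbo’s relation from soil mechanics, which links the penetration depth to an apparent yield stress. The cone parameters and readings below are invented for the sketch.

```python
# Hedged sketch: interpreting fall cone readings with a Hansbo-type relation.
# All numbers are illustrative assumptions, not measured values.
G = 9.81            # gravity, m/s^2
CONE_MASS = 0.080   # kg, assumed mass of the cone
K_FACTOR = 0.8      # dimensionless factor depending on the cone's apex angle

def apparent_yield_stress(penetration_m):
    """Apparent yield stress (Pa) inferred from a fall cone penetration depth (m)."""
    return K_FACTOR * CONE_MASS * G / penetration_m ** 2

# Penetration measured at regular resting times (minutes): shallower penetration
# over time means the ink is structuring, i.e. becoming more "constructable".
readings = {0: 0.025, 10: 0.018, 20: 0.014, 30: 0.012}
for minutes, depth in readings.items():
    print(f"t = {minutes:2d} min -> ~{apparent_yield_stress(depth):.0f} Pa")
```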

Once the mortars have been validated in the laboratory, they must then undergo full-scale testing to verify the pumpability of the material and other printability-related criteria.

Mini printer. Estelle Hynek, provided by the author

It should be noted that there are as yet no French or European standards defining the specific performance criteria for printable mortars. In addition, 3D-printed objects are not authorized for use as load-bearing elements of a building. This would require certification, as was the case for the Viliaprint project.

Finding replacements for the usual ingredients of mortar for more environmentally friendly and economical inks

Printable mortars are currently mainly composed of cement, a material that is well known for its significant contribution to CO₂ emissions. The key to obtaining more environmentally friendly and economical inks is to produce cement-based inks with a lower proportion of “clinker” (the main component of cement, obtained by the calcination of limestone and clay), in order to limit the carbon impact of mortars and their cost.

With this in mind, IMT Nord Europe is working on incorporating industrial by-products and mineral additives into these mortars. Examples include “limestone filler”, a very fine limestone powder; “blast furnace slag”, a co-product of the steel industry; metakaolin, a calcined clay (kaolinite); fly ash, derived from biomass (or from the combustion of powdered coal in the boilers of thermal power plants); non-hazardous waste incineration (NHWI) bottom ash, the residue left after the incineration of non-hazardous waste; and crushed and ground bricks. All of these materials have been used to partially or completely replace the binder, i.e. cement, in cement-based inks for 3D printing.

Substitute materials are also being considered for the granular “skeleton” structure of the mortar, usually composed of natural sand. For example, the European CIRMAP project is aiming to replace 100% of natural sand with recycled sand, usually made from crushed recycled concrete obtained from the deconstruction of buildings.

Numerous difficulties are associated with substituting the binder and the granular skeleton: mineral additives can make the mortar more or less fluid than usual, which affects the extrudable and constructable characteristics of the ink, and the mechanical strength under bending and/or compression may also be significantly affected, depending on the nature of the material used and the proportion of cement replaced.

Although 3D printing raises many issues, this new technology enables the creation of bold architectural statements and should reduce the risks present on today’s construction sites.

Estelle Hynek, PhD student in civil engineering at IMT Nord Europe – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


How our Web browsing has changed in 30 years

Victor Charpenay, Mines Saint-Étienne – Institut Mines-Télécom

On August 5, 1991, a few months before I was born, Tim Berners-Lee unveiled his invention, called the “World Wide Web”, to the public and encouraged anyone who wanted to discover it to download the world’s very first prototype Web “browser”. This means that the Web as a public entity is now thirty years old.

Tim Berners-Lee extolled the simplicity with which the World Wide Web could be used to access any information using a single program: his browser. Thanks to hypertext links (now abbreviated to hyperlinks), navigation from one page to another was just a click away.

However, the principle, which was still a research topic at that time, seems to have been undermined over time. Thirty years later, the nature of our web browsing has changed: we are visiting fewer websites but spending more time on each individual site.

Hypertext in the past: exploration

One of the first scientific studies of our browsing behavior was conducted in 1998 and made a strong assumption: that hypertext browsing was mainly used to search for information on websites – in short, to explore the tree structure of websites by clicking. Search engines remained relatively inefficient, and Google Inc. had just been registered as a company. As recently as 2006 (according to another study published the following year), it was found that search engines were only used to launch one in six browsing sessions, each of which then required an average of a dozen clicks.

Jade boat, China. Metropolitan Museum of Art, archive.org

Today, like most Internet users, your first instinct will doubtless be to “Google” what you are looking for, bypassing the (sometimes tedious) click-by-click search process. The first result of your search will often be the right one. Sometimes, Google will even display the information you are looking for directly on the results page, which means that there will be no more clicks and therefore no more need for hypertext browsing.

To measure this decline of hypertext between 1998 and today, I conducted my own (modest) analysis of browsing behavior, based on the browsing histories of eight people over a two-month period (April–May 2021). The participants sent me their histories voluntarily (no code was hidden in their web pages, in contrast to the practices of other browsing analysis tools), and the names of the visited websites were anonymized (www.facebook.com became *.com). Summarizing the recurrent patterns that emerge from these histories shows not only the importance of search engines, but also the concentration of our browsing on a small number of sites.

Hypertext today: the cruise analogy

Not everyone uses the Web with the same intensity. Some of the histories analyzed came from people who spend the vast majority of their time in front of the screen (me, for example). These histories contain between 200 and 400 clicks per day, or one every 2-3 minutes for a 12-hour day. In comparison, people who use their browser for personal use only perform an average of 35 clicks per day. Based on a daily average of 2.5 hours of browsing, an Internet user clicks once every 4 minutes.

What is the breakdown of these clicks during a browsing session? One statistic seems to illustrate the persistence of hypertext in our habits: three quarters of the websites we visit are accessed by a single click on a hyperlink. More precisely, on average, only 23% of websites are “source” sites, reached directly from the home page, a bookmark or a browser suggestion.

However, the dynamics change when we analyze the number of page views per website. Indeed, most of the pages visited come from the same sites. On average, 83% of clicks take place within the same site. This figure remains relatively stable over the eight histories analyzed: the minimum is 73%, the maximum 89%. We typically jump from one Facebook page to another, or from one YouTube video to another.

There is therefore a dichotomy between “main” sites, on which we linger, and “secondary” sites, which we consult occasionally. There are very few main sites: ten at the most, which is barely 2% of all the websites a person visits. Most people in the analysis have only two main sites (perhaps Google and YouTube, according to the statistics of the most visited websites in France).
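As an illustration, here is a minimal sketch of how such figures could be computed from one of these anonymized, time-ordered histories. The record format, the field names and the cut-off of ten “main” sites are assumptions made for the example, not the author’s actual pipeline.

```python
from collections import Counter

# Each visit: anonymized site name plus a flag saying whether the page was
# reached by clicking a hyperlink (as opposed to the home page, a bookmark
# or a browser suggestion). Format assumed for the sketch.
history = [
    {"site": "*.com", "via_hyperlink": False},   # e.g. opened from a bookmark
    {"site": "*.com", "via_hyperlink": True},
    {"site": "*.org", "via_hyperlink": True},
    {"site": "*.com", "via_hyperlink": True},
    # ... two months of visits would follow here
]

# Share of "source" sites: sites reached at least once without a hyperlink.
source_sites = {v["site"] for v in history if not v["via_hyperlink"]}
all_sites = {v["site"] for v in history}
share_source_sites = len(source_sites) / len(all_sites)

# Share of clicks that stay within the same site (consecutive visits to one site).
same_site = sum(1 for prev, cur in zip(history, history[1:]) if prev["site"] == cur["site"])
share_same_site = same_site / max(len(history) - 1, 1)

# "Main" sites: the handful of sites concentrating most page views.
views_per_site = Counter(v["site"] for v in history)
main_sites = [site for site, _ in views_per_site.most_common(10)]

print(f"source sites: {share_source_sites:.0%}, same-site clicks: {share_same_site:.0%}")
print("most visited sites:", main_sites)
```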

On this basis, we can paint a portrait of a typical hypertext browsing session, thirty years after the widespread adoption of this principle. A browsing session typically begins with a search engine, from which a multitude of websites can be accessed. We visit most of these sites once before leaving our search engine. We always visit the handful of main sites in our browsing session via our search engine, but once on a site, we carry out numerous activities on it before ending the session.

The diagram below summarizes the portrait I have just painted. The websites that initiate a browsing session are in yellow, the others in blue. By analogy with the exploratory browsing of the 90s, today’s browsing is more like a slow cruise on a select few platforms, most likely social platforms like YouTube and Facebook.

A simplified graph of browsing behavior; each node represents a website (yellow for sites that initiate a browsing session, blue for other sites) and each line represents one or more clicks from one site to another (line thickness is proportional to the number of clicks). Victor Charpenay, provided by the author.
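The data behind such a diagram could be assembled along the following lines – a hedged sketch reusing the assumed history format from above, not the author’s actual tooling: each directed edge (site A → site B) is weighted by the number of clicks leading from a page of A to a page of B, and self-loops (A → A) correspond to the within-site browsing that dominates the histories.

```python
from collections import Counter

def browsing_graph(history):
    """Weighted directed edges between sites, counted from consecutive visits."""
    edges = Counter()
    for prev, cur in zip(history, history[1:]):
        if cur["via_hyperlink"]:  # only transitions caused by an actual click
            edges[(prev["site"], cur["site"])] += 1
    return edges

history = [
    {"site": "*.com", "via_hyperlink": False},
    {"site": "*.com", "via_hyperlink": True},
    {"site": "*.org", "via_hyperlink": True},
    {"site": "*.com", "via_hyperlink": True},
]
print(browsing_graph(history))
# Counter({('*.com', '*.com'): 1, ('*.com', '*.org'): 1, ('*.org', '*.com'): 1})
```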

The phenomenon that restricts our browsing to a handful of websites is not unique to the web. This is one of the many examples of Pareto’s law, which originally stated that the majority of the wealth produced was owned by a minority of individuals. This statistical law crops up in many socio-economic case studies.

However, what is interesting here is that this concentration phenomenon is intensifying. The 1998 study gave an average of 3 to 8 pages visited per website. The 2006 survey reported 3.4 page visits per site. The average I obtained in 2021 was 11 page visits per site.

Equip your browser with a “porthole”

The principle of hypertext browsing is nowadays widely abused by the big Web platforms. The majority of hyperlinks between websites – as opposed to self-referencing links (those directed by websites back to themselves, shown in blue on the diagram above) – are no longer used by humans for browsing but by machines for automatically installing fragments of spyware code on our browsers.

There is a small community of researchers who still see the value of hypermedia on the web, especially when users are no longer humans, but bots or “autonomous agents” (which are programmed to explore the Web rather than remain on a single website). Other initiatives, like Solid – Tim Berners-Lee’s new project – are trying to find ways to give Internet users (humans or bots) more control over their browsing, as in the past.

As an individual, you can monitor your own web browsing in order to identify habits (and possibly change them). The Web Navigation Window browser extension, available online for Chrome and Firefox, can be used for this purpose. If you wish, you could also contribute to my analysis by submitting your own history (with anonymized site names) via this extension. To do so, just follow the corresponding hyperlink.

Victor Charpenay, Lecturer and researcher at the Laboratory of Informatics, Modeling and Optimization of Systems (LIMOS), Mines Saint-Étienne – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Waste management: decentralizing for better management

Reducing the environmental impact of waste and encouraging its reuse calls for a new approach to its management. This requires the modeling of circuits on a territorial scale, and the improvement of collaboration between public and private actors.

Territorial waste management is one of the fundamental aspects of the circular economy. Audrey Tanguy,1 a researcher at Mines Saint-Étienne, is devoting some of her research to this subject by focusing on the development of approaches to enable the optimal management of waste according to its type and the characteristics of different territories. “The principle is to characterize renewable and local resources in order to define how they can be processed directly on the territory,” explains Audrey Tanguy. Organic waste, for example, should be processed using the shortest possible circuits because it degrades quickly. Current approaches tend to centralize as much waste as possible with a view to its processing, while circular approaches tend towards more local, decentralized circuits. Decentralization can be supported by low-tech technologies, which optimize local recycling or composting in the case of organic waste, especially in the urban environment.

The research associated with waste processing therefore aims to find ways to relocate these flows. Modeling tools can help to spatialize these flows and then provide guidance for decision-makers on how to accommodate local channels. “Traditional waste-processing impact assessment tools assess centralized industrial systems, so we need to regionalize them,” explains Audrey Tanguy. These tools must take the territorial distribution of resources into account, regardless of whether they are reusable. In other words, they must determine which are the main flows that can be engaged in order to recover and transform materials. “It is therefore a question of using the appropriate method to prioritize the collection of materials, and to this end, an inventory of the emission and consumption flows needs to be drawn up within the territory,” states the researcher.

Implementation of strategies in the territories

In order to implement circular economy strategies on a territorial scale, the collaboration of different types of local actors is essential. Beyond the tools required, researchers and the organizations in place can also play an important role by helping the decision-makers to carry out more in-depth investigations of the various activities present in the chosen territory. This enables the definition of collaborative strategies in which certain central stakeholders galvanize the actions of the other actors. For example, business associations or local public-private partnership associations promote policies that support industrial strategies. A good illustration is the involvement of the Macéo association, in partnership with Mines Saint-Étienne, in the implementation of strategies for the recycling and recovery of plastic waste in the Massif Central region. It acts as a central player in this territory and coordinates the various actions by implementing collaborative projects between companies and communities.

The tools also provide access to quantitative data about the value of potential exchanges between companies and enable the comparison of different scenarios based on exchanges. This can be applied to aspects of the pooling of transport services, suppliers or infrastructure. Even if these strategies do not concern core industrial production activities, they lay the foundations for future strategies on a broader scale by establishing trust between different actors.

Reindustrialization of territories

“We assume that in order to reduce our impacts, one of the strategies to be implemented is the reindustrialization of territories to promote shorter circuits,” explains Natacha Gondran,1 a researcher in environmental assessment at Mines Saint-Étienne. “This may involve trade-offs, such as sometimes accepting a degree of local degradation of the measured impacts in exchange for a greater reduction in the overall impact,” the researcher continues.

Reindustrializing territories is therefore likely to favor the implementation of circular dynamics. Collaboration between different actors at the local level could in this way provide appropriate responses to global issues concerning the pressure on resources and emissions linked to human activities. “This is one of the strategies to be put in place for the future, but it is also important to rethink our relationship with consumption in order to reduce it and embrace a more moderate approach,” concludes Natacha Gondran.

1 Audrey Tanguy and Natacha Gondran carry out their research in the framework of the Environment, City and Society Laboratory, a joint CNRS research unit composed of 7 members including Mines Saint-Étienne.

Antonin Counillon

This article is part of a 2-part mini-series on the circular economy.


Economics – dive in, there is so much to discover!

To effectively roll out circular economy policies within a territory, companies and decision-makers require access to evaluation and simulation tools. The design of these tools, still in the research phase, necessarily requires a more detailed consideration of the impact of human activities, both locally and globally.

“The circular economy enables optimization of the available resources in order to preserve them and reduce pressure on the environment,” explains Valérie Laforest,1 a researcher at Mines Saint-Étienne. Awareness of the need to protect the planet began to develop in earnest in the 1990s and was gradually accompanied by the introduction of various key regulations. For example, the 1996 IPPC (Integrated Pollution Prevention and Control) Directive, which Valérie Laforest helped to implement through her research, aims to prevent and reduce the different types of pollutant emissions. More recently, legislation such as the French Law on Energy Transition for Green Growth (2015) and the Anti-Waste Law for a Circular Economy (2021) has reflected the growing desire to take the environment into account when considering anthropic activities. However, to enable industries to adapt to these regulations, it is essential for them to have access to tools derived from in-depth research on the impacts of their activities.

Decision-support tools for actors

To enable actors to comply with the regulations and reduce their impacts on the environment, they need to be provided with tools adapted to issues that are both global and local. Part of the research on the circular economy therefore concerns the development of such tools. The aim is to design models that are precise enough to be able to characterize and evaluate a system on the scale of an individual territory, while also being general enough to be adapted to territories with other characteristics. Fairly general methodological frameworks can therefore be developed, within which it is possible to determine criteria and indicators specific to certain cases or sectors. These tools should provide decision-makers with the information they need to implement their infrastructures.

At Mines Saint-Étienne and in collaboration with Macéo, a team of researchers is focusing on the development of a tool called ADALIE, which aims to characterize the potential of territories. This tool creates maps of different geographical areas showing different criteria, such as the economic or environmental criteria of these territories, as well as the industries established in them and their impacts. Decision-makers can therefore use this mapping tool as the basis for choosing their priority activity areas. “The underlying issue is about being able to ensure that a territory possesses the dimensions required to implement circular economy strategies, and that they are successful,” Valérie Laforest tells us. In its next phase, the ADALIE program then aims to archive experiences of effective territorial practices in order to create databases.

For each territorial study, the research provides a huge volume of different types of information. This data generates models that can then be tested in other territories, which also enables the robustness of the models to be checked according to the chosen indicators. These types of tools help local stakeholders to make decisions on aspects of industrial and territorial economics. “This facilitates reflection on how to develop strategies that bring together several actors affected by different issues and problems within a given territory,” states Valérie Laforest. To this end, it is essential to have access to methodologies that enable the measurement of the different environmental impacts. Two main methods are available.

Measuring impacts in the circular economy

Life cycle analysis (LCA) aims to estimate environmental impacts spanning a large geographical and temporal scale, taking account of issues such as distance transported. LCA seeks to model all potential consumptions and emissions over the entire life span of a system. The models are developed by compiling data from other systems and can be used to compare different scenarios in order to determine the scenario that is likely to have the least impact.

Read more on I’MTech: What is life cycle analysis?

The other approach is the best available techniques (BAT) method. This practice was implemented under the European Industrial Emissions Directive (IPPC then IED) in 1996. It aims to help European companies achieve performance standards equivalent to benchmark values for their consumption and emission flows. These benchmarks are based on data from samples of European companies. The granting or refusal of an operating license depends on the comparison of their performance with the reference sample. BATs are therefore based on European standards and have a regulatory purpose.

BATs are related to companies’ performance in the use phase, i.e. the performance of techniques is closely scrutinized in relation to incoming and outgoing flows during the use phase. LCA, on the other hand, is based on real or modeled data including information from upstream and downstream of this use phase. The BAT and LCA approaches are therefore complementary and not exclusive. For example, between two BAT analyses of a system to ensure its compliance with the regulations, different models of the systems could be created by conducting LCAs in order to determine the technique that has the least impact throughout its entire life cycle.

Planetary boundaries

In addition to quantifying the flows generated by companies, impact measurements must also include the effects of these flows on the environment on a global scale.

To this end, research and practices also focus on the effects of activities in relation to the different planetary boundaries. These boundary levels reflect the capacity of the planet to absorb impacts, beyond which they are considered to have irreversible effects.

The work of Natacha Gondran1 at Mines Saint-Étienne is contributing to the development of methods for assessing absolute environmental sustainability, based on planetary boundaries. “We work on the basis of global limitations, defined in the literature, which correspond to categories of impacts that are subject to thresholds at the global level. If humanity exceeds these thresholds, the conditions of life on Earth will become less stable than they are today. We are trying to implement this in impact assessment tools on the scale of systems such as companies,” she explains. These impacts, such as greenhouse gas emissions, land use, and the eutrophication of water, are not directly visible. They must therefore be represented in order to identify the actions to be taken to reduce them.

Read more on I’MTech: Circular economy, environmental assessment and environmental budgeting

Planetary boundaries are defined at the global level by a community of scientists. Modeling tools enable these boundaries to be used to define ecological budgets that correspond, in a manner of speaking, to the maximum quantity of pollutants that can be emitted without exceeding these global limits. The next challenge is then to design different methods to allocate these planetary budgets to territories or production systems. This makes it possible to estimate the impact of industries or territories in relation to planetary boundaries. “Today, many industries are already exceeding these boundary levels, such as the agri-food industry associated with meat. The challenge is to find local systems that can act as alternatives to these circuits in order to drop below the boundary levels,” explains the researcher. For example, it would be wise to locate livestock production closer to truck farming sites, as livestock effluents could then be used as fertilizer for truck farming products. This could reduce the overall impact of the different agri-food chains on the nitrogen and phosphorus cycles, as well as the impact of transport-related emissions, while improving waste management at the territorial level.
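As a purely illustrative sketch of this downscaling idea – the global budget, the per-capita allocation key and the territorial figures below are invented, not values from the studies mentioned here – allocating a planetary budget to a territory and comparing it with the territory’s measured flows might look like this:

```python
# Hedged sketch: downscale a global ecological budget to a territory and
# compare it with the territory's own emissions. All figures are invented.
GLOBAL_BUDGET = 6.2e12        # e.g. kg of a pollutant that can be emitted per year
WORLD_POPULATION = 7.9e9
TERRITORY_POPULATION = 2.0e6
TERRITORY_EMISSIONS = 2.4e9   # kg/year, measured or modeled for the territory

# Simple per-capita allocation key (other keys exist and are debated).
territory_budget = GLOBAL_BUDGET * TERRITORY_POPULATION / WORLD_POPULATION
occupation_ratio = TERRITORY_EMISSIONS / territory_budget

print(f"Territorial budget: {territory_budget:.2e} kg/year")
print(f"Share of the budget used: {occupation_ratio:.0%}")
# A ratio above 100% means the territory exceeds its allocated share of the
# planetary boundary, and alternative local systems are needed to drop below it.
```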

Together, these different tools provide an increasingly extensive methodological framework for ensuring the compatibility of human activities with the conservation of ecosystems.

1 Valérie Laforest and Natacha Gondran carry out their research in the framework of the Environment, City and Society Laboratory, a joint CNRS research unit composed of 7 members including Mines Saint-Étienne.

Antonin Counillon

This article is part of a 2-part mini-series on the circular economy.


Hospitals put to the test by shocks

Benoît Demil, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, IMT Atlantique – Institut Mines-Télécom

The Covid-19 crisis has put lasting strain on the health care system, in France and around the world. Hospital staff have had to deal with increasing numbers of patients, often in challenging conditions in terms of equipment and space: a shortage of masks and protective equipment initially, then a lack of respirators and anesthetics, and more recently, overloaded intensive care units.

Adding to these difficulties, logistical problems have exacerbated the shortages. Under these extreme conditions, and despite all the difficulties, the hospital system has withstood and absorbed the shock of the crisis. “The hospital system did not crack under pressure,” as stated by Étienne Minvielle and Hervé Dumez, co-authors of a report on the French hospital management system during the Covid-19 crisis.

While it is unclear how long such a feat can be maintained, and at what price, we may also ask questions about the resilience and reliability of the health care system. In other words, how can care capacity be maintained at a constant quality when the organization is under extreme pressure?

We sought to understand this in a study conducted over 14 months during a non-Covid period, with the staff of a critical care unit of a university hospital center.

High reliability organizations

The concepts of resilience and reliability, which have become buzzwords during the current crisis, have been studied extensively for over 30 years in organizational science – particularly in research on High Reliability Organizations (HROs).

This research has offered insights into the mechanisms and factors that enable complex sociotechnical systems to maintain safety and a constant quality of service, even though the risk of failure, with potentially serious consequences, is always present.

The typical example of an HRO is an aircraft carrier. We know that deference to expertise and skills within a working group, permanent learning routines and training explain how it can ensure its primary mission over time. But much less is known about how the parties involved manage the resources required for their work, and how this management affects resilience and reliability.

Two kinds of situations

In a critical care unit, activity is continuous but irregular, both quantitatively and qualitatively. Some days are uneventful, with a low number of patients, common disorders and diseases, and care that does not present any particular difficulties. The risks of the patients’ health deteriorating are of course still present, but remain under control. This is the most frequently-observed context: 80 of the 92 intervention situations recorded and analyzed in our research relate to such a context.

At times, however, activity is significantly disrupted by a sudden influx of patients (for example, following a serious automobile accident), or by a rapid and sudden change in a patient’s condition. The tension becomes palpable within the unit, movements are quicker and more precise, conversations between health care workers are brief and focused on what is happening.

In both cases, observations show differentiated management of resources, whether human, technical or relating to space. To understand these differences, we must draw on a concept that has long existed in organizational theory: organizational slack, which was brought to light in 1963 by Richard Cyert and James March.

Slack for shocks

This important concept in the study of organizations refers to excess resources in relation to optimal operations. Organizations or their members accumulate this slack to handle multiple demands, which may be competing at times.

The life of organizations offers a multitude of opportunities for producing and using slack. Examples include the financial reserves a company keeps on hand “just in case”, the safety stock a production manager builds up, the redundancy of certain functions or suppliers, the few extra days allowed for a project, or the oversized budget a manager negotiates to meet year-end targets. All of these practices, which are quite common in organizations, contribute to resilience in two ways.

First, they make it possible to guard against unpredictable shocks, such as the default of a subcontractor, an employee being out on sick leave, an unforeseen event that affects a project or a machine breaking down. Moreover, in risk situations, they prevent the disruption of the sociotechnical system by maintaining it in a non-degraded environment.

Second, these practices absorb the adverse effects of shocks when they arise unexpectedly – whether due to a strike or the sudden arrival of patients in an emergency unit.

How do hospitals create slack?

Let us first note that in a critical care unit, the staff produces and uses slack all the time. It comes from negotiations that the head of the department has with the hospital administration to obtain and defend the spaces and staff required for the unit to operate as effectively as possible. These negotiations are far from the everyday care activity, but are crucial for the organization to run effectively.

At the operational level, health care workers also free up resources quickly, in particular in terms of available beds, to accommodate new patients who arrive unexpectedly.  The system for managing the order of priority for patients and their transfer is a method commonly used to ensure that there is always an excess of available resources.

In most cases, these practices of negotiation and rapid rotation of resources make it possible for the unit to handle situations that arise during its activity. At times, however, due to the very nature of the activity, such practices may not suffice. How do health care workers manage in such situations?

Constant juggling

Our observations show that other practices offset the temporary lack of resources.

Examples include calling in the unit’s day staff as well as night staff, or others from outside the unit to “lend a hand”, reconfiguring the space to create an additional bed with the necessary technical equipment or negotiating a rapid transfer of patients to other departments.  

This constant juggling allows health care workers to handle emergency situations that may otherwise overwhelm them and put patients’ lives in danger. For them, the goal is to make the best use of the resources available, but also to produce them locally and temporarily when required by emergency situations.

Are all costs allowed?

The existence of slack poses a fundamental problem for organizations – in particular those whose activity requires them to be resilient to ensure a high degree of reliability. Keeping unutilized resources on hand “just in case” goes against a managerial approach that seeks to optimize the use of resources, whether human, financial or material – as called for by New Public Management since the 1980s, in an effort to lower the costs of public services.

This approach has had a clear impact on the health care system, and in particular on the French hospital system over the past two decades, as the recent example of problems with strategic stocks of masks at the beginning of the Covid pandemic unfortunately illustrated.

Beyond the hospital, military experts have recently made the same observation, noting that “economic concerns in terms of defense, meaning efficiency, are a very recent idea,” which “conflicts with the military notions of ‘reserve,’ ‘redundancy’ and ‘escalation of force,’ which are essential to operational effectiveness and to what is now referred to as resilience.”

Of course, this quest for optimization does not only apply to public organizations. But it often goes hand in hand with greater vulnerability of the sociotechnical systems involved. In any case, this was observed during the health crisis, in light of the optimization implemented at the global level to reduce costs in companies’ supply chains. 

To understand this, one only needs to look at the recent stranding of the Ever Given. Blocked in the Suez Canal for a week, this giant container ship paralyzed 10% of global trade. What lessons can be learned from this?

A phenomenon made invisible in emergencies

First of all, it is important for organizations aiming for high reliability to keep in mind that maintaining slack has a cost, and that they must therefore identify the systems or sub-systems for which resilience must absolutely be ensured. The line between slack that wastes resources and slack that enables resilience is a very fine one.

Bearing this cost calls for education efforts, since it must not only be fully agreed to by all of the stakeholders, but also justified and defended.

Lastly, the study we conducted in a critical care unit showed that while slack is produced in part during action, it disappears once a situation has stabilized. 

This phenomenon is therefore largely invisible to managers of hospital facilities. While these micro-practices may not be measured by traditional performance indicators, they nevertheless contribute significantly: this might not be a new lesson, but it is worth repeating to ensure that it is not forgotten.

Benoît Demil, professor of strategic management, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, research professor, IMT Atlantique – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the  original article (in French).


A real way to look at fake news

The SARS-CoV2 virus is not the only thing that has spread during the Covid-19 pandemic: fake news has also made its way around the world. Although fake news existed before, the unprecedented crisis has paved the way for it to explode. Anuragini Shirish, a researcher at Institut Mines-Télécom Business School, explains the factors at play in this trend and how it could be limited in the future.

Why has the pandemic been conducive to the emergence of fake news?

Anuragini Shirish: At the individual level, fear and uncertainty are psychological factors that have played an important role. People fear for their own safety and that of their families, their jobs and their resources, which creates deep uncertainty about both the present and the future. In response, they try to make sense of the situation and understand what is going to happen in order to reassure themselves, from both a health and an economic point of view. To do so, they look for information, regardless of how truthful it is.

How do individuals seek guidance in an unpredictable situation?

AS: The main sources of guidance are institutional resources. One of the important resources is the freedom of the media. In countries like India, the media can be influenced by politicians and people tend not to trust it entirely. In Nordic countries, on the other hand, the media focuses on being as objective as possible and people are taught to adhere to objectivity. When trust in the traditional media is low, as may be the case in France, individuals tend to seek out alternative sources of information. Freedom of the media is therefore an institutional resource: if people have confidence in the strength and impartiality of their media, it tends to lower their level of fear and uncertainty.

Another important resource is the government’s measures to increase perceptions of economic freedom. If individuals believe that the government can maintain job security and/or their sources of income throughout the pandemic, including periods of lockdown, this also helps reduce their fear and uncertainty. In countries such as Brazil, India and the United States, this has not been the case.

Lastly, there is the freedom of political expression, which gives individuals the opportunity to express and share their doubts publicly.  But in this case, it tends to foster the emergence of fake news. This is one of the findings of a study we conducted with Shirish Srivastava and Shalini Chandra from HEC Paris and the SP Jain School of Global Management.

How is the lack of confidence in institutions conducive to the emergence and dissemination of fake news?

AS: When people trust institutions, they are less likely to seek information from alternative sources. Conversely, when there is a low level of trust in institutions, people tend to react by seeking out all kinds of information on the internet.

Why and how has fake news spread to such an extent?

AS: In order to verify the information they obtain, people tend to share it with their acquaintances and close friends to get feedback on its validity. And due to their cognitive biases, people tend to consume and share ideas and beliefs they like, even when they’re aware that the information may be false. Fake news is generally structured to evoke strong emotions such as anger, fear or sadness, which also helps it to spread more easily than information presented in a more rational or neutral way.

Each country has its own characteristics when it comes to the emergence and dissemination of fake news, which is why an understanding of institutional resources helps to identify the factors behind the differences between countries. The emergence and dissemination of fake news vary widely from country to country: the inhabitants of a country are far more concerned about what’s happening in their own country. Fake news is therefore highly context-specific.

Where is most fake news found?

AS: The majority of fake news is found on social media. That’s where it spreads the quickest since it is extremely easy to share. Social media algorithms also display the information that people like the most, thereby reinforcing their cognitive biases and their desire to share this information. And social media is the medium most consumed by individuals, thanks to easy mobile access and connectivity in many countries around the world.

Who creates fake news?

AS: It’s hard to understand the deeper motivations of each individual who creates fake news, since they don’t typically brag about it! Some may do so for economic reasons, by generating “clicks” and the revenue that comes with them. Almost half of fake news is generated for political reasons, to destabilize opposing parties. And sometimes it comes directly from political parties. Uncertain situations like pandemics polarize individuals in society, which facilitates this process. And then there are individuals who may just want to create general confusion, for no apparent economic or political motives.

How can we as individuals contribute to limiting the spread of fake news?

AS: When we aren’t sure about the validity of information, we must not act on it, or share it with others before finding out more. It’s a human tendency to try to verify the legitimacy of information by sharing it, but that’s a bad strategy at a larger scale.  

How can we tell if information may be false?

AS: First of all, we must learn to think critically and not accept everything we see. We must critically examine the source or website that has posted the information and ask why. There is an especially high level of critical thinking in countries such as Finland or the Netherlands, since these skills are taught at high schools and universities, in particular through media studies classes. But in countries where people are not taught to think critically to the same extent, and trust in the media is low, paradoxically, people are more critical of information that comes from the institutional media than of that which comes from social media. Tools like Disinformation Index or Factcheck.org may be used to verify sources in order to check whether or not information is authentic.

Is fake news dangerous?

AS: It depends on the news. During the pandemic, certain light-hearted fake news was spread. It didn’t help people solve their problems, but it provided entertainment for those who needed it. For example, a tweet that appeared in March 2020, amid the recommendations for social distancing, claimed that a group of elephants in Yunnan province, China, had drunk corn wine and fallen asleep. It was shared 264,000 times and received 915,500 likes and 5,000 comments, and was later “debunked” (proven to be false) in an article that appeared in National Geographic. This kind of fake news does not have any harmful consequences.

But other kinds of fake news have had far more serious consequences. First, political fake news generally reduces trust in institutional resources.  It doesn’t offer any solutions and creates more confusion. Paradoxically, this increases fear and uncertainty in individuals and facilitates the dissemination of more fake news, creating a vicious circle! Since it reduces institutional trust, government programs have less of an impact, which also has economic implications. During the pandemic, this has had a major impact on health. Not only because the vaccine campaigns have had less of an effect, but because people self-medicated  based on fake news and died as a result. People’s mental health has also suffered through prolonged exposure to uncertainty, at times leading to mental illness or even suicide. This is also why the term “infodemic” has appeared. 

Is social media trying to fight the spread of fake news?  

AS: During the pandemic, content regulation by the platforms has increased, in particular through  UN injunctions and the gradual implementation of the Digital Service Act. For example, Twitter, Facebook and Instagram are trying to provide tools to inform their users which information may be inauthentic.  The platforms were not prepared for this kind of regulation, and they generated a lot of revenue from the large volume of information being shared, whether or not it was true.  This is changing – let’s hope that this continues over time!

Read more on I’MTech: Digital Service Act: Regulating the content of digital platforms Act 1

What are the levels of institutional control over fake news?

AS: Control over information must be carried out through various approaches since it affects many aspects of society. The government can increase its presence in the media and social media, and improve internet security. There are two ways of doing this: through the law, by punishing the perpetrators of fake news, but also by increasing collective awareness and providing programs to teach people how to verify information. It’s important to put this aspect in place ahead of time, in order to anticipate potential crises that may occur in the future and to monitor collective awareness levels. However, the goal is not to control the freedom of the media; on the contrary, this freedom increases the contribution of independent media and signals to citizens that the government seeks to be impartial.

How can we improve people’s relationship with information and institutions in general?

AS: Individuals’ behavior is difficult to change in the long term: new regulations are ultimately violated when people see them as meaningless. So, we must also help citizens find value in the rules of society that may be put in place by the government, in order for them to adhere to them.

By Antonin Counillon

Easier access to research infrastructure for the European atmospheric science community

Improving access to large facilities for research on climate and air quality and optimizing use are the objectives of the European ATMO-ACCESS project. Véronique Riffault and Stéphane Sauvage, researchers at IMT Nord Europe, one of the project’s 38 partner institutions, explain the issues involved.

What was the context for developing the ATMO-ACCESS project?

Stéphane Sauvage – The ATMO-ACCESS project responds to an H2020-INFRAIA call for pilot projects, open specifically to certain research infrastructures (RIs), with the aim of facilitating access for a wide community of users and developing innovative access services that are harmonized at the European level.

IMT Nord Europe’s participation in this project is connected to its significant involvement in the ACTRIS (Aerosol, Clouds, and Trace Gases Research InfraStructure) RI. ACTRIS is a distributed RI bringing together laboratories of excellence and observation and exploration platforms, to support research on climate and air quality. It helps improve understanding of past, present and future changes in atmospheric composition and the physico-chemical processes that contribute to regional climate variability.

What is the goal of ATMO-ACCESS?

S.S. – ATMO-ACCESS is intended for the extended atmospheric science community. It involves three RIs: ACTRIS, ICOS and IAGOS, combining stationary and mobile observation and exploration platforms, calibration centers and data centers. It’s a pilot project aimed at developing a new model of integrating activities for this infrastructure, in particular by providing a series of recommendations for harmonized, innovative access procedures to help establish a sustainable overall framework.

What resources will be used to reach this goal?

S.S. – The project has received €15 million in funding, including €100K for IMT Nord Europe, where four research professors and a research engineer are involved. ATMO-ACCESS will provide scientific and industrial users with physical and remote access to 43 operational European atmospheric research facilities, including ground observation stations and simulation chambers as well as mobile facilities and calibration centers, which are essential components of RIs.

Why is it important to provide sustainable access to research facilities in the field of atmospheric science?

Véronique Riffault – The goal  is to optimize the use of large research facilities, pool efforts and avoid duplication for streamlining and environmental transition purposes, while promoting scientific excellence and maintaining a high level in the transfer of knowledge and expertise, international collaborations, training for young scientists and the contribution of RI to innovative technologies and economic development.

What role do IMT Nord Europe researchers play in this consortium?

V.R. – IMT Nord Europe researchers are responsible for developing virtual training tools for the users of these research facilities and their products. Within this scientific community, IMT Nord Europe has recognized expertise in developing innovative learning resources (Massive Open Online Course-MOOC, serious games), based on the resources the school has already created in collaboration with its Educational Engineering center, in particular a first MOOC in English on the causes and impacts of air pollution, and a serious game, which should be incorporated into a second module of this MOOC currently in development.

As part of ATMO-ACCESS, a pilot SPOC (Small Private Online Course) will present the benefits and issues related to this infrastructure, and a serious game will make use of the data provided by observatories and stored in data centers, while video tutorials for certain instruments or methodologies will help disseminate good practices.

Who are your partners and how will you collaborate scientifically?

V.R. – The project is coordinated by CNRS and brings together 38 partner institutions from 19 European countries. We’ll be working with scientific colleagues from a variety of backgrounds: calibration centers responsible for ensuring measurement quality, data centers for the technical development of resources,  and of course, the community as a whole to best respond to expectations and  engage in a continuous improvement process. In addition to the academic world, other users will be able to benefit from the tools developed through the ATMO-ACCESS project: major international stakeholders and public authorities (ESA, EEA, EUMETSAT, EPA, governments, etc.) as well as the private sector.

The project launch meeting has just been held. What are the next important steps?

V.R. – That’s right, the project was launched in mid-May. The first meeting for the working group in which IMT Nord Europe is primarily involved is scheduled for after the summer break. Our first deliverable will be the interdisciplinary SPOC for atmospheric science, planned for less than two years from now. The project will also launch its first call for access to RI intended for atmosphere communities and beyond.

Interview by Véronique Charlet



Nuclear fission reveals new secrets

Almost 80 years after the discovery of nuclear fission, it continues to unveil its mysteries. The latest to date: an international collaboration has discovered what makes the fragments of nuclei spin after fission. This offers insights into how atomic nuclei work and could help improve future nuclear power plants.

Take the nuclei of uranium-238 (the ones used in nuclear power plants), bombard them with neutrons, and watch how they break down into two nuclei of different sizes. Or, more precisely, observe how these fragments spin. This is, in short, the experiment conducted by researchers from 37 institutes in 16 countries, led by the Irène Joliot-Curie Laboratory in Orsay, in the Essonne department. Their findings, which offer insights into nuclear fission, have been published in the journal Nature. Several French teams took part in this discovery.  

The mystery of spinning nuclei

But why is there a need to conduct this kind of experiment? Don’t we understand fission perfectly, since the phenomenon was discovered in the late 1930s by German chemists Otto Hahn and Fritz Strassmann, and Austrian physicist Lise Meitner? Aren’t there hundreds of nuclear fission reactors around the world that allow us to understand everything? In a word – no. Some mysteries still remain, and among them is the spin of the nucleus fragments. Spin is the quantum-world equivalent of angular momentum; roughly speaking, it describes how the nucleus spins like a top.

Even when the original nucleus is not spinning, the nuclei resulting from fission still spin. How do they acquire this angular momentum? What generates this rotation? Up to now, there had been two competing hypotheses. The first, supported by the majority of physicists, was that this spin is created before fission. In this case, there must be a correlation between the spins of the two fragments. The second was that the spin of the fragments is caused after fission, and that these spins are therefore independent of each other. The findings by the 37 teams are decisive: the second hypothesis is correct.

184 detectors and 1,200 hours of irradiation

“We have to think of the nucleus like a liquid drop,” explains Muriel Fallot, a researcher at Subatech (a joint laboratory affiliated to IMT Atlantique, CNRS and the University of Nantes), who took part in the experiment. “When it is struck by the neutron, it splits and each fragment is deformed, like a drop if it received an impact. It is when the fragment attempts to return to its spherical shape to acquire greater stability that the energy released is converted into heat and rotational energy.”

To achieve these results, the teams irradiated not only uranium-238, but also thorium-232, two nuclei that can split when they collide with a neutron (these are referred to as fissile nuclei). This was carried out over 1,200 hours, between February and June 2018. The fragments dissipate the energy they have accumulated in the form of gamma radiation, which is detected using 184 detectors placed around the bombarded nuclei. Depending on the fragments’ spin, however, the photons do not arrive at the same angle, so an analysis of the radiation makes it possible to trace the fragments’ spin. These experiments were conducted at the ALTO accelerator located in Orsay.

Better understanding the strong interaction

These findings, which offer important insights into the fundamental physics of nuclear fission, will now be analyzed by theoretical physicists around the world. Certain theoretical models will have to be abandoned, while others will incorporate this data to explain fission quantitatively. They should help physicists better predict the stability of radioactive nuclei.

“Today, we are able to predict the lifetime of some heavy nuclei, but not all of them,” says Muriel Fallot. “The more unstable they are, the less well we are able to predict them. This research will help us better understand the strong interaction, the force that binds the protons and neutrons within nuclei, because this strong interaction depends on the spin.”

Applications for reactors of the future

This new knowledge will help researchers working on producing “exotic” nuclei: very heavy ones, or ones with a large excess of protons compared to neutrons (or the reverse). Will these findings lead to the production of new, even heavier nuclei? At the very least, they will provide food for thought for theorists seeking to further understand nuclear interactions within nuclei.

In addition to being of interest at the fundamental level, these findings have important applications for the nuclear industry. In a nuclear power plant, a nucleus produced by fission that “spins quickly” gives off a lot of energy in the form of gamma radiation, which can damage certain materials such as fuel cladding. Yet, “we don’t know how to accurately predict this energy dissipation. There is up to a 30% gap between the calculations and the experiments,” says Muriel Fallot. “That has an impact on the design of these materials.” While current reactors are managed well on the basis of acquired experience, these findings will be especially useful for more innovative future reactors.

Cécile Michaut


Learning to incorporate values in collective design

Designing projects implies that individuals or groups must pool their values to collaborate effectively. But the various parties involved may be guided by diverging value systems, making it difficult to find compromises and common solutions. Françoise Détienne and Michael Baker, researchers at Télécom Paris, explain how the role of values in collective design can be understood.

How is a value defined in the field of collective design?

Françoise Détienne: In general, the concept of values refers to principles or beliefs that guide individuals’ actions and choices. Put that way, any preference might be seen as a value, so we must limit the definition to the ethical dimension in choices, connected to social and human aspects. The notions of inclusion or privacy protection are examples of these kinds of values.

Michael Baker: Certain notions may be considered absolute values in broad terms – like freedom for example – but they can be divided into different nuances, such as freedom of expression or freedom of choice.  And some terms or expressions are subject to implicit value judgments. For example, the word “business” may, in certain contexts, express a negative value judgment, although it refers to something neutral from a values perspective. In order to identify the underlying values in interactions produced in collective design situations, we must therefore go beyond language by taking into account the context in which statements are made.

How can we understand the role of values in the design process?

FD: Most of the current approaches are based on the concepts of Value Sensitive Design (VSD), which treat values as discrete, independent criteria that simply need to be added to the other design criteria. Most of the time, however, individual and collective values are organized into systems that we refer to as ideologies; here, ideologies mean the sets of values underlying individual and collective viewpoints. We have proposed a new approach called Ideologically Embedded Design (IED), which distinguishes several levels at which value systems operate: the form of participation and its underlying principles, the evolution of the design and decision-making process, and the group or community involved in the process and what it produces. This approach also emphasizes the interactions and possible co-evolution between these levels.

How has the understanding of the role of values in design evolved?

MB: Up to now, values in design have been analyzed based on the objects or physical infrastructure resulting from projects, which reflect certain political and social choices. The analyses carried out based on these objects allowed us to extract values through an ex-post deconstruction. But the current design ergonomics movement seeks instead to analyze how values come into play in the design process and how to deal with value conflicts.

What are some organizations where thinking about values in advance is a priority?

FD: In general, the design of collaborative organizations is rooted in strong values. Participatory housing, which aims to implement shared governance systems, is a good example. The considerations of the individuals involved focus primarily on how they must be organized, based on values that are in line with sharing, such as respect, tolerance and equity in decision-making. In communities like these, the stakes of such values are high, since the goal is to live together successfully.

MB: Many online communities give significant thought to values. One of the best examples is Wikipedia. The Wikipedia community is based on values such as open access to knowledge, free participation of contributors, and neutrality of point of view. Should disagreements rooted in opposing value systems arise, there is no real way to “resolve” the conflict. In this case, to represent the diversity of viewpoints, the conflict may be handled by dividing the article into different sections, each of which reflects a different viewpoint. For example, an article on “Freud” may be divided into sections that present the topic from the viewpoint of behavioral psychologists, neuropsychologists, psychoanalysts, etc.

Are there discrepancies at times between the values promoted or upheld by an organization and the way they are applied on a concrete level?

MB: There is, indeed, a disconnect at times between the values advanced by an organization and the way they are actually implemented. For example, the notion of “collaboration” may be put forth as a positive value, with various rhetorical uses. For the last decade or so, this term has had a positive connotation and is sometimes used for image and marketing purposes, along the same lines as  greenwashing.  Research is also being carried out on the possible differences and tensions between an organization’s institutional discourse and how groups actually work within the organization.

Are there conflicting values within the same organization at times?

FD: At a certain level of detail in the definition of values, this is often the case. An important issue is clarity in how values are defined during discussions and debates, since each individual may interpret them differently. So it’s important to support the co-construction of the meaning of values through dialogue, and to identify whether or not there are truly competing values.

MB: In discussions about a design, viewpoints must evolve in order to reach a compromise, but that does not mean that each individual’s ideologies will change drastically over time. Almost by definition, it seems, values are stable and typically change only very slowly (except through a radical “conversion”). So we must understand each individual’s underlying ideologies and frame discussions about the decision-making process by taking them into account. For example, it’s helpful to set out in advance the ways in which the process is collaborative or participative, and whether there must be equitable participation among the various stakeholders. The organizational framework itself is also highly value-laden.

What are some concrete methods that can help improve collaboration?

FD: Various methods can be applied to improve the alignment of values and the search for compromise within a group. While approaches such as VSD help identify values, ensuring that debates are constructive is not easy. We propose methods from constructive ergonomics such as role playing, organizational simulation and imagining use situations, as well as reflective methods. For example, self-confrontation techniques can be put in place by filming a working group and then having the group members watch the video. This gives them the opportunity to think in a structured way about the respective underlying values that guided their collective activity. Visualization tools can also help resolve such debates.

How can conflicts be resolved in the event of disagreements about values?

FD: In order to resolve conflicts that may arise, the use of a debate moderator who has been trained in advance for this role can prove to be very helpful. What are referred to as “avoidance” strategies may also be used, such as momentarily redirecting the discussion toward more practical questions, to avoid crystallizing conflicts and opposing viewpoints.

MB: It’s also important to redirect discussion toward compromises that allow different values to coexist. To do so, it can be helpful to bring the debate back to a level focusing on more general values. Sometimes, the more individuals specify what they mean by a value, the more their viewpoints may clash and lead to conflict.

FD: And last but not least, this leads us to rethink the timeframe for design activity, to allow time for the co-construction and evolution of values (which will in all likelihood be slow), for negotiation and, possibly, to leave conflict resolution open. The emphasis is then not on producing a solution but on the process itself.

By Antonin Counillon

Bitcoin crash: cybercrime and over-consumption of electricity, the hidden face of cryptocurrency

Donia Trabelsi, Institut Mines-Télécom Business School; Michel Berne, Institut Mines-Télécom Business School and Sondes Mbarek, Institut Mines-Télécom Business School

Wednesday 19 May will be remembered as the day of a major cryptocurrency crash: -20% for dogecoin, -19% for ethereum, -22% for dfinity, the supposedly infinite blockchain that was recently launched with a bang. The best-known of these currencies, bitcoin, limited the damage to 8.5% (US $39,587) after being down by as much as 30% over the course of the day. It is already down 39% from the record value it reached in April.

Elon Musk has gone from being seen as an idol to a traitor in the cryptocurrency market. Commons.wikimedia.org

Very few of the 5,000 cryptocurrencies recorded today have experienced growth. The latest ones to be launched, “FuckElon” and “StopElon”, say a lot about who is considered responsible for the slide in prices that began over a week ago.

The former idol of the cryptocurrency world and iconic leader of Tesla Motors, Elon Musk, now seems to be seen as a new Judas by these markets. The founders of “StopElon” have even stated that their aim is to drive up the price of their new cryptocurrency in order to buy shares in Tesla and oust its top executive. However, bitcoin’s relatively smaller drop seems to be attributed to its reassuring signals.  

Elon Musk sent shockwaves rippling through the crypto world last week when he announced that it would no longer be possible to pay for his cars in bitcoin, reversing the stance he had taken in March. He even hinted that Tesla might sell all of its bitcoins. As the guest host of the Saturday Night Live comedy show in early May, he had already caused dogecoin to plummet by referring to it as a “hustle” during a sketch, even though he had appeared on the show to support the currency.

 

The reason for his change of heart? The fact that it is harmful to the planet, as transactions using this currency require high electricity consumption. “Cryptocurrency is a good idea on many levels and we believe it has a promising future, but this cannot come at great cost to the environment,” stated Musk, who also heads the space company SpaceX.

China also appears to have played a role in Wednesday’s events. As the country is getting ready to launch a digital yuan, its leaders announced that financial institutions would be prohibited from using cryptocurrency. “After Tesla’s about-face, China twisted the knife by declaring that virtual currencies should not and cannot be used in the market because they are not real currencies,” Fawad Razaqzada, analyst at Thinkmarkets, told AFP yesterday.

While a single man’s impact on the price of these assets – which have seen a dramatic rise over the course of a year – may be questioned, his recent moves and about-face urge us to at least examine the ethical issues they raise. Our research has shown that there are at least two categories of issues.

The darknet and ransomware

The ethical issues surrounding cryptocurrencies remain closely related to the nature and very functioning of these assets. Virtual currencies are not associated with any governmental authority or institution. The bitcoin system was even specifically designed to avoid relying on conventional trusted intermediaries, such as banks, and escape the supervision of central banks. The value of a virtual currency therefore relies entirely, in theory, on the trust and honesty of its users, and on the security of an algorithm that can track all of the transactions.
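
As a rough illustration of what “an algorithm that can track all of the transactions” means in practice, here is a deliberately minimal sketch of a hash-chained ledger in Python (real bitcoin blocks also carry Merkle roots, timestamps, difficulty targets and proof-of-work, none of which appear here): because each block commits to the hash of the previous one, quietly rewriting an old transaction breaks every later link.

    # Minimal hash-chained ledger: each block commits to the previous block's
    # hash, so tampering with history is detectable. (Greatly simplified.)
    import hashlib
    import json

    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def add_block(chain, transactions):
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain):
        for i in range(1, len(chain)):
            if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
                return False
        return True

    chain = []
    add_block(chain, [{"from": "alice", "to": "bob", "amount": 1.5}])
    add_block(chain, [{"from": "bob", "to": "carol", "amount": 0.7}])
    print(verify(chain))                        # True
    chain[0]["transactions"][0]["amount"] = 99  # tamper with history
    print(verify(chain))                        # False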

Yet, due to their anonymity, lack of strict regulation and gaps in infrastructure, cryptocurrencies also appear to be likely to attract groups of individuals who seek to use them in a fraudulent way. Regulatory concerns focus on their use in illegal trade (drugs, hacking and theft, illegal pornography), cyberattacks and their potential for funding terrorism, laundering money and evading taxes.

Illegal activities accounted for no less than 46% of bitcoin transactions from 2009 to 2017, amounting to US $76 billion per year over this period, which is equivalent to the scale of US and European markets for illegal drugs. In April 2017, approximately 27 million bitcoin market participants were using bitcoin primarily for illegal purposes.

One of the best-known examples of cybercrime involving cryptocurrency is still the “Silk Road.” On this online black marketplace dedicated to selling drugs on the darknet – the part of the internet that can only be accessed with specific protocols – payments were made exclusively in cryptocurrencies.

In 2014, at a time when the price of bitcoin was around US $150, the FBI’s seizure of over US $4 million in bitcoins from the Silk Road gave an idea of the magnitude of the problem facing regulators. At the time, the FBI estimated that this sum accounted for nearly 5% of the total bitcoin economy.

Cryptocurrencies have also facilitated the spread of attacks using ransomware, malware that blocks companies’ access to their own data, and will only unblock it in exchange for a cryptocurrency ransom payment. A study carried out by researchers at Google revealed that victims paid over US $25 million in ransom between 2015 and 2016. In France, according to a Senate report submitted in July 2020, such ransomware attacks represent 8% of requests for assistance from professionals on the cybermalveillance.gouv.fr website and 3% of requests from private individuals.

Energy-intensive assets

The main cryptocurrencies use a large quantity of electricity for mining, that is, the computing operations used to create new units of the currency and to verify transactions. The two main virtual currencies, bitcoin and ethereum, require complicated calculations that are extremely energy-intensive.
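
To give a sense of where that electricity goes, here is a minimal proof-of-work sketch in Python (the difficulty is set absurdly low for illustration; the real bitcoin network demands incomparably more hashing): the only way to find a valid block is to try nonces by brute force, and every failed attempt is computation, and therefore energy, spent on nothing but security.

    # Minimal proof-of-work sketch: hash candidate blocks until the digest
    # starts with enough zero hex digits. Real mining uses a far harder target.
    import hashlib

    def mine(block_data: str, difficulty: int = 4):
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1  # each failed try is wasted computation (and energy)

    nonce, digest = mine("block with pending transactions")
    print(nonce, digest)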

According to Digiconomist, bitcoin’s energy consumption peaked at between 60 and 73 TWh in October 2018. On an annualized basis, in mid-April 2021, this figure was somewhere between 50 and 120 TWh, which is higher than the energy consumption of a country such as Kazakhstan. These figures are even more staggering when expressed per transaction: 432 kWh per transaction on 6 May 2019, and over 1,000 kWh in mid-April 2021, which is equivalent to the annual electricity consumption of a 30 m² studio apartment in France.

A comparison is often made with the Visa electronic payment system, which consumes roughly 300,000 times less energy than bitcoin for each transaction. The figures cannot be strictly compared, but they clearly show that bitcoin transactions are extremely energy-intensive compared with routine electronic transactions.
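
As a rough order-of-magnitude check of the figures above (the annual transaction count below is an assumption of roughly 300,000 on-chain bitcoin transactions per day, not a number taken from the article):

    # Back-of-the-envelope check of the per-transaction figures quoted above.
    annual_consumption_kwh = 110e9           # ~110 TWh/year, within the 50-120 TWh range
    transactions_per_year = 300_000 * 365    # assumed ~300,000 on-chain transactions/day
    per_tx_kwh = annual_consumption_kwh / transactions_per_year
    print(round(per_tx_kwh))                 # on the order of 1,000 kWh per transaction

    visa_ratio = 300_000                     # "roughly 300,000 times less" per transaction
    print(round(per_tx_kwh / visa_ratio * 1000, 1))  # a few watt-hours per Visa payment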

How can we find a balance?

There are solutions to reduce the cost and energy impact of bitcoins, such as using green energy or increasing the energy efficiency of mining computers.

However, computer technology must still be improved to make this possible. Most importantly, the miners’ reward for mining new bitcoins and verifying transactions is expected to decrease in the future, forcing them to consume more energy to ensure the same level of income.
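
The decreasing reward mentioned here is built into the protocol: the number of new bitcoins created with each block is halved every 210,000 blocks, roughly every four years. A quick sketch of that schedule:

    # Bitcoin's block subsidy halves every 210,000 blocks (roughly every four
    # years), so miners' income from newly created coins shrinks by design.
    subsidy_btc = 50.0        # reward per block when the network launched in 2009
    for era in range(6):
        first_block = era * 210_000
        print(f"from block {first_block:>9,}: {subsidy_btc} BTC per block")
        subsidy_btc /= 2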

The initiators of this technology consider that the innovation offered by bitcoin promotes a free world market and connects the world financially. However, it remains a challenge to find the right balance between promoting an innovative technology and deterring the crime and reducing the ecological impact associated with it.

Donia Trabelsi, associate professor of finance, Institut Mines-Télécom Business School; Michel Berne, economist, director of training (retired), Institut Mines-Télécom Business School; and Sondes Mbarek, associate professor of finance, Institut Mines-Télécom Business School

This article was republished from The Conversation under a Creative Commons license. Read the original article (in French).