Natacha Gondran

Mines Saint-Étienne | Circular economy, Ecodesign, Ecological transition, LCA




To effectively roll out circular economy policies within a territory, companies and decision-makers require access to evaluation and simulation tools. The design of these tools, still in the research phase, necessarily requires a more detailed consideration of the impact of human activities, both locally and globally.

“The circular economy enables optimization of the available resources in order to preserve them and reduce pressure on the environment,” explains Valérie Laforest,1 a researcher at Mines Saint-Étienne. Awareness of the need to protect the planet began to develop in earnest in the 1990s and was gradually accompanied by the introduction of various key regulations. For example, the 1996 IPPC (Integrated Pollution Prevention and Control) Directive, which Valérie Laforest helped to implement through her research, aims to prevent and reduce the different types of pollutant emissions. More recently, legislation such as the French Law on Energy Transition for Green Growth (2015) and the Anti-Waste Law for a Circular Economy (2021) has reflected the growing desire to take the environment into account when considering human activities. However, to enable industries to adapt to these regulations, it is essential for them to have access to tools derived from in-depth research on the impacts of their activities.

Decision-support tools for actors

To enable actors to comply with the regulations and reduce their impacts on the environment, they need to be provided with tools adapted to issues that are both global and local. Part of the research on the circular economy therefore concerns the development of such tools. The aim is to design models that are precise enough to be able to characterize and evaluate a system on the scale of an individual territory, while also being general enough to be adapted to territories with other characteristics. Fairly general methodological frameworks can therefore be developed, within which it is possible to determine criteria and indicators specific to certain cases or sectors. These tools should provide decision-makers with the information they need to implement their infrastructures.

At Mines Saint-Étienne and in collaboration with Macéo, a team of researchers is focusing on the development of a tool called ADALIE, which aims to characterize the potential of territories. This tool creates maps of different geographical areas showing different criteria, such as the economic or environmental criteria of these territories, as well as the industries established in them and their impacts. Decision-makers can therefore use this mapping tool as the basis for choosing their priority activity areas. “The underlying issue is about being able to ensure that a territory possesses the dimensions required to implement circular economy strategies, and that they are successful,” Valérie Laforest tells us. In its next phase, the ADALIE program aims to archive experiences of effective territorial practices in order to create databases.

For each territorial study, the research provides a huge volume of different types of information. This data generates models that can then be tested in other territories, which also enables the robustness of the models to be checked according to the chosen indicators. These types of tools help local stakeholders to make decisions on aspects of industrial and territorial economics. “This facilitates reflection on how to develop strategies that bring together several actors affected by different issues and problems within a given territory,” states Valérie Laforest. To this end, it is essential to have access to methodologies that enable the measurement of the different environmental impacts. Two main methods are available.

Measurements of impact on the circular economy

Life cycle analysis (LCA) aims to estimate environmental impacts spanning a large geographical and temporal scale, taking account of issues such as distance transported. LCA seeks to model all potential consumptions and emissions over the entire life span of a system. The models are developed by compiling data from other systems and can be used to compare different scenarios in order to determine the scenario that is likely to have the least impact.
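The comparative logic of LCA can be illustrated with a minimal sketch: sum the modeled impacts over every life-cycle stage for each scenario, then retain the scenario with the smallest total. The stage names, scenario names and impact figures below are hypothetical placeholders, not real life-cycle inventory data.

```python
# Hedged sketch: comparing two scenarios by summing modeled impacts
# (e.g. kg CO2-eq) over all life-cycle stages. All figures are
# illustrative placeholders, not real inventory data.

SCENARIOS = {
    "local sourcing": {"production": 12.0, "transport": 1.5,
                       "use": 4.0, "end of life": 0.8},
    "distant sourcing": {"production": 10.5, "transport": 6.2,
                         "use": 4.0, "end of life": 0.8},
}

def total_impact(stages):
    """Sum the modeled impact over every life-cycle stage."""
    return sum(stages.values())

# Retain the scenario with the lowest cradle-to-grave total
best = min(SCENARIOS, key=lambda name: total_impact(SCENARIOS[name]))
print(best)
```

A real LCA would track many impact categories at once (climate change, eutrophication, land use, and so on) rather than a single score, but the scenario-comparison principle is the same.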

Read more on I’MTech: What is life cycle analysis?

The other approach is the best available techniques (BAT) method. This practice was introduced by the European directives on industrial emissions (the 1996 IPPC Directive, later replaced by the IED). It aims to help European companies achieve performance standards equivalent to benchmark values for their consumption and emission flows. These benchmarks are based on data from samples of European companies, and the granting or refusal of an operating license depends on how a company’s performance compares with this reference sample. BATs are therefore based on European standards and have a regulatory purpose.
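The compliance logic described above, in which a site’s measured flows are compared against benchmark values, can be sketched as follows. The pollutant names and limit values are illustrative placeholders, not actual BAT-associated emission levels from any IED reference document.

```python
# Hedged sketch: flagging a site's measured emission flows against
# hypothetical benchmark values. Pollutants and limits (mg/Nm3) are
# illustrative placeholders, not actual BAT-associated emission levels.

BAT_AEL_UPPER = {"NOx": 150, "SO2": 35, "dust": 5}

def check_compliance(measured):
    """Flag each pollutant flow as compliant (True) when it does not
    exceed the benchmark derived from the reference sample."""
    return {pollutant: measured.get(pollutant, float("inf")) <= limit
            for pollutant, limit in BAT_AEL_UPPER.items()}

site = {"NOx": 120, "SO2": 40, "dust": 3}
print(check_compliance(site))
```

In this toy example, the SO2 flow exceeds its benchmark, which under the regulatory logic described above would jeopardize the operating license until the technique is brought into line.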

BATs are related to companies’ performance in the use phase, i.e. the performance of techniques is closely scrutinized in relation to incoming and outgoing flows during the use phase. LCA, on the other hand, is based on real or modeled data including information from upstream and downstream of this use phase. The BAT and LCA approaches are therefore complementary and not exclusive. For example, between two BAT analyses of a system to ensure its compliance with the regulations, different models of the systems could be created by conducting LCAs in order to determine the technique that has the least impact throughout its entire life cycle.

Planetary boundaries

In addition to quantifying the flows generated by companies, impact measurements must also include the effects of these flows on the environment on a global scale.

To this end, research and practices also focus on the effects of activities in relation to the different planetary boundaries. These boundary levels reflect the capacity of the planet to absorb impacts, beyond which they are considered to have irreversible effects.

The work of Natacha Gondran1 at Mines Saint-Étienne is contributing to the development of methods for assessing absolute environmental sustainability, based on planetary boundaries. “We work on the basis of global limitations, defined in the literature, which correspond to categories of impacts that are subject to thresholds at the global level. If humanity exceeds these thresholds, the conditions of life on Earth will become less stable than they are today. We are trying to implement this in impact assessment tools on the scale of systems such as companies,” she explains. These impacts, such as greenhouse gas emissions, land use, and the eutrophication of water, are not directly visible. They must therefore be represented in order to identify the actions to be taken to reduce them.

Read more on I’MTech: Circular economy, environmental assessment and environmental budgeting

Planetary boundaries are defined at the global level by a community of scientists. Modeling tools enable these boundaries to be used to define ecological budgets that correspond, in a manner of speaking, to the maximum quantity of pollutants that can be emitted without exceeding these global limits. The next challenge is then to design different methods to allocate these planetary budgets to territories or production systems. This makes it possible to estimate the impact of industries or territories in relation to planetary boundaries. “Today, many industries are already exceeding these boundary levels, such as the agri-food industry associated with meat. The challenge is to find local systems that can act as alternatives to these circuits in order to drop below the boundary levels,” explains the researcher. For example, it would be wise to locate livestock production closer to truck farming sites, as livestock effluents could then be used as fertilizer for truck farming products. This could reduce the overall impact of the different agri-food chains on the nitrogen and phosphorus cycles, as well as the impact of transport-related emissions, while improving waste management at the territorial level.
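The budget-allocation reasoning described above can be illustrated with a minimal sketch, assuming a purely per-capita allocation rule (one of several possible methods). All figures are hypothetical placeholders, not published planetary-boundary values.

```python
# Hedged sketch: downscaling a global ecological budget to a territory
# using a per-capita allocation rule. All figures are illustrative
# placeholders, not published planetary-boundary values.

GLOBAL_BUDGET = 6.8e12    # hypothetical global budget, kg CO2-eq per year
WORLD_POPULATION = 8.0e9

def territorial_budget(population):
    """Share of the global budget allocated to a territory in
    proportion to its population (one possible allocation method)."""
    return GLOBAL_BUDGET * population / WORLD_POPULATION

def exceedance_ratio(territory_emissions, population):
    """Ratio above 1 means the territory exceeds its allocated share."""
    return territory_emissions / territorial_budget(population)

# A hypothetical territory of 2 million people emitting 4.5e9 kg CO2-eq/yr
print(round(exceedance_ratio(4.5e9, 2_000_000), 2))
```

The choice of allocation rule (per capita, grandfathering, economic weight, and so on) is itself a research question, and different rules can place the same territory above or below its boundary.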

Together, these different tools provide an increasingly extensive methodological framework for ensuring the compatibility of human activities with the conservation of ecosystems.

1 Valérie Laforest and Natacha Gondran carry out their research in the framework of the Environment, City and Society Laboratory, a joint CNRS research unit composed of 7 members including Mines Saint-Étienne.

Antonin Counillon

This article is part of a 2-part mini-series on the circular economy.


Hospitals put to the test by shocks

Benoît Demil, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, IMT Atlantique – Institut Mines-Télécom

The Covid-19 crisis has put lasting strain on the health care system, in France and around the world. Hospital staff have had to deal with increasing numbers of patients, often in challenging conditions in terms of equipment and space: a shortage of masks and protective equipment initially, then a lack of respirators and anesthetics, and more recently, overloaded intensive care units.

Adding to these difficulties, logistical problems have exacerbated shortages. Under these extreme conditions, and despite everything, the hospital system withstood and absorbed the shock of the crisis. “The hospital system did not crack under pressure,” as stated by Étienne Minvielle and Hervé Dumez, co-authors of a report on the French hospital management system during the Covid-19 crisis.

While it is unclear how long such a feat can be maintained, and at what price, we may also ask questions about the resilience and reliability of the health care system. In other words, how can care capacity be maintained at a constant quality when the organization is under extreme pressure?

We sought to understand this in a study conducted over 14 months during a non-Covid period, with the staff of a critical care unit of a university hospital center.

High reliability organizations

The concepts of resilience and reliability, which have become buzzwords in the current crisis, have been studied extensively for over 30 years in organizational science, particularly in research focusing on High Reliability Organizations (HROs).

This research has offered insights into the mechanisms and factors that enable complex sociotechnical systems to maintain safety and a constant quality of service, even though the risk of failure, with potentially serious consequences, remains ever-present.

The typical example of an HRO is an aircraft carrier. We know that deference to expertise and skills within a working group, permanent learning routines and training explain how it can ensure its primary mission over time. But much less is known about how the parties involved manage the resources required for their work, and how this management affects resilience and reliability.

Two kinds of situations

In a critical care unit, activity is continuous but irregular, both quantitatively and qualitatively. Some days are uneventful, with a low number of patients, common disorders and diseases, and care that does not present any particular difficulties. The risks of the patients’ health deteriorating are of course still present, but remain under control. This is the most frequently observed context: 80 of the 92 intervention situations recorded and analyzed in our research relate to such a context.

At times, however, activity is significantly disrupted by a sudden influx of patients (for example, following a serious automobile accident), or by a rapid and sudden change in a patient’s condition. The tension becomes palpable within the unit, movements are quicker and more precise, conversations between health care workers are brief and focused on what is happening.

In both cases, observations show differentiated management of resources, whether human, technical or relating to space. To understand these differences, we must draw on a concept that has long existed in organizational theory: organizational slack, which was brought to light in 1963 by Richard Cyert and James March.

Slack for shocks

This important concept in the study of organizations refers to excess resources in relation to optimal operations. Organizations or their members accumulate this slack to handle multiple demands, which may be competing at times.

The life of organizations offers a multitude of opportunities for producing and using slack. Examples include the financial reserves a company keeps on hand “just in case”, the safety stock a production manager builds up, the redundancy of certain functions or suppliers, the few extra days allowed for a project, or the oversized budgets a manager negotiates to meet year-end targets. All of these practices, which are quite common in organizations, contribute to resilience in two ways.

First, they make it possible to ward off unpredictable shocks, such as the default of a subcontractor, an employee being out on sick leave, an unforeseen event that affects a project, or a machine breaking down. Moreover, in risk situations, they prevent the disruption of the sociotechnical system by maintaining it in a non-degraded environment.

Second, these practices absorb the adverse effects of shocks when they arise unexpectedly – whether due to a strike or the sudden arrival of patients in an emergency unit.

How do hospitals create slack?

Let us first note that in a critical care unit, the staff produces and uses slack all the time. It comes from negotiations that the head of the department has with the hospital administration to obtain and defend the spaces and staff required for the unit to operate as effectively as possible. These negotiations are far from the everyday care activity, but are crucial for the organization to run effectively.

At the operational level, health care workers also free up resources quickly, in particular in terms of available beds, to accommodate new patients who arrive unexpectedly. The system for managing the order of priority for patients and their transfer is a method commonly used to ensure that there is always an excess of available resources.

In most cases, these practices of negotiation and rapid rotation of resources make it possible for the unit to handle situations that arise during its activity. At times, however, due to the very nature of the activity, such practices may not suffice. How do health care workers manage in such situations?

Constant juggling

Our observations show that other practices offset the temporary lack of resources.

Examples include calling in the unit’s day staff as well as night staff, or staff from outside the unit, to “lend a hand”; reconfiguring the space to create an additional bed with the necessary technical equipment; or negotiating a rapid transfer of patients to other departments.

This constant juggling allows health care workers to handle emergency situations that may otherwise overwhelm them and put patients’ lives in danger. For them, the goal is to make the best use of the resources available, but also to produce them locally and temporarily when required by emergency situations.

Are all costs allowed?

The existence of slack poses a fundamental problem for organizations, in particular those whose activity requires them to be resilient to ensure a high degree of reliability. Keeping unutilized resources on hand “just in case” goes against a managerial approach that seeks to optimize the use of resources, whether human, financial or material, as called for by New Public Management since the 1980s in an effort to lower the costs of public services.

This approach has had a clear impact on the health care system, and in particular on the French hospital system over the past two decades, as the recent example of problems with strategic stocks of masks at the beginning of the Covid pandemic unfortunately illustrated.

Beyond the hospital, military experts have recently made the same observation, noting that “economic concerns in terms of defense, meaning efficiency, are a very recent idea,” which “conflicts with the military notions of ‘reserve,’ ‘redundancy’ and ‘escalation of force,’ which are essential to operational effectiveness and to what is now referred to as resilience.”

Of course, this quest for optimization does not only apply to public organizations. But it often goes hand in hand with greater vulnerability of the sociotechnical systems involved. In any case, this was observed during the health crisis, in light of the optimization implemented at the global level to reduce costs in companies’ supply chains. 

To understand this, one only needs to look at the recent stranding of the Ever Given. Blocked in the Suez Canal for a week, this giant container ship paralyzed 10% of global trade. What lessons can be learned from this?

A phenomenon made invisible in emergencies

First of all, it is important for organizations aiming for high reliability to keep in mind that maintaining slack has a cost, and that they must therefore identify the systems or sub-systems for which resilience must absolutely be ensured. The line between slack that wastes resources and slack that allows for resilience is a very fine one.

Bearing this cost calls for education efforts, since it must not only be fully agreed to by all of the stakeholders, but also justified and defended.

Lastly, the study we conducted in a critical care unit showed that while slack is produced in part during action, it disappears once a situation has stabilized. 

This phenomenon is therefore largely invisible to managers of hospital facilities. While these micro-practices may not be measured by traditional performance indicators, they nevertheless contribute significantly to reliability: this might not be a new lesson, but it is worth repeating to ensure that it is not forgotten.

Benoît Demil, professor of strategic management, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, research professor, IMT Atlantique – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

Anuragini Shirish

Institut Mines-Télécom Business School | Management of information systems, Digital innovations, Technology law

Anuragini Shirish is an Associate Professor at Institut Mines-Télécom Business School, France. She is her institution’s elected member in the governance of LITEM (Laboratoire Innovation Technologies Économie et Management, EA 7363), a joint research laboratory under the University of Paris-Saclay, France.

Her research focuses on studying the humanistic and instrumental impacts of several socio-technical phenomena in the broad areas of digital work, digital innovation and digital society. Her research has been published in international refereed journals including the European Journal of Information Systems (EJIS), Information Systems Journal (ISJ), Communications of the Association of the Information Systems (CAIS) and International Journal of Information and Management (IJIM). She has also presented her work in several premier IS and management conferences including the International Conference on Information Systems (ICIS), the Academy of Management (AOM), Pacific Asia Conference on Information Systems (PACIS), and the Americas Conference on Information Systems (AMCIS), among others. She has been honoured with several awards including the “Outstanding Educator Award” by the Association for Information Systems (AIS) women’s network and the second prize at the Sphinx best thesis award.



A real way to look at fake news

The SARS-CoV-2 virus is not the only thing that has spread during the Covid-19 pandemic: fake news has also made its way around the world. Although fake news existed before, the unprecedented crisis has paved the way for an explosion of it. Anuragini Shirish, a researcher at Institut Mines-Télécom Business School, explains the factors at play in this trend and how it could be limited in the future.

Why has the pandemic been conducive to the emergence of fake news?

Anuragini Shirish: At the individual level, fear and uncertainty are psychological factors that have played an important role. People have fears about the safety of their lives and those of their families, their jobs and their resources, leading to deep uncertainty about both the present and the future. In response, people try to make sense of the situation and understand what is going to happen in order to reassure themselves, from both a health and an economic point of view. To do so, they look for information, regardless of how truthful it is.

How do individuals seek guidance in an unpredictable situation?

AS: The main sources of guidance are institutional resources. One of the important resources is the freedom of the media. In countries like India, the media can be influenced by politicians and people tend not to trust it entirely. In Nordic countries, on the other hand, the media focuses on being as objective as possible and people are taught to adhere to objectivity. When trust in the traditional media is low, as may be the case in France, individuals tend to seek out alternative sources of information. Freedom of the media is therefore an institutional resource: if people have confidence in the strength and impartiality of their media, it tends to lower their level of fear and uncertainty.

Another important resource is the government’s measures to increase economic freedom perceptions. If individuals believe that the government can maintain job security and/or their sources of income throughout the pandemic, including periods of lockdown, this also helps reduce their fear and uncertainty. In countries such as Brazil, India and the United States, this has not been the case.

Lastly, there is the freedom of political expression, which gives individuals the opportunity to express and share their doubts publicly. But in this case, it tends to foster the emergence of fake news. This is one of the findings of a study we conducted with Shirish Srivastava and Shalini Chandra from HEC Paris and the SP Jain School of Global Management.

How is the lack of confidence in institutions conducive to the emergence and dissemination of fake news?

AS: When people trust institutions, they are less likely to seek information from alternative sources. Conversely, when there is a low level of trust in institutions, people tend to react by seeking out all kinds of information on the internet.

Why and how has fake news spread to such an extent?

AS: In order to verify the information they obtain, people tend to share it with their acquaintances and close friends to get their feedback about its validity. And due to their cognitive biases, people tend to consume and share ideas and beliefs they like, even when they’re aware that the information may be false. Fake news is generally structured to evoke a variety of emotions, leading to strong feelings such as anger, fear or sadness, which also helps it spread more easily than information presented in a more rational or neutral way.

Each country has its own characteristics when it comes to the emergence and dissemination of fake news, which is why an understanding of institutional resources helps to identify the factors behind these national differences. The emergence and dissemination of fake news vary widely from country to country: the inhabitants of a country are far more concerned about what is happening in their own country. Fake news is therefore highly context-specific.

Where is most fake news found?

AS: The majority of fake news is found on social media. That’s where it spreads the quickest, since it is extremely easy to share. Social media algorithms also display the information that people like the most, thereby reinforcing their cognitive biases and their desire to share this information. And social media is the number-one medium consumed by individuals, thanks to the ease of mobile access and connectivity available in many countries around the world.

Who creates fake news?

AS: It’s hard to understand the deeper motivations of each individual who creates fake news, since they don’t typically brag about it! Some may do so for economic reasons, by generating “clicks” and the revenue that comes with them. Almost half of fake news is generated for political reasons, to destabilize opposing parties. And sometimes it comes directly from political parties. Uncertain situations like pandemics polarize individuals in society, which facilitates this process. And then there are individuals who may just want to create general confusion, for no apparent economic or political motives.

How can we as individuals contribute to limiting the spread of fake news?

AS: When we aren’t sure about the validity of information, we must not act on it, or share it with others before finding out more. It’s a human tendency to try to verify the legitimacy of information by sharing it, but that’s a bad strategy at a larger scale.  

How can we tell if information may be false?

AS: First of all, we must learn to think critically and not accept everything we see. We must critically examine the source or website that has posted the information and ask why. There is an especially high level of critical thinking in countries such as Finland or the Netherlands, since these skills are taught in high schools and universities, in particular through media studies classes. But in countries where people are not taught to think critically to the same extent, and trust in the media is low, paradoxically, people are more critical of information that comes from the institutional media than of that which comes from social media. Tools like Disinformation Index or Factcheck.org may be used to verify sources in order to check whether or not information is authentic.

Is fake news dangerous?

AS: It depends on the news. During the pandemic, certain light-hearted fake news was spread. It didn’t help people solve their problems, but it provided entertainment for those who needed it. For example, a tweet that appeared in March 2020 claimed that a group of elephants in the Yunnan province of China had drunk corn wine and fallen asleep, amid the recommendations for social distancing. The tweet was shared 264,000 times and got 915,500 likes and 5,000 comments. It was later “debunked” (proven to be false) in an article that appeared in National Geographic. This kind of fake news does not have any harmful consequences.

But other kinds of fake news have had far more serious consequences. First, political fake news generally reduces trust in institutional resources. It doesn’t offer any solutions and creates more confusion. Paradoxically, this increases fear and uncertainty in individuals and facilitates the dissemination of more fake news, creating a vicious circle! Since it reduces institutional trust, government programs have less of an impact, which also has economic implications. During the pandemic, this has had a major impact on health. Not only because vaccine campaigns have had less of an effect, but because people self-medicated based on fake news and died as a result. People’s mental health has also suffered through prolonged exposure to uncertainty, at times leading to mental illness or even suicide. This is also why the term “infodemic” has appeared.

Is social media trying to fight the spread of fake news?  

AS: During the pandemic, content regulation by the platforms has increased, in particular through UN injunctions and the gradual implementation of the Digital Services Act. For example, Twitter, Facebook and Instagram are trying to provide tools to inform their users which information may be inauthentic. The platforms were not prepared for this kind of regulation, and they generated a lot of revenue from the large volume of information being shared, whether or not it was true. This is changing – let’s hope that it continues over time!

Read more on I’MTech: Digital Service Act: Regulating the content of digital platforms Act 1

What are the levels of institutional control over fake news?

AS: Control over information must be carried out through various approaches, since it affects many aspects of society. The government can increase its presence in the media and social media, and improve internet security. There are two ways of doing this: through the law, by punishing the perpetrators of fake news, but also by increasing collective awareness and providing programs to teach people how to verify information. It’s important to put this aspect in place ahead of time, in order to anticipate potential crises that may occur in the future and to monitor collective awareness levels. However, the goal is not to control the freedom of the media; on the contrary, this freedom increases the contribution of independent media and signals to citizens that the government seeks to be impartial.

How can we improve people’s relationship with information and institutions in general?

AS: Individuals’ behavior is difficult to change in the long term: new regulations are ultimately violated when people see them as meaningless. So, we must also help citizens find value in the rules of society that may be put in place by the government, in order for them to adhere to them.

By Antonin Counillon

Easier access to research infrastructure for the European atmospheric science community

Improving access to large facilities for research on climate and air quality and optimizing use are the objectives of the European ATMO-ACCESS project. Véronique Riffault and Stéphane Sauvage, researchers at IMT Nord Europe, one of the project’s 38 partner institutions, explain the issues involved.

What was the context for developing the ATMO-ACCESS project?

Stéphane Sauvage – The ATMO-ACCESS project responds to an H2020-INFRAIA call for pilot projects, opened specifically for certain research infrastructures (RIs) targeted by the call, to facilitate access for a wide community of users and develop innovative access services that are harmonized at the European level.

IMT Nord Europe’s participation in this project is connected to its significant involvement in the ACTRIS (Aerosol, Clouds, and Trace Gases Research InfraStructure) RI. ACTRIS is a distributed RI bringing together laboratories of excellence and observation and exploration platforms to support research on climate and air quality. It helps improve understanding of past, present and future changes in atmospheric composition and the physico-chemical processes that contribute to regional climate variability.

What is the goal of ATMO-ACCESS?

S.S. – ATMO-ACCESS is intended for the extended atmospheric science community. It involves three RIs (ACTRIS, ICOS and IAGOS), combining stationary and mobile observation and exploration platforms, calibration centers and data centers. It’s a pilot project aimed at developing a new model of integrating activities for this infrastructure, in particular by providing a series of recommendations for harmonized, innovative access procedures to help establish a sustainable overall framework.

What resources will be used to reach this goal?

S.S. – The project has received €15 million in funding, including €100,000 for IMT Nord Europe, where four research professors and a research engineer are involved. ATMO-ACCESS will provide scientific and industrial users with physical and remote access to 43 operational European atmospheric research facilities, including ground observation stations and simulation chambers, as well as mobile facilities and calibration centers, which are essential components of RIs.

Why is it important to provide sustainable access to research facilities in the field of atmospheric science?

Véronique Riffault – The goal is to optimize the use of large research facilities, pooling efforts and avoiding duplication for the sake of streamlining and the environmental transition. At the same time, the project promotes scientific excellence and maintains a high level in the transfer of knowledge and expertise, international collaborations, training for young scientists, and the contribution of RIs to innovative technologies and economic development.

What role do IMT Nord Europe researchers play in this consortium?

V.R. – IMT Nord Europe researchers are responsible for developing virtual training tools for the users of these research facilities and their products. Within this scientific community, IMT Nord Europe has recognized expertise in developing innovative learning resources (Massive Open Online Courses or MOOCs, serious games), building on resources the school has already created in collaboration with its Educational Engineering center: in particular, a first MOOC in English on the causes and impacts of air pollution, and a serious game that should be incorporated into a second module of this MOOC, currently in development.

As part of ATMO-ACCESS, a pilot SPOC (Small Private Online Course) will present the benefits of and issues related to this infrastructure, and a serious game will draw on the data provided by observatories and stored in data centers, while video tutorials for certain instruments or methodologies will help disseminate good practices.

Who are your partners and how will you collaborate scientifically?

V.R. – The project is coordinated by CNRS and brings together 38 partner institutions from 19 European countries. We’ll be working with scientific colleagues from a variety of backgrounds: calibration centers responsible for ensuring measurement quality, data centers for the technical development of resources, and of course the community as a whole, to best respond to expectations and engage in a continuous improvement process. In addition to the academic world, other users will be able to benefit from the tools developed through the ATMO-ACCESS project: major international stakeholders and public authorities (ESA, EEA, EUMETSAT, EPA, governments, etc.) as well as the private sector.

The project launch meeting has just been held. What are the next important steps?

V.R. – That’s right, the project was launched in mid-May. The first meeting for the working group in which IMT Nord Europe is primarily involved is scheduled for after the summer break. Our first deliverable will be the interdisciplinary SPOC for atmospheric science, planned for less than two years from now. The project will also launch its first call for access to RI intended for atmosphere communities and beyond.

Interview by Véronique Charlet

Also read on I’MTech

fission spin, nucléaire

Nuclear fission reveals new secrets

Almost 80 years after the discovery of nuclear fission, it continues to unveil its mysteries. The latest to date: an international collaboration has discovered what makes the fragments of nuclei spin after fission. This offers insights into how atomic nuclei work and into improving our future nuclear power plants.

Take the nuclei of uranium-238 (the ones used in nuclear power plants), bombard them with neutrons, and watch how they break down into two nuclei of different sizes. Or, more precisely, observe how these fragments spin. This is, in short, the experiment conducted by researchers from 37 institutes in 16 countries, led by the Irène Joliot-Curie Laboratory in Orsay, in the Essonne department. Their findings, which offer insights into nuclear fission, have been published in the journal Nature. Several French teams took part in this discovery.  

The mystery of spinning nuclei

But why is there a need to conduct this kind of experiment? Don’t we understand fission perfectly, since the phenomenon was discovered in the late 1930s by German chemists Otto Hahn and Fritz Strassmann and Austrian physicist Lise Meitner? Aren’t there hundreds of nuclear fission reactors around the world that allow us to understand everything? In a word – no. Some mysteries still remain, and among them is the spin of nucleus fragments. Spin is the quantum-world equivalent of angular momentum: roughly speaking, it describes how the nucleus spins like a top.

Even when the original nucleus is not spinning, the nuclei resulting from fission still spin. How do they acquire this angular momentum? What generates this rotation? Up to now, there had been two competing hypotheses. The first, supported by the majority of physicists, was that this spin is created before fission. In this case, there must be a correlation between the spins of the two fragments. The second was that the spin of the fragments is caused after fission, and that these spins are therefore independent of each other. The findings by the 37 teams are decisive: the second hypothesis is correct.

184 detectors and 1,200 hours of radiation

“We have to think of the nucleus like a liquid drop,” explains Muriel Fallot, a researcher at Subatech (a joint laboratory affiliated with IMT Atlantique, CNRS and the University of Nantes), who took part in the experiment. “When it is struck by the neutron, it splits and each fragment is deformed, like a drop that has received an impact. It is when the fragment attempts to return to its spherical shape, to acquire greater stability, that the energy released is converted into heat and rotational energy.”

To achieve these results, the teams irradiated not only uranium-238 but also thorium-232, two nuclei that can split when they collide with a neutron (these are referred to as fissile nuclei). This was carried out over 1,200 hours, between February and June 2018. The fragments dissipate their accumulated energy in the form of gamma radiation, which is detected using 184 detectors placed around the bombarded nuclei. Depending on the fragments’ spin, the photons do not arrive at the same angle, so an analysis of the radiation makes it possible to trace the fragments’ spin. These experiments were conducted at the ALTO accelerator located in Orsay.

Better understanding the strong interaction

These findings, which offer important insights into the fundamental physics of nuclear fission, will now be analyzed by theoretical physicists from around the world. Certain theoretical models will have to be abandoned, while others will incorporate this data to explain fission quantitatively. They should help physicists better predict the stability of radioactive nuclei.

“Today, we are able to predict the lifetime of some heavy nuclei, but not all of them,” says Muriel Fallot. “The more unstable they are, the less we are able to predict them. This research will help us better understand the strong interaction, the force that binds the protons and neutrons within nuclei. Because this strong interaction depends on the spin.”

Applications for reactors of the future

This new knowledge will help researchers working on producing “exotic” nuclei that are very heavy or have a large excess of protons compared to neutrons (or the reverse). Will these findings lead to the production of new, even heavier nuclei? At the very least, they provide food for thought for theorists seeking to further understand nuclear interactions within nuclei.

In addition to being of interest at the fundamental level, these findings have important applications for the nuclear industry. In a nuclear power plant, a nucleus obtained from fission that “spins quickly” gives off a lot of energy in the form of gamma radiation, which can damage certain materials such as fuel cladding. Yet, “We don’t know how to accurately predict this energy dissipation. There is up to a 30% gap between the calculations and the experiments,” says Muriel Fallot. “That has an impact on the design of these materials.” While current reactors are managed well based on the experience acquired, these findings will be especially useful for more innovative future reactors.

Cécile Michaut

values conception, collective design, valeurs

Learning to incorporate values in collective design

Designing projects implies that individuals or groups must pool their values to collaborate effectively. But the various parties involved may be guided by diverging value systems, making it difficult to find compromises and common solutions. Françoise Détienne and Michael Baker, researchers at Télécom Paris, explain how the role of values in collective design can be understood.

How is a value defined in the field of collective design?

Françoise Détienne: In general, the concept of values refers to principles or beliefs that guide individuals’ actions and choices. Put that way, any preference might be seen as a value, so we must limit the definition to the ethical dimension in choices, connected to social and human aspects. The notions of inclusion or privacy protection are examples of these kinds of values.

Michael Baker: Certain notions may be considered absolute values in broad terms – like freedom for example – but they can be divided into different nuances, such as freedom of expression or freedom of choice.  And some terms or expressions are subject to implicit value judgments. For example, the word “business” may, in certain contexts, express a negative value judgment, although it refers to something neutral from a values perspective. In order to identify the underlying values in interactions produced in collective design situations, we must therefore go beyond language by taking into account the context in which statements are made.

How can we understand the role of values in the design process?

FD: Most of the current approaches are based on the concepts of Value Sensitive Design (VSD), which treat values as discrete, independent criteria that must simply be added to the other design criteria. Most of the time, however, individual and collective values are organized into systems that we refer to as ideologies, meaning the sets of values underlying individual and collective viewpoints. We have proposed a new approach called Ideologically Embedded Design (IED), which differentiates between several levels at which value systems operate: the form of participation and its underlying principles, the evolution of the design and decision-making process, the group or community involved in the process, and its production. This approach also emphasizes the interactions and the possible co-evolution between these levels.

How has the understanding of the role of values in design evolved?

MB: Up to now, values in design have been analyzed based on the objects or physical infrastructure resulting from projects, which reflect certain political and social choices. The analyses carried out based on these objects allowed us to extract values through an ex-post deconstruction. But the current design ergonomics movement seeks instead to analyze how values come into play in the design process and how to deal with value conflicts.

What are some organizations where thinking about values in advance is a priority?

FD: In general, the design of collaborative organizations is rooted in strong values. Participatory housing, which aims to implement shared governance systems, is a good example. The considerations of the individuals involved focus primarily on how they must be organized, based on values that are in line with sharing, such as respect, tolerance and equity in decision-making. In communities like these, the stakes of such values are high, since the goal is to live together successfully.

MB: Many online communities give significant thought to values. One of the best examples is Wikipedia. The Wikipedia community is based on values such as open access to knowledge, free participation of contributors, and neutrality of point of view. Should disagreements rooted in opposing value systems arise, there is no real way to “resolve” the conflict. In this case, to represent the diversity of viewpoints, the conflict may be handled by dividing the text into different sections, each of which reflects a different viewpoint. For example, an article on “Freud” may be divided into sections that represent the topic from the viewpoint of behavioral psychologists, neuropsychologists, psychoanalysts, etc.

Are there discrepancies at times between the values promoted or upheld by an organization and the way they are applied on a concrete level?

MB: There is, indeed, a disconnect at times between the values advanced by an organization and the way they are actually implemented. For example, the notion of “collaboration” may be put forth as a positive value, with various rhetorical uses. For the last decade or so, this term has had a positive connotation and is sometimes used for image and marketing purposes, along the same lines as greenwashing. Research is also being carried out on the possible differences and tensions between an organization’s institutional discourse and how groups actually work within the organization.

Are there conflicting values within the same organization at times?

FD: At a certain level of definition of values, this is often the case.  An important issue is clarity in the definition of values during discussions and debates, since each individual may have a different interpretation. So it’s important to support the co-construction of the meaning of values through dialogue, and identify whether or not there are truly competing values.

MB: In discussions about a design, viewpoints must evolve in order to reach a compromise, but that does not mean that each individual’s ideologies will change drastically over time. Almost by definition, it seems, values are stable and typically change only very slowly (except through a radical “conversion”).  So we must understand each individual’s underlying ideologies and frame discussions about the decision-making process by taking them into account. For example, it’s helpful to set out in advance the ways in which the process is collaborative or participative, and if there must be equitable participation between the various stakeholders. The organizational framework is also very  value-oriented.

What are some concrete methods that can help improve collaboration?

FD: Various methods can be applied to improve the alignment and compromise of values within a group. While approaches such as VSD help identify values, ensuring that debates are constructive is not easy. We propose methods from constructive ergonomics such as role playing, organizational simulation and imagining use situations, as well as reflective methods. For example, self-confrontation techniques can be put in place by filming a working group and then having the group members watch the video. This gives them the opportunity to think in a structured way about the  respective underlying values that guided their collective activity. Visualization tools can also help resolve such debates.

How can conflicts be resolved in the event of disagreements about values?

FD: In order to resolve conflicts that may arise, the use of a debate moderator who has been trained in advance for this role can prove to be very helpful. What are referred to as “avoidance” strategies may also be used, such as momentarily redirecting the discussion toward more practical questions, to avoid crystallizing conflicts and opposing viewpoints.

MB: It’s also important to redirect discussion toward compromises that allow different values to coexist. To do so, it can be helpful to bring the debate back to a level focusing on more general values. Sometimes, the more individuals specify what they mean by a value, the more viewpoints may oppose and lead to conflict. 

FD: And last but not least, this leads us to rethink the timeframe for design activity, to allow time for the co-construction and evolution of values (which will in all likelihood be slow), for negotiation and, possibly, to leave conflict resolution open. The emphasis is then not on producing a solution but on the process itself.

By Antonin Counillon

Bitcoin crash: cybercrime and over-consumption of electricity, the hidden face of cryptocurrency

Donia Trabelsi, Institut Mines-Télécom Business School ; Michel Berne, Institut Mines-Télécom Business School et Sondes Mbarek, Institut Mines-Télécom Business School

Wednesday 19 May will be remembered as the day of a major cryptocurrency crash: -20% for dogecoin, -19% for ethereum, -22% for dfinity, the supposedly infinite blockchain that was recently launched with a bang. The best-known of these currencies, bitcoin, limited the damage to 8.5% (US $39,587) after being down by as much as 30% over the course of the day. It is already down 39% from its record value reached in April.

Elon Musk has gone from being seen as an idol to a traitor in the cryptocurrency market. Commons.wikimedia.org

Very few of the 5,000 cryptocurrencies recorded today have experienced growth. The latest ones to be launched, “FuckElon” and “StopElon”, say a lot about the identity of the individual considered to be responsible for this drop in prices set off over a week ago.

The former idol of the cryptocurrency world and iconic leader of Tesla Motors, Elon Musk, now seems to be seen as a new Judas by these markets. The founders of “StopElon” have even stated that their aim is to drive up the price of their new cryptocurrency in order to buy shares in Tesla and oust its top executive. However, bitcoin’s relatively smaller drop seems attributable to the reassuring signals it has sent.

Elon Musk sent shockwaves rippling through the crypto world last week when he announced that it would no longer be possible to pay for his cars in bitcoin, reversing the stance he had taken in March. He even hinted that Tesla may sell all of its bitcoins. As the guest host of the Saturday Night Live comedy show in early May, he had already caused dogecoin to plummet by referring to it as a “hustle” during a sketch, even though he had appeared on the show to support it.

 

The reason for his change of heart? The fact that it is harmful to the planet, as transactions using this currency require high electricity consumption. “Cryptocurrency is a good idea on many levels and we believe it has a promising future, but this cannot come at great cost to the environment,” stated Musk, who is also the head of the SpaceX space projects.

China also appears to have played a role in Wednesday’s events. As the country is getting ready to launch a digital yuan, its leaders announced that financial institutions would be prohibited from using cryptocurrency. “After Tesla’s about-face, China twisted the knife by declaring that virtual currencies should not and cannot be used in the market because they are not real currencies,” declared Fawad Razaqzada, analyst at Thinkmarkets, to AFP yesterday.

While a single man’s impact on the price of these assets – which have seen a dramatic rise over the course of a year – may be questioned, his recent moves and about-face urge us to at least examine the ethical issues they raise. Our research has shown that there are at least two categories of issues.

The darknet and ransomware

The ethical issues surrounding cryptocurrencies remain closely related to the nature and very functioning of these assets. Virtual currencies are not associated with any governmental authority or institution. The bitcoin system was even specifically designed to avoid relying on conventional trusted intermediaries, such as banks, and escape the supervision of central banks. The value of a virtual currency therefore relies entirely, in theory, on the trust and honesty of its users, and on the security of an algorithm that can track all of the transactions.

Yet, due to their anonymity, lack of strict regulation and gaps in infrastructure, cryptocurrencies also appear to be likely to attract groups of individuals who seek to use them in a fraudulent way. Regulatory concerns focus on their use in illegal trade (drugs, hacking and theft, illegal pornography), cyberattacks and their potential for funding terrorism, laundering money and evading taxes.

Illegal activities accounted for no less than 46% of bitcoin transactions from 2009 to 2017, amounting to US $76 billion per year over this period, which is equivalent to the scale of US and European markets for illegal drugs. In April 2017, approximately 27 million bitcoin market participants were using bitcoin primarily for illegal purposes.

One of the best-known examples of cybercrime involving cryptocurrency is still the “Silk Road.” In this online black marketplace dedicated to selling drugs on the darknet (the part of the internet that can only be accessed with specific protocols), payments were made exclusively in cryptocurrencies.

In 2014, at a time when the price of bitcoin was around US $150, the FBI’s seizure of over US $4 million in bitcoins on the Silk Road gave an idea of the magnitude of the problem facing regulators. At the time, the FBI estimated that this sum accounted for nearly 5% of the total bitcoin economy.

Cryptocurrencies have also facilitated the spread of attacks using ransomware, malware that blocks companies’ access to their own data, and will only unblock it in exchange for a cryptocurrency ransom payment. A study carried out by researchers at Google revealed that victims paid over US $25 million in ransom between 2015 and 2016. In France, according to a Senate report submitted in July 2020, such ransomware attacks represent 8% of requests for assistance from professionals on the cybermalveillance.gouv.fr website and 3% of requests from private individuals.

Energy-intensive assets

The main cryptocurrencies use a large quantity of electricity for mining, meaning the computing operations required to create new units and verify transactions. The two main virtual currencies, bitcoin and ethereum, require complicated calculations that are extremely energy-intensive.

According to Digiconomist, bitcoin’s peak energy consumption was between 60 and 73 TWh in October 2018. On an annualized basis, in mid-April 2021, this figure was somewhere between 50 and 120 TWh, which is higher than the energy consumption of a country such as Kazakhstan. These figures are even more staggering when given per transaction: on 6 May 2019, the figure was 432 kWh per transaction, and over 1,000 kWh in mid-April 2021, which is equivalent to the annual consumption of a 30 m² studio apartment in France.

A comparison is often made with the Visa electronic payment system, which requires roughly 300,000 times less energy per transaction than bitcoin. The figures cannot be strictly compared, but they clearly show that bitcoin transactions are extremely energy-intensive compared to routine electronic transactions.
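As a sanity check, the two figures quoted above can be combined in a short back-of-the-envelope calculation. The sketch below uses only the numbers given in the article (1,000 kWh per bitcoin transaction, a factor of roughly 300,000 versus Visa); the implied per-transaction figure for Visa is a derived estimate, not a figure from the article.

```python
# Order-of-magnitude check on the per-transaction energy figures.
btc_kwh_per_tx = 1000.0   # bitcoin, mid-April 2021 (article figure)
visa_ratio = 300_000      # bitcoin reportedly uses ~300,000x more energy per transaction

# Implied Visa consumption, converted from kWh to Wh.
visa_wh_per_tx = btc_kwh_per_tx * 1000 / visa_ratio

print(f"Implied Visa consumption: ~{visa_wh_per_tx:.1f} Wh per transaction")
```

This works out to a few watt-hours per card transaction, which is why the two payment systems, even if not strictly comparable, sit at entirely different orders of magnitude.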

How can we find a balance?

There are solutions to reduce the cost and energy impact of bitcoins, such as using green energy or increasing the energy efficiency of mining computers.

However, computer technology must still be improved to make this possible. Most importantly, the miners’ reward for mining new bitcoins and verifying transactions is expected to decrease in the future, forcing them to consume more energy to ensure the same level of income.

The initiators of this technology consider that the innovation offered by bitcoin promotes a free world market and connects the world financially. However, it remains a challenge to find the right balance between promoting an innovative technology and deterring the crime and reducing the ecological impact associated with it.

Donia Trabelsi, associate professor of finance, Institut Mines-Télécom Business School; Michel Berne, economist, director of training (retired), Institut Mines-Télécom Business School; and Sondes Mbarek, associate professor of finance, Institut Mines-Télécom Business School

This article was republished from The Conversation under a Creative Commons license. Read the original article (in French).

Thermiup

ThermiUp: a new heat recovery device

ThermiUP helps meet the challenge of energy-saving in buildings. This start-up, incubated at IMT Atlantique, is set to market a device that transfers heat from grey water to fresh water. Its director, Philippe Barbry, gives us an overview of the system.

What challenges does the start-up ThermiUp help meet?

Philippe Barbry: Saving energy is an important challenge from a societal point of view, but also in terms of regulations. In the building industry, there are increasingly strict thermal regulations. The previous regulations were established in 2012, while the next ones will come into effect in 2022 and will include CO2 emissions related to energy consumption. New buildings must meet current regulations. Our device reduces energy needs for heating domestic water, and therefore helps real estate developers and social housing authorities comply with regulations.

What is the principle behind ThermiUP?

PB: It’s a device that exchanges energy between grey water, meaning little-polluted waste water from domestic use, and fresh water. The exchanger is placed as close as possible to the domestic water outlet so that this water loses a minimum of heat energy. The exchanger connects the water outlet pipe with that of the fresh water supply.

On average, water from a shower is at 37°C and cools down slightly at the outlet: it is around 32°C when it arrives in our device. Cold water is at 14°C on average. Our exchanger preheats it to 25°C. Showers represent approximately 80% of the demand for domestic hot water and the exchanger makes it possible to save a third of the energy required for the total domestic hot water production.
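The savings claimed in the interview can be checked directly from the temperatures given. The sketch below assumes a target hot-water temperature of 40°C (a typical shower setting, not stated in the interview) to estimate the fraction of heating energy recovered.

```python
# Temperatures quoted in the interview (°C).
T_grey = 32.0     # grey water reaching the exchanger
T_cold = 14.0     # incoming fresh water
T_preheat = 25.0  # fresh water after preheating by the exchanger
T_hot = 40.0      # assumed target shower temperature (not given in the interview)

# Exchanger effectiveness: fraction of the available temperature
# difference actually transferred to the fresh water.
effectiveness = (T_preheat - T_cold) / (T_grey - T_cold)

# Fraction of the heating energy saved for a shower, assuming the
# water heater must raise the water from T_cold to T_hot.
shower_saving = (T_preheat - T_cold) / (T_hot - T_cold)

# Showers account for ~80% of domestic hot water demand (per the interview).
overall_saving = 0.80 * shower_saving

print(f"exchanger effectiveness: {effectiveness:.2f}")
print(f"saving per shower:       {shower_saving:.2f}")
print(f"overall hot-water saving: {overall_saving:.2f}")
```

Under this assumption the overall saving comes out at roughly one third of the energy required for domestic hot water production, consistent with the figure given in the interview.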

Is grey water heat recovery an important energy issue in the building sector?

PB: Historically, most efforts have focused on heating and insulation for buildings. But great strides have been made in this sector and these aspects now account for only 30% of energy consumption in new housing units. As a result, domestic hot water now accounts for 50% of these buildings’ energy consumption.  

What is the device’s life expectancy?

PB: That’s one of the advantages of our exchanger: its life expectancy is equivalent to that of a building, which is considered to be 50 years. It’s a passive system, with no electronics, moving parts or motor; it is based simply on gravity and energy transfer. It can’t break down, which represents a significant advantage for real estate developers. ThermiUP reduces energy demand and is also compatible with other systems such as solar.

How does your exchanger work?

PB: It is not a traditional plate heat exchanger, since that would get dirty too quickly. Our research and development was based on other types of exchangers. It is a device made of copper, an easily recycled material. We spent two years at IMT Atlantique optimizing the prototype’s exchange performance and geometry, along with its industrial manufacturing technique. But I can’t say more about that until it becomes available on the market in the next few months.

Do you plan to implement this device in other types of housing than new buildings?

PB: For now, we only plan to target the new-build market with our device, which is a big market, since approximately 250,000 multi-dwelling housing units are built each year in France. In the future, we’ll work on prototypes for individual houses as well as for the renovation sector.

Learn more about ThermiUp

By Antonin Counillon