

Encrypting and watermarking health data to protect it

As medicine and genetics make increasing use of data science and AI, the question of how to protect this sensitive information is becoming increasingly important to all those involved in health. A team from the LaTIM laboratory is working on these issues, with solutions such as encryption and watermarking. It has just been accredited by Inserm.

The original version of this article has been published on the website of IMT Atlantique

Securing medical data

Securing medical data, and preventing it from being misused for commercial or malicious purposes, distorted or even destroyed, has become a major challenge for both health stakeholders and public authorities. This is particularly relevant at a time when progress in medicine (and genetics) increasingly relies on the use of huge quantities of data, particularly with the rise of artificial intelligence. Several recent incidents (cyber-attacks, data leaks, etc.) have highlighted the urgent need to act against this type of risk. The issue also concerns each and every one of us: no one wants their medical information to be accessible to everyone.

“Health data, which is particularly sensitive, can be sold at a higher price than bank data,” points out Gouenou Coatrieux, a teacher-researcher at LaTIM (the Medical Information Processing Laboratory, shared by IMT Atlantique, the University of Western Brittany (UBO) and Inserm), who is working on this subject in conjunction with Brest University Hospital. To enable this data to be shared while limiting the risks, LaTIM is using two techniques: secure computing and watermarking.

Secure computing, which combines a set of cryptographic techniques for distributed computing with other approaches, ensures confidentiality: the outsourced data is encoded in such a way that calculations can still be performed on it. The research organisation that receives the data – be it a public laboratory or a private company – can study it, but never has access to the original version, which it cannot reconstruct. The data therefore remains protected.
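To make the idea of computing on encoded data concrete, here is a minimal Python sketch of additive secret sharing, one classic building block of secure computing. It is only an illustration of the principle, not LaTIM's actual protocol: the prime modulus, the two-server setting and the helper names are assumptions chosen for the example.

```python
# Toy illustration of outsourced computation via additive secret sharing.
# NOT the LaTIM protocol -- just a sketch of the principle that data can be
# "encoded" (split into random shares) yet still support computation.
import random

P = 2**61 - 1  # public prime modulus; all arithmetic is done modulo P

def share(value, n_parties=2):
    """Split a value into n random shares that sum to the value mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# A hospital secret-shares two sensitive measurements between two servers.
a_shares = share(120)   # e.g. a blood-pressure reading
b_shares = share(80)

# Each server adds its own shares locally, never seeing the real values.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]

# Only when the result shares are combined does the sum appear.
assert reconstruct(sum_shares) == 200
print("Securely computed sum:", reconstruct(sum_shares))
```

Each server only ever sees uniformly random numbers, yet the sum computed share by share reconstructs to the true result.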


Gouenou Coatrieux, teacher-researcher at LaTIM
(Laboratoire de traitement de l’information médicale, shared by IMT Atlantique, Université de Bretagne occidentale (UBO) and Inserm)

Discreet but effective watermarking

Watermarking (known in French as tatouage, or “tattooing”) involves introducing a minor, imperceptible modification into the medical images or data entrusted to a third party. “We simply modify a few pixels in an image, for example to change the colour slightly, a subtle change that makes it possible to encode a message,” explains Gouenou Coatrieux. The identifier of the last person to access the data can thus be embedded in it. This method does not prevent the file from being used, but if a problem occurs it makes it very easy to identify the person who leaked it. The watermark thus guarantees traceability. It also acts as a deterrent, because users are informed of the mechanism. The technique has long been used to combat digital video piracy. Encryption and watermarking can also be combined: this is known as crypto-watermarking.
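As a rough illustration of the pixel-level idea described above, here is a minimal least-significant-bit watermarking sketch in Python. It is a toy example only: LaTIM's schemes are more sophisticated (robust, reversible and combinable with encryption), and the function names and 16-bit identifier are assumptions made for the example.

```python
# Minimal LSB watermarking sketch, assuming 8-bit grayscale pixel values.
# Illustrates imperceptibly encoding an identifier in an image.
def embed(pixels, user_id, n_bits=16):
    """Hide an n_bits identifier in the LSBs of the first n_bits pixels."""
    bits = [(user_id >> i) & 1 for i in range(n_bits)]
    marked = list(pixels)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b   # each pixel changes by at most 1 level
    return marked

def extract(pixels, n_bits=16):
    """Read the identifier back from the LSBs."""
    return sum((pixels[i] & 1) << i for i in range(n_bits))

image = [200, 201, 199, 198] * 100           # stand-in for grayscale pixel data
watermarked = embed(image, user_id=0xBEEF)   # e.g. the last accessor's ID
assert extract(watermarked) == 0xBEEF        # traceability: recover who accessed it
```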

Initially, the LaTIM team focused on the protection of medical images. A joint laboratory was therefore created with Medecom, a Breton company specialising in this field, which produces software dedicated to radiology.

Multiple fields of application

Subsequently, LaTIM extended its research to the entire field of cyber-health. This work has led to the filing of several patents. A former doctoral student and engineer from the school has also founded a company, WaToo, specialising in data watermarking. A Cyber Health team at LaTIM, the first in this field, has just been accredited by Inserm. This multidisciplinary team of researchers, research engineers, doctoral students and post-docs covers several fields of application: protection of medical images, genetic data and health ‘big data’. In particular, it works on the databases used for AI and deep learning, and on the security of processing that uses AI. “For all these subjects, we need to be in constant contact with health and genetics specialists,” stresses Gouenou Coatrieux, head of the new entity. “We also take into account standards in the field, such as DICOM, the international standard for medical imaging, and legal issues such as those relating to privacy rights under the European GDPR regulation.”

The Cyber Health team recently contributed to a project called PrivGen, selected by the Labex (laboratory of excellence) CominLabs. The ongoing work started with PrivGen aims to identify markers of certain diseases in a secure manner, by comparing patients’ genomes with those of healthy people, and to analyse some of the patients’ genomes. But the volumes of data and the computing power required to analyse them are so large that the data has to be shared, taken out of its original information systems and sent to supercomputers. “This data sharing creates an additional risk of leakage or disclosure,” warns the researcher. “PrivGen’s partners are currently working on a technical solution to secure the processing, in particular to prevent patient identification.”

Towards the launch of a chaire (French research consortium)

An industrial chaire called Cybaile, dedicated to cybersecurity for trusted artificial intelligence in health, will also be launched next fall. LaTIM will partner with three other organizations: the Thales group, Sophia Genetics and the start-up Aiintense, a specialist in neuroscience data. With the support of Inserm and the backing of the Regional Council of Brittany, it will focus in particular on securing the training of AI models in health, in order to help with decision-making: screening, diagnoses and treatment advice. “If we have a large amount of data, and therefore representations of the disease, we can use AI to detect signs of anomalies and set up decision support systems,” says Gouenou Coatrieux. “In ophthalmology, for example, we rely on a large quantity of images of the back of the eye to identify or detect pathologies and treat them better.”


What is the metaverse?

Although it is only in the prototype stage, the metaverse is already making quite a name for itself. This term, which comes straight out of a science fiction novel from the 1990s, now describes the concept of a connected virtual world, heralded as the future of the Internet. So what’s hiding on the other side of the metaverse? Guillaume Moreau, a Virtual Reality researcher at IMT Atlantique, explains.

How can we define the metaverse?

Guillaume Moreau: The metaverse offers an immersive and interactive experience in a virtual and connected world. Immersion is achieved through the use of technical devices, mainly Virtual Reality headsets, which allow you to feel present in an artificial world. This world can be imaginary, or a more or less faithful copy of reality, depending on whether we’re talking about an adventure video game or the reproduction of a museum, for example. The other key aspect is interaction. The user is a participant, so when they do something, the world around them immediately reacts.

The metaverse is not a revolution, but a democratization of Virtual Reality. Its novelty lies in the commitment of stakeholders like Meta, aka Facebook – a major investor in the concept – to turn experiences that were previously solitary or limited to small groups into massive, multi-user experiences – in other words, to simultaneously interconnect a large number of people in three-dimensional virtual worlds, and to monetize the whole concept. This raises questions of IT infrastructure, uses, ethics and health.

What are its intended uses?

GM: Meta wants to move all internet services into the metaverse. This is not realistic, because there will be, for example, no point in buying a train ticket in a virtual world. On the other hand, I think there will be not one, but many metaverses, depending on different uses.

One potential use is video games, which are already massively multi-user, but also virtual tourism, concerts, sports events, and e-commerce. A professional use allowing face-to-face meetings is also being considered. What the metaverse will bring to these experiences remains an open question, and there are sure to be many failures out of thousands of attempts. I am sure that we will see the emergence of meaningful uses that we have not yet thought of.

In any case, the metaverse will raise challenges of interoperability, i.e. the possibility of moving seamlessly from one universe to another. This will require the establishment of standards that do not yet exist and that will likely, as is often the case, be imposed by the largest players on the market.

What technological advances have made the development of these metaverses possible today?

GM: There have been notable hardware advances: graphics cards now offer significant display capabilities, and Virtual Reality headsets have reached a resolution close to the limits of human eyesight. Combining these two technologies results in a wonderful contradiction.

On the one hand, the headsets are built around a compromise: they must offer the largest possible field of view while remaining light, small and energy self-sufficient. On the other hand, graphics cards consume a great deal of power and generate a great deal of heat. Therefore, to preserve the headsets’ battery life, the calculations behind the metaverse display have to be done on remote server farms before the images are streamed back. That’s where 5G networks come in, whose potential for new applications, like the metaverse, is yet to be explored.

Could the metaverse support the development of new technologies that would increase immersion and interactivity?

GM: One way to give the user more ways to act is to set them in motion. There is an interesting research topic on the development of multidirectional treadmills. This is a much more complicated problem than it seems, and it only handles the horizontal plane – so no slopes, steps, etc.

Otherwise, immersion is mainly achieved through sensory integration, i.e. our ability to feel all our senses at the same time and to detect inconsistencies. Currently, immersion systems only stimulate sight and hearing, but another sense that would be of interest in the metaverse is touch.

However, there are a number of challenges associated with so-called ‘haptic’ devices. Firstly, complex computer calculations must be performed to detect a user’s actions to the nearest millisecond, so that force feedback can be felt without seeming strange or delayed. Secondly, there are technological challenges. The fantasy of an exoskeleton that responds strongly, quickly and safely in a virtual world will not come true: beyond a certain level of power, robots must be kept in cages for safety reasons. Furthermore, we currently only know how to provide force feedback on one point of the body – not yet on the whole body.

Does that mean it is not possible to stimulate senses other than sight and hearing?

GM: Ultra-realism is not inevitable; it is possible to cheat and trick the brain by using sensory substitution, i.e. by mixing a little haptics with visual effects. By modifying the visual stimulus, it is possible to make haptic stimuli appear more diverse than they actually are. There is a lot of research to be done on this subject. As far as the other senses are concerned, we don’t know how to do very much. This is not a major problem for a typical audience, but it calls into question the accessibility of virtual worlds for people with disabilities.

One of the questions raised by the metaverse is its health impact. What effects might it have on our health?

GM: We already know that the effects of screens on our health are not insignificant. In 2021, the French National Agency for Food, Environmental and Occupational Health & Safety (ANSES) published a report specifically targeting the health impact of Virtual Reality, which is a crucial component of the metaverse. Visual disorders and Virtual Reality Sickness – a form of simulator sickness that affects many people – are therefore certain consequences of exposure to the metaverse.

We also know that virtual worlds can be used to influence people’s behavior. Currently, this has a positive goal and is being used for therapeutic purposes, including the treatment of certain phobias. However, it would be utopian to think that the opposite is not possible. For ethical and logical reasons, we cannot conduct research aiming to demonstrate that the technology can be used to cause harm. It will therefore be the uses that dictate the potentially harmful psychological impact of the metaverse.

Will the metaverses be used to capture more user data?

GM: Yes, that much is obvious. The owners and operators of the metaverse will be able to retrieve information on the direction of your gaze in the headset, or on the distance you have traveled, for example. It is difficult to say how this data will be used at the moment. However, the metaverse is going to make its use more widespread. Currently, each website has data on us, but this information is not linked together. In the metaverse, all this data will be grouped together to form even richer user profiles. This is the other side of the coin, i.e. the exploitation and monetization side. Moreover, given that the business model of an application like Facebook is based on the sale of targeted advertising, the virtual environment that the company wants to develop will certainly feed into a new advertising revolution.

What is missing to make the metaverse a reality?

GM: Technically, all the ingredients are there except perhaps the equipment for individuals. A Virtual Reality headset costs between €300 and €600 – an investment that is not accessible to everyone. There is, however, a plateau in technical improvement that could lower prices. In any case, this is a crucial element in the viability of the metaverse, which, let us not forget, is supposed to be a massively multi-user experience.

Anaïs Culot


Better network-sharing with NOMA

The rise in the number of connected devices will lead to increased congestion of frequencies available for data circulation. Non-Orthogonal Multiple Access (NOMA) is one of the techniques currently being studied to improve the hosting capacity of networks and avoid their saturation.

To access the internet, a mobile phone must exchange information with base stations, devices commonly known as relay antennas. These data exchanges operate on frequency sub-bands, channels specific to each base station. To serve multiple connected users, a channel is allocated to each one. With the rise in the number of connected objects, there will not be enough sub-bands available to host them all.

To mitigate this problem, Catherine Douillard and Charbel Abdel Nour, telecommunications researchers at IMT Atlantique, have been working on NOMA: a system that places multiple users on the same channel, unlike the current system. “Rather than allocating a frequency band to each user, device signals are superposed on the same frequency band,” explains Douillard.

Sharing resources

“The essential idea of NOMA involves making a single antenna work to serve multiple users at the same time,” says Abdel Nour. To go even further, the researchers are working on power-domain NOMA, “an approach that aims to separate users sharing the same frequency on one or more antennas according to their transmitting power,” continues Douillard. This system provides more equitable access to spectrum resources and available antennas across users. Typically, when a device has difficulty accessing the network, it may try to access a resource already occupied by another user. The antenna’s transmitting power is then adapted so that the information sent by the device successfully arrives at its destination, while limiting ‘disturbances’ for the user.

Superposing multiple users on the same resource creates problems in accessing it. For communication to work, the signals sent by the devices need to be received at sufficiently different strengths so that the antennas can tell them apart. If the signal strengths are similar, the antennas will mix them up. This causes interference, in other words information-jamming phenomena, which can hinder the smooth playing of a video or a round of an online game.

Interference: an intrinsic problem for NOMA

To avoid interference, receivers are fitted with decoders that differentiate between signals according to their reception quality. When the antenna receives the superposed signals, it identifies the one with the best reception quality, decodes it and subtracts it from what it received. It then recovers the lower-quality signal. Once the signals are identified, the base station gives each one access to the network. “This means of handling interference is quite simple to implement in the case of two signals, but much less so when there are many,” states Douillard.
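The decoding order described above corresponds to what the literature calls successive interference cancellation (SIC). The short Python sketch below illustrates the principle on two superposed BPSK users sharing one frequency band; the power split, noise level and channel model are illustrative assumptions, not values from the researchers’ work.

```python
# Toy power-domain NOMA sketch with BPSK symbols and successive
# interference cancellation (SIC). Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
s_far  = rng.choice([-1.0, 1.0], n)   # symbols for the far user (gets more power)
s_near = rng.choice([-1.0, 1.0], n)   # symbols for the near user

p_far, p_near = 0.8, 0.2              # power split on the shared frequency band
tx = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near   # superposed signal

rx = tx + 0.05 * rng.standard_normal(n)   # received signal with additive noise

# SIC: decode the stronger (far-user) signal first...
far_hat = np.sign(rx)
# ...subtract its estimated contribution, then decode the weaker signal.
residual = rx - np.sqrt(p_far) * far_hat
near_hat = np.sign(residual)

print("far-user bit error rate :", np.mean(far_hat != s_far))
print("near-user bit error rate:", np.mean(near_hat != s_near))
```

If the two power levels were too close, the first decoding step would fail and both users would suffer, which is exactly the interference problem the paragraph above describes.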

 “To handle interference, there are two main possibilities,” explains Abdel Nour. “One involves canceling the interference, or in other words, the device receivers detect which signals are not intended for them and eliminate them, keeping only those sent to them,” adds the researcher. This approach can be facilitated by interference models, namely those studied at IMT Nord Europe. The second solution involves making the antennas work together. By exchanging information about the quality of connections, they can implement algorithms to determine which devices should be served by NOMA, while avoiding interference from their signals.

Intelligent allocation

“We are trying to ensure that resource allocation techniques adapt to user needs, while adjusting the power they need, with no excess,” states Abdel Nour. Depending on the number of users and the applications being used, the number of antennas in play will vary. If a lot of devices are trying to access the network, multiple antennas can be used at the same time. In the opposite situation, a single antenna may be enough.

Thanks to algorithms, the base stations learn to recognize the different characteristics of devices, like the kinds of applications being used when the device is connected. This allows the intensity of the signal emitted by the antennas to be adapted, in order to serve users appropriately. For example, a streaming service will need a higher bit rate, and therefore a stronger transmitting power than a messaging application.

“One of the challenges is to design high-performing algorithms that are energy-efficient,” explains Abdel Nour. By reducing energy consumption, the objective is to achieve lower operating costs than the current network architecture, while allowing for a significant rise in the number of connected users. NOMA and other research into interference are part of an overall approach to increase network hosting capacity. With the development of the Internet of Things in particular, this work will prove necessary to avoid information traffic jams.

Rémy Fauvel

AI-4-Child “Chaire” research consortium: innovative tools to fight against childhood cerebral palsy

In conjunction with the GIS BeAChild, the AI-4-Child team is using artificial intelligence to analyze images related to cerebral palsy in children. This could lead to better diagnoses, innovative therapies and progress in patient rehabilitation – as well as a real breakthrough in medical imaging.

The original version of this article was published on the IMT Atlantique website, in the News section.

Cerebral palsy is the leading cause of motor disability in children, affecting nearly two out of every 1,000 newborns. And it is irreversible. The AI-4-Child chaire (French research consortium), managed by IMT Atlantique and the Brest University Hospital, is dedicated to fighting this dreaded disease, using artificial intelligence and deep learning, which could eventually revolutionize the field of medical imaging.

“Cerebral palsy is the result of a brain lesion that occurs around birth,” explains François Rousseau, head of the consortium, professor at IMT Atlantique and a researcher at the Medical Information Processing Laboratory (LaTIM, INSERM unit). “There are many possible causes – prematurity or a stroke in utero, for example. This lesion, of variable importance, is not progressive. The resulting disability can be more or less severe: some children have to use a wheelchair, while others can retain a certain degree of independence.”

Created in 2020, AI-4-Child brings together engineers and physicians. The result of a call for ‘artificial intelligence’ projects from the French National Research Agency (ANR), it operates in partnership with the company Philips and the Ildys Foundation for the Disabled, and benefits from various forms of support (Brittany Region, Brest Metropolis, etc.). In total, the research program has a budget of around €1 million for a period of five years.

François Rousseau, professor at IMT Atlantique and head of the AI-4-Child chaire (research consortium)

Hundreds of children being studied in Brest

AI-4-Child works closely with BeAChild*, the first French Scientific Interest Group (GIS) dedicated to pediatric rehabilitation, headed by Sylvain Brochard, professor of physical medicine and rehabilitation (MPR). Both structures are linked to the LaTIM lab (INSERM UMR 1101), housed within the Brest CHRU teaching hospital. The BeAChild team is also highly interdisciplinary, bringing together engineers, doctors, pediatricians and physiotherapists, as well as psychologists.

Hundreds of children from all over France and even from several European countries are being followed at the CHRU and at Ty Yann (Ildys Foundation). By bringing together all the ‘stakeholders’ – patients and families, health professionals and imaging specialists – on the same site, Brest offers a highly innovative approach, which has made it a reference center for the evaluation and treatment of cerebral palsy. This has enabled the development of new therapies to improve children’s autonomy and made it possible to design specific applications dedicated to their rehabilitation.

“In this context, the mission of the chair consists of analyzing, via artificial intelligence, the imagery and signals obtained by MRI, movement analysis or electroencephalograms,” says Rousseau. These observations can be made from the fetal stage or during the first years of a child’s life. The research team is working on images of the brain (location of the lesion, possible compensation by the other hemisphere, link with the disability observed, etc.), but also on images of the neuro-musculo-skeletal system, obtained using dynamic MRI, which help to understand what is happening inside the joints.

‘Reconstructing’ faulty images with AI

But this imaging work is complex. The main pitfall is the poor quality of the images collected, due to motion blur or artifacts during acquisition. So AI-4-Child is trying to ‘reconstruct’ them, using artificial intelligence and deep learning. “We are relying in particular on good-quality views from other databases to achieve satisfactory resolution,” explains the researcher. Eventually, these methods should be applicable to routine images.

Significant progress has already been made. A doctoral student is studying images of the ankle obtained with dynamic MRI and ‘enriched’ by other images using AI – static images, but in very high resolution. “Despite a rather poor initial quality, we can obtain decent pictures,” notes Rousseau. Significant differences in the shape of the ankle bone structure were observed between patients and are being interpreted with the clinicians. The aim will then be to better understand the origin of these deformations and to propose adjustments to the treatments under consideration (surgery, botulinum toxin, etc.).

The second area of work for AI-4-Child is rehabilitation. Here again, imaging plays an important role: during rehabilitation courses, patients’ gait is filmed using infrared cameras and a system of sensors and force plates in the movement laboratory at the Brest University Hospital. The ‘walking signals’ collected in this way are then analyzed using AI. For the moment, the team is in the data acquisition phase.

Several areas of progress

The problem, however, is that a patient often does not walk in the same way during the course and when they leave the hospital. “This creates a very strong bias in the analysis,” notes Rousseau. “We must therefore check the relevance of the data collected in the hospital environment… and focus on improving the quality of life of patients, rather than the shape of their bones.”

Another difficulty is that the data sets available to the researchers are limited to a few dozen images – whereas some AI applications require several million, not to mention the fact that this data is not homogeneous, and that there are also losses. “We have therefore become accustomed to working with little data,” says Rousseau. “We have to make sure that the quality of the data is as good as possible.” Nevertheless, significant progress has already been made in rehabilitation. Some children are able to ride a bike, tie their shoes, or eat independently.

In the future, the AI-4-Child team plans to make progress in three directions: improving images of the brain, observing bones and joints, and analyzing movement itself. The team also hopes to have access to more data, thanks to a European data collection project. Rousseau is optimistic: “Thanks to data processing, we may be able to better characterize the pathology, improve diagnosis and even identify predictive factors for the disease.”

* BeAChild brings together the Brest University Hospital Centre, IMT Atlantique, the Ildys Foundation and the University of Western Brittany (UBO). Created in 2020 and formalized in 2022 (see the French press release), the GIS is the culmination of a collaboration that began some fifteen years ago on the theme of childhood disability.


Sovereignty and digital technology: controlling our own destiny

Annie Blandin-Obernesser, IMT Atlantique – Institut Mines-Télécom

Facebook has an Oversight Board, a kind of “Supreme Court” that rules on content moderation disputes. Digital giants like Google are investing in the submarine telecommunications cable market. France has had to back pedal after choosing Microsoft to host the Health Data Hub.

These are just a few examples demonstrating that the way in which digital technology is developing poses a threat to the European Union’s and France’s economic independence and cultural identity – and not only that. Sovereignty itself is being called into question: it is threatened by the digital world, but also finds its own form of expression there.

What is most striking is that major non-European digital platforms are appropriating the attributes of sovereignty: a transnational territory, i.e. their market, where they lay down their own norms; a population of internet users; a language; virtual currencies; optimized taxation; and the power to issue rules and regulations. The aspect unique to the digital context is based on the production and use of data and control over access to information. This puts them in a form of competition with countries or the EU.

Sovereignty in all its forms being questioned

The concept of digital sovereignty has matured since it was formalized around ten years ago as an objective to “control our own destinies online”. The current context is different from the one in which it emerged. Now, it is sovereignty in general that is seeing a resurgence of interest, or even souverainism (an approach that prioritizes protecting sovereignty).

This topic has never been so politicized. Public debate is structured around themes such as state sovereignty with regard to the EU and EU law, economic independence, or even strategic autonomy with regard to the rest of the world, citizenship and democracy.

In reality, digital sovereignty is built on the basis of digital regulation, controlling its material elements and creating a democratic space. It is necessary to take real action, or else risk seeing digital sovereignty fall hostage to overly theoretical debates. This means there are many initiatives that claim to be an integral part of sovereignty.

Regulation serving digital sovereignty

The legal framework of the online world is based on values that shape Europe’s path, specifically, protecting personal data and privacy, and promoting general interest, for example in data governance.

The text that best represents the European approach is the General Data Protection Regulation (GDPR), adopted in 2016, which aims to allow citizens to control their own data, similar to a form of individual sovereignty. This regulation is often presented as a success and a model to be followed, even if it has to be put in perspective.

New European digital legislation for 2022

The current situation is marked by proposed new digital legislation with two regulations, to be adopted in 2022.

It aims to regulate platforms that connect service providers and users or offer services to rank or optimize content, goods or services offered or uploaded online by third parties: Google, Meta (Facebook), Apple, Amazon, and many others besides.

The question of sovereignty is also present in this reform, as shown by the debate around the need to focus on GAFAM (Google, Amazon, Facebook, Apple and Microsoft).

On the one hand, the Digital Markets Act (the forthcoming European legislation) includes strengthened obligations for “gatekeeper” platforms, which intermediate and end-users rely on. This affects GAFAM, even if it may be other companies that are concerned – like Booking.com or Airbnb. It all depends on what comes out of the current discussions.

And on the other hand, the Digital Services Act is a regulation for digital services that will structure the responsibility of platforms, specifically in terms of the illegal content that they may contain.

Online space, site of confrontation

Having legal regulations is not enough.

“The United States have GAFA (Google, Amazon, Facebook and Apple), China has BATX (Baidu, Alibaba, Tencent and Xiaomi). And in Europe, we have the GDPR. It is time to no longer depend solely on American or Chinese solutions!” declared French President Emmanuel Macron during an interview on December 8 2020.

Interview between Emmanuel Macron and Niklas Zennström (CEO of Atomico). Source: Atomico on Medium.

The international space is a site of confrontation between different kinds of sovereignty. Every individual wants to truly control their own digital destiny, but we have to reckon with the ambition of countries that demand the general right to control or monitor their online space, such as the United States or China.

The EU and/or its member states, such as France, must therefore take action and promote sovereign solutions, or else risk becoming a “digital colony”.

Controlling infrastructure and strategic resources

With all the focus on intermediary services, there is not enough emphasis placed on the industrial dimension of this topic.

And yet, the most important challenge resides in controlling vital infrastructure and telecommunications networks. The question of submarine cables, used to transfer 98% of the world’s digital data, receives far less media attention than the issue of 5G equipment and resistance to Huawei. However, it demonstrates the need to promote our cable industry in the face of the hegemony of foreign companies and the arrival of giants such as Google or Facebook in the sector.

The adjective “sovereign” is also applied to other strategic resources. For example, the EU wants to secure its supply of semiconductors, as it currently depends heavily on Asia. This is the purpose of the European Chips Act, which aims to create a European ecosystem for these components. For Ursula von der Leyen, “it is not only a question of competitiveness, but also of digital sovereignty.”

There is also the question of a “sovereign” cloud, which has been difficult to implement. There are many conditions required to establish sovereignty, including the territorialization of the cloud, trust and data protection. But with this objective in mind, France has created the label SecNumCloud and announced substantial funding.

Additionally, the adjective “sovereign” is used to describe certain kinds of data, for which states should not depend on anyone for their access, such as geographic data. In a general way, a consensus has been reached around the need to control data and access to information, particularly in areas where the challenge of sovereignty is greatest, such as health, agriculture, food and the environment. Development of artificial intelligence is closely connected to the status of this data.

Time for alternatives

Does all that mean facilitating the emergence of major European or national actors and/or strategic actors, start-ups and SMEs? Certainly, such actors will still need to show good intentions, compared to those that shamelessly exploit personal data, for example.

A pure alternative is difficult to bring about. This is why partnerships are developing – though they remain highly criticized – to offer cloud hosting, for example the collaboration between Thales and OVHcloud in October 2021.

On the other hand, there is reason to hope. Open-source software is a good example of a credible alternative to American private technology firms. It needs to be better promoted, particularly in France.

Lastly, cybersecurity and cyberdefense are crucial issues for sovereignty. The situation is critical, with attacks coming from Russia and China in particular. Cybersecurity is one of the major sectors in which France is investing heavily at present and positioning itself as a leader.

Sovereignty of the people

To conclude, it should be noted that challenges relating to digital sovereignty are present in all human activities. One of the major revelations occurred in 2005, in the area of culture, when Jean-Noël Jeanneney observed that Google had defied Europe by creating Google Books and digitizing the continent’s cultural heritage.

The recent period reconnects with this vision, with cultural and democratic issues clearly essential in this time of online misinformation and its multitude of negative consequences, particularly for elections. This means placing citizens at the center of mechanisms and democratizing the digital world, by freeing individuals from the clutches of internet giants, whose control is not limited to economics and sovereignty. The fabric of major platforms is woven from the human cognitive system, attention and freedom. Which means that, in this case, the sovereignty of the people is synonymous with resistance.

Annie Blandin-Obernesser, Law professor, IMT Atlantique – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).

Cleaning up polluted tertiary wastewater from the agri-food industry with floating wetlands

In 2018, IMT Atlantique researchers launched the FloWAT project, based on a hydroponic system of floating wetlands. It aims to reduce polluting emissions from treated wastewater into the discharge site.

Claire Gérente, a researcher at IMT Atlantique, has been coordinating the FloWAT decontamination project, funded by the French National Research Agency (ANR), since its creation. The main aim of the initiative is to provide complementary treatment for tertiary wastewater from the agri-food industry, using floating wetlands. Tertiary wastewater is effluent that undergoes a final phase in the water treatment process to eliminate residual pollutants. It is then drained into the discharge site, an aquatic ecosystem where treated wastewater is released.

These wetlands act as filters for particulate and dissolved pollutants. They can easily be added to existing waste stabilization pond systems in order to further treat this water. One of the project’s objectives is to improve on conventional floating wetlands to increase phosphorus removal, or even to recover phosphorus for reuse, thereby reducing the pressure on this non-renewable resource.

In this context, research is being conducted around the use of a particular material, cellular concrete, to allow phosphorus to be recovered. “Phosphorus removal is of great environmental interest, particularly as it reduces the eutrophication of natural water sources that are discharge sites for treated effluent,” states Gérente. Eutrophication is a process characterized by an increase in nitrogen and phosphorus concentration in water, leading to ecosystem disruption.

Floating wetlands: a nature-based solution

The floating wetland system involves covering an area of water, typically a pond, with plants placed on a floating bed, specifically sedges. The submerged roots act as filters, retaining the pollutants found in the water via various physical, chemical and biological processes. This mechanism is called phytopurification.

Floating wetlands are part of an approach known as nature-based solutions, whereby natural systems, less costly than conventional technologies, are implemented to respond to ecological challenges. To function efficiently, the most important thing is to “monitor that the plants are growing well, as they are the site of decontamination,” emphasizes Gérente.

In order to meet the project objectives, a pilot study was set up at an industrial abattoir and meat-processing site. After being biologically treated, real agri-food effluent is discharged into four pilot ponds, three of which are covered with floating wetlands of various sizes, and one of which is left uncovered as a control. The experimental site is entirely automated and can be controlled remotely to facilitate supervision.

Performance monitoring covers the treatment of organic matter, nitrogen, phosphorus and suspended matter. As well as data on incoming and outgoing water quality, physico-chemical parameters and climate data are constantly monitored. The fate of pollutants in the different components of the treatment system will be determined by sampling and analysis of the plants, sediment and phosphorus-removal material.

These floating wetlands will be the first that are easy to dismantle and recycle, optimized for phosphorus removal and even recovery, and able to treat suspended matter, carbon pollution and nutrients.

Photograph of the experimental system

Improving compliance with regulation

In 1991, the French government established a limit on phosphorus levels to reduce water pollution, in order to preserve biodiversity and prevent algal bloom, which is when one or several algae species grow rapidly in an aquatic system.

The floating wetlands developed by IMT Atlantique researchers could allow these thresholds to be better respected, by improving capacities for water treatment. Furthermore, they are part of a circular economy approach, as beyond collecting phosphorus for reuse, the cellular concrete and polymers used as plant supports are recyclable or reusable.

Further reading on I’MTech: Circular economy, environmental assessment and environmental budgeting

To create these wetlands, you simply have to place the plants on the discharge ponds, which makes the technique cheap and easy to implement. However, while such systems integrate rather well into the landscape, they are not suitable for all environments. The climate in northern countries, for example, may slow down or impair how the plants function. Furthermore, results take longer to obtain with natural methods like floating wetlands than with conventional methods. Nearly 7,000 French agri-food companies have been identified as potential users of these floating wetlands. Nevertheless, the FloWAT coordinator reminds us that “this project is a feasibility study; our role is to evaluate the effectiveness of floating wetlands as a filtering system. We will have to wait until the project finishes in 2023 to find out if this promising treatment system is effective.”

Rémy Fauvel


Hospitals put to the test by shocks

Benoît Demil, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, IMT Atlantique – Institut Mines-Télécom

The Covid-19 crisis has put lasting strain on the health care system, in France and around the world. Hospital staff have had to deal with increasing numbers of patients, often in challenging conditions in terms of equipment and space: a shortage of masks and protective equipment initially, then a lack of respirators and anesthetics, and more recently, overloaded intensive care units.

Logistical problems have added to these difficulties and exacerbated the shortages. Under these extreme conditions, and despite everything, the hospital system has withstood and absorbed the shock of the crisis. “The hospital system did not crack under pressure,” as stated by Étienne Minvielle and Hervé Dumez, co-authors of a report on the French hospital management system during the Covid-19 crisis.

While it is unclear how long such a feat can be maintained, and at what price, we may also ask questions about the resilience and reliability of the health care system. In other words, how can care capacity be maintained at a constant quality when the organization is under extreme pressure?

We sought to understand this in a study conducted over 14 months during a non-Covid period, with the staff of a critical care unit of a university hospital center.

High reliability organizations

The concepts of resilience and reliability, which have become buzzwords in the current crisis, have been studied extensively for over 30 years in organizational science research – more particularly in studies focusing on High Reliability Organizations (HROs).

This research has offered insights into the mechanisms and factors that enable complex sociotechnical systems to maintain safety and a constant quality of service, although the risk of failure remains possible, with serious consequences.

The typical example of an HRO is an aircraft carrier. We know that deference to expertise and skills within a working group, permanent learning routines and training explain how it can ensure its primary mission over time. But much less is known about how the parties involved manage the resources required for their work, and how this management affects resilience and reliability.

Two kinds of situations

In a critical care unit, activity is continuous but irregular, both quantitatively and qualitatively. Some days are uneventful, with a low number of patients, common disorders and diseases, and care that does not present any particular difficulties. The risks of the patients’ health deteriorating are of course still present, but remain under control. This is the most frequently-observed context: 80 of the 92 intervention situations recorded and analyzed in our research relate to such a context.

At times, however, activity is significantly disrupted by a sudden influx of patients (for example, following a serious automobile accident), or by a rapid and sudden change in a patient’s condition. The tension becomes palpable within the unit, movements are quicker and more precise, conversations between health care workers are brief and focused on what is happening.

In both cases, observations show differentiated management of resources, whether human, technical or relating to space. To understand these differences, we must draw on a concept that has long existed in organizational theory: organizational slack, which was brought to light in 1963 by Richard Cyert and James March.

Slack for shocks

This important concept in the study of organizations refers to excess resources in relation to optimal operations. Organizations or their members accumulate this slack to handle multiple demands, which may be competing at times.

The life of organizations offers a multitude of opportunities for producing and using slack. Examples include the financial reserves a company keeps on hand “just in case”, the safety stock a production manager builds up, the redundancy of certain functions or suppliers, the few extra days allowed for a project, oversized budgets negotiated by a manager to meet his year-end targets etc. All of these practices, which are quite common in organizations, contribute to resilience in two ways.

First, they make it possible to avoid unpredictable shocks, such as the default of a subcontractor, an employee being out on sick leave,  an unforeseen event that affects a project or a machine breaking down. Moreover, in risk situations, they prevent the disruption of the sociotechnical system by maintaining it in a non-degraded environment.

Second, these practices absorb the adverse effects of shocks when they arise unexpectedly – whether due to a strike or the sudden arrival of patients in an emergency unit.

How do hospitals create slack?

Let us first note that in a critical care unit, the staff produces and uses slack all the time. It comes from negotiations that the head of the department has with the hospital administration to obtain and defend the spaces and staff required for the unit to operate as effectively as possible. These negotiations are far from the everyday care activity, but are crucial for the organization to run effectively.

At the operational level, health care workers also free up resources quickly, in particular in terms of available beds, to accommodate new patients who arrive unexpectedly.  The system for managing the order of priority for patients and their transfer is a method commonly used to ensure that there is always an excess of available resources.

In most cases, these practices of negotiation and rapid rotation of resources make it possible for the unit to handle situations that arise during its activity. At times, however, due to the very nature of the activity, such practices may not suffice. How do health care workers manage in such situations?

Constant juggling

Our observations show that other practices offset the temporary lack of resources.

Examples include calling in the unit’s day staff as well as night staff, or others from outside the unit to “lend a hand”, reconfiguring the space to create an additional bed with the necessary technical equipment or negotiating a rapid transfer of patients to other departments.  

This constant juggling allows health care workers to handle emergency situations that might otherwise overwhelm them and put patients’ lives in danger. For them, the goal is to make the best use of the resources available, but also to produce them locally and temporarily when required by emergency situations.

Are all costs allowed?

The existence of slack poses a fundamental problem for organizations – in particular those whose activity requires them to be resilient to ensure a high degree of reliability. Keeping unutilized resources on hand “just in case” goes against a managerial approach that seeks to optimize the use of resources, whether human, financial or material – as called for by New Public Management since the 1980s, in an effort to lower the costs of public services.

This approach has had a clear impact on the health care system, and in particular on the French hospital system over the past two decades, as the recent example of problems with strategic stocks of masks at the beginning of the Covid pandemic unfortunately illustrated.

Beyond the hospital, military experts have recently made the same observation, noting that “economic concerns in terms of defense, meaning efficiency, are a very recent idea,” which “conflicts with the military notions of ‘reserve,’ ‘redundancy’ and ‘escalation of force,’ which are essential to operational effectiveness and to what is now referred to as resilience.”

Of course, this quest for optimization does not only apply to public organizations. But it often goes hand in hand with greater vulnerability of the sociotechnical systems involved. In any case, this was observed during the health crisis, in light of the optimization implemented at the global level to reduce costs in companies’ supply chains. 

To understand this, one only needs to look at the recent stranding of the Ever Given. Blocked in the Suez Canal, this giant container ship paralyzed 10% of global trade for a week. What lessons can be learned from this?

A phenomenon made invisible in emergencies

First of all, it is important for organizations aiming for high reliability to keep in mind that maintaining slack has a cost, and that they must therefore identify the systems or sub-systems for which resilience must absolutely be ensured. The difference between slack that wastes resources and slack that allows for resilience is a very fine line.

Bearing this cost calls for education efforts, since it must not only be fully agreed to by all of the stakeholders, but also justified and defended.

Lastly, the study we conducted in a critical care unit showed that while slack is produced in part during action, it disappears once a situation has stabilized. 

This phenomenon is therefore largely invisible to managers of hospital facilities. While these micro-practices may not be measured by traditional performance indicators, they nevertheless contribute significantly: this might not be a new lesson, but it is worth repeating to ensure that it is not forgotten.

Benoît Demil, professor of strategic management, I-site Université Lille Nord Europe (ULNE) and Geoffrey Leuridan, research professor, IMT Atlantique – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the  original article (in French).


Nuclear fission reveals new secrets

Almost 80 years after the discovery of nuclear fission, it continues to unveil its mysteries. The latest to date: an international collaboration has discovered what makes the fragments of nuclei spin after fission. This offers insights into how atomic nuclei work and into improving our future nuclear power plants.

Take the nuclei of uranium-238 (the ones used in nuclear power plants), bombard them with neutrons, and watch how they break down into two nuclei of different sizes. Or, more precisely, observe how these fragments spin. This is, in short, the experiment conducted by researchers from 37 institutes in 16 countries, led by the Irène Joliot-Curie Laboratory in Orsay, in the Essonne department. Their findings, which offer insights into nuclear fission, have been published in the journal Nature. Several French teams took part in this discovery.  

The mystery of spinning nuclei

But why is there a need to conduct this kind of experiment? Don’t we understand fission perfectly, since the phenomenon was discovered in the late 1930s by German chemists Otto Hahn and Fritz Strassmann and Austrian physicist Lise Meitner? Aren’t there hundreds of nuclear fission reactors around the world that allow us to understand everything? In a word – no. Some mysteries remain, and among them is the spin of the nucleus fragments. Spin is the quantum-world equivalent of angular momentum: roughly speaking, it describes how the nucleus spins like a top.

Even when the original nucleus is not spinning, the nuclei resulting from fission still spin. How do they acquire this angular momentum? What generates this rotation? Up to now, there had been two competing hypotheses. The first, supported by the majority of physicists, was that this spin is created before fission. In this case, there must be a correlation between the spins of the two fragments. The second was that the spin of the fragments is caused after fission, and that these spins are therefore independent of each other. The findings by the 37 teams are decisive: the second hypothesis is correct.

184 detectors and 1,200 hours of irradiation

“We have to think of the nucleus like a liquid drop,” explains Muriel Fallot, a researcher at Subatech (a joint laboratory affiliated with IMT Atlantique, CNRS and the University of Nantes), who took part in the experiment. “When it is struck by the neutron, it splits and each fragment is deformed, like a drop that has received an impact. It is when the fragment attempts to return to its spherical shape, to acquire greater stability, that the energy released is converted into heat and rotational energy.”

To achieve these results, the teams irradiated not only uranium-238 but also thorium-232, two nuclei that can split when struck by a neutron (so-called fissionable nuclei). This was carried out over 1,200 hours, between February and June 2018. The fragments dissipate the energy they have accumulated in the form of gamma radiation, which is detected using 184 detectors placed around the bombarded nuclei. Depending on the fragments’ spin, the photons do not arrive at the same angle, so an analysis of the radiation makes it possible to trace the fragments’ spin. These experiments were conducted at the ALTO accelerator located in Orsay.

Better understanding the strong interaction

These findings, which offer important insights into the fundamental physics of nuclear fission, will now be analyzed by theoretical physicists from around the world. Certain theoretical models will have to be abandoned, while others will incorporate this data to explain fission quantitatively. They should help physicists to better predict the stability of radioactive nuclei.

“Today, we are able to predict the lifetime of some heavy nuclei, but not all of them,” says Muriel Fallot. “The more unstable they are, the less we are able to predict them. This research will help us better understand the strong interaction, the force that binds the protons and neutrons within nuclei, because this strong interaction depends on the spin.”

Applications for reactors of the future

This new knowledge will help researchers working on producing “exotic” nuclei – very heavy, or with a large excess of protons compared to neutrons (or the reverse). Will these findings lead to the production of new, even heavier nuclei? They would provide food for thought for theorists seeking to further understand nuclear interactions within nuclei.

In addition to being of interest at the fundamental level, these findings have important applications for the nuclear industry. In a nuclear power plant, a nucleus obtained from fission that “spins quickly” gives off a lot of energy in the form of gamma radiation. This can damage certain materials such as fuel cladding. Yet “we don’t know how to accurately predict this energy dissipation. There is up to a 30% gap between the calculations and the experiments,” says Muriel Fallot. “That has an impact on the design of these materials.” While current reactors are managed well based on the experience acquired, these findings will be especially useful for more innovative future reactors.

Cécile Michaut


ThermiUp: a new heat recovery device

ThermiUP helps meet the challenge of energy-saving in buildings. This start-up, incubated at IMT Atlantique, is set to market a device that transfers heat from grey water to fresh water. Its director, Philippe Barbry, gives us an overview of the system.

What challenges does the start-up ThermiUp help meet?

Philippe Barbry: Saving energy is an important challenge from a societal point of view, but also in terms of regulations. In the building industry, there are increasingly strict thermal regulations. The previous regulations were established in 2012, while the next ones will come into effect in 2022 and will include CO2 emissions related to energy consumption. New buildings must meet current regulations. Our device reduces energy needs for heating domestic water, and therefore helps real estate developers and social housing authorities comply with regulations.

What is the principle behind ThermiUP?

PB: It’s a device that exchanges heat between grey water – lightly polluted waste water from domestic use – and fresh water. The exchanger is placed as close as possible to the domestic waste water outlet so that this water loses as little heat as possible. It connects the waste water outlet pipe with the fresh water supply pipe.

On average, water from a shower is at 37°C and cools down slightly at the outlet: it is around 32°C when it arrives in our device. Cold water is at 14°C on average. Our exchanger preheats it to 25°C. Showers represent approximately 80% of the demand for domestic hot water and the exchanger makes it possible to save a third of the energy required for the total domestic hot water production.
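As a rough sanity check of these figures, the overall saving can be estimated from the temperatures quoted above. The short calculation below is only a sketch: the shower delivery temperature (taken here as about 38°C) is an assumption added for the sake of the arithmetic, not a figure from ThermiUp.

```python
# Back-of-the-envelope estimate of the hot-water energy saving.
# Temperatures of 14 C (cold feed), 25 C (preheated) and the 80% shower share
# come from the interview; the 38 C delivery temperature is an assumption.
t_cold      = 14.0   # incoming fresh water (deg C)
t_preheated = 25.0   # fresh water after the exchanger (deg C)
t_delivery  = 38.0   # assumed useful shower temperature (deg C)
shower_share = 0.80  # showers as a share of domestic hot-water demand

# Fraction of the shower heating energy recovered by the exchanger
recovered_per_shower = (t_preheated - t_cold) / (t_delivery - t_cold)

total_saving = shower_share * recovered_per_shower
print(f"Estimated saving on total hot-water energy: {total_saving:.0%}")
```

With these assumptions the estimate comes out at a little over a third, in line with the figure quoted.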

Is grey water heat recovery an important energy issue in the building sector?

PB: Historically, most efforts have focused on heating and insulation for buildings. But great strides have been made in this sector and these aspects now account for only 30% of energy consumption in new housing units. As a result, domestic hot water now accounts for 50% of these buildings’ energy consumption.  

What is the device’s life expectancy?

PB: That’s one of the advantages of our exchanger: its life expectancy is equivalent to that of a building, which is considered to be 50 years. It’s a passive system, with no electronics, moving parts or motor: it relies simply on gravity and heat exchange. It can’t break down, which is a significant advantage for real estate developers. ThermiUP reduces energy demand and can also be combined with other systems, such as solar water heating.

How does your exchanger work?

PB: It is not a traditional plate heat exchanger, since that would get dirty too quickly; our research and development was based on other types of exchangers. The device is made of copper, an easily recycled material. We spent two years at IMT Atlantique optimizing the prototype’s heat-exchange performance and geometry, along with its industrial manufacturing technique. But I can’t say more about that until it becomes available on the market in the next few months.

Do you plan to implement this device in other types of housing than new buildings?

PB: For now, we are only targeting the new-build market, which is a large one: approximately 250,000 housing units in multiple-dwelling buildings are built each year in France. In the future, we’ll work on prototypes for individual houses as well as for the renovation sector.

Learn more about ThermiUp

By Antonin Counillon


Three Mile Island, Chernobyl, Fukushima: the role of accidents in nuclear governance

Stéphanie Tillement, IMT Atlantique – Institut Mines-Télécom, and Olivier Borraz, Sciences Po

Until the 1970s, nuclear power plants were considered to be inherently safe, by design. Accidents were perceived as being highly unlikely, if not impossible, by designers and operators, in spite of recurring incidents that were not publicized.

This changed abruptly in 1979 with the Three Mile Island (TMI) accident in the United States. It was given wide media coverage, despite the fact that there were no casualties, and demonstrated that what were referred to as “major” accidents were possible, with a meltdown in this case.

The decades that followed have been marked by the occurrence of two other major accidents rated as level 7 on the INES (International Nuclear Event Scale): Chernobyl in 1986 and Fukushima in 2011.

Turning point in the 1980s

This article will not address this organization or the invention, in the wake of the Chernobyl accident, of the  INES scale used to rank events that jeopardize safety on a graduated scale, ranging from a deviation from a standard to a major accident.

Our starting point will be the shift that occurred in 1979, when accidents went from being seen as inconceivable to being regarded as possible events, considered and described by nuclear experts as opportunities for learning and improvement.

Accidents therefore provide an opportunity to “learn lessons” in order to enhance nuclear safety and strive for continuous improvement.

But what lessons precisely? Has the most recent accident, Fukushima, led to profound changes in nuclear risk governance, as Chernobyl did?

The end of the human error rationale

Three Mile Island is often cited as the first nuclear accident: despite the technical and procedural barriers in place at the time, the accident occurred – such an accident was therefore possible.

Some, such as sociologist Charles Perrow, even described it as “normal,” meaning inevitable, due to the complexity of nuclear facilities and their highly coupled nature – meaning that the components that make up the system are closely interconnected – which are likely to lead to hard-to-control “snowball effects.”

For institutional, industrial and academic experts, the analysis of the accident changed views on the role of humans in these systems and on human error: accidents went from being a moral problem, attributable to humans’ “bad behavior”, to a systemic problem, attributable to poor system design.

Breaking with the human error rationale, these lessons paved the way for the systematization of learning from experience, promoting a focus on transparency and learning.  

Chernobyl and risk governance

It was with Chernobyl that accidents became “organizational,” leading nuclear organizations and public authorities to introduce structural reforms of safety doctrines, based on recognition of the essential nature of “organizational and cultural problems […] for the safety of operations” (IAEA, 1999).

Chernobyl also marked the beginning of major changes in risk governance arrangements at the international, European and French levels. An array of organizations and legal and regulatory provisions were introduced, with the twofold aim of learning from the accident that occurred at the Ukrainian power plant and preventing such an accident from happening elsewhere.

The French law of 13 June 2006 on “Nuclear Transparency and Safety” (referred to as the TSN law), which established, among other things, the status of the ASN (the French nuclear safety authority) as an administrative authority independent from the government, is one emblematic example.

A possibility for every country

25 years after Chernobyl, Japan experienced an accident at its Fukushima-Daiichi power plant.

Whereas the accident that occurred in 1986 could be attributed in part to the Soviet regime and its RBMK technology, the 2011 catastrophe involved American-designed technology and a country that many considered to be at the forefront of modernity.

With Fukushima, a serious accident once again became a possibility that no country could rule out. And yet, it did not give rise to the same level of mobilization as that of 1986.  

Fukushima – a breaking point?

Ten years after the Japanese catastrophe, it can be said that it did not bring about any profound shifts – whether in the way facility safety is designed, managed and monitored, or in the plans and arrangements designed to manage a similar crisis in France (or in Europe).

This has been shown by the work carried out through the Agoras research project.

As far as preparedness for crisis management is concerned, Fukushima led to a re-examination of the temporal boundaries between the emergency phase and the post-accident phase, and to greater investment in the latter.

This catastrophe also led the French authorities to publish a preparedness plan in 2014 for managing a nuclear accident, making it a part of the common crisis management system.

These two aspects are reflected in the strengthening of the public safety portion of the national crisis management exercises carried out annually in France.   

But, as underscored by recent research, the observation of these national exercises did not reveal significant changes, whether in the way they are organized and carried out, the content of plans and arrangements, or, more generally, in the approach to a crisis caused by a major accident – with the exception of the creation of national groups that can intervene quickly on site (the FARN, EDF’s nuclear rapid action force).

Limited changes

It may, of course, be argued that, as with the effects of the Three Mile Island and Chernobyl accidents, structural transformations take time, and that it may still be too early to conclude that no significant change has occurred.

But the research carried out through the Agoras project leads us to put forward the hypothesis that changes will remain limited, for two reasons.

The first is that structural changes were initiated in the 20 years following the Chernobyl accident. This period saw the rise of organizations dedicated to accident prevention and crisis management preparedness, such as the ASN in France, and European (WENRA, ENSREG) and international cooperation organizations.

These organizations initiated continuous research on nuclear accidents, gradually developing tools for  understanding and responding to accidents, as well as mechanisms for coordination between public officials and industry leaders at the national and international levels.

These tools were “activated” following the Fukushima accident and made it possible to quickly provide an explanation for the accident, launch shared procedures such as supplementary safety assessments (the  much-discussed “stress tests”), and collectively propose limited revisions to nuclear safety standards.

This work contributed to normalizing the accident, by bringing it into existing organizations and frameworks for thinking about nuclear safety.

This helped establish the conviction, among industry professionals and French public authorities, that the  governance regime in place was capable of preventing and responding to a large-scale event, without the need to profoundly reform it.

The inertia of the French system

A second reason comes from the close relationships in France between the major players in the civil nuclear sector (operators – EDF primarily – and regulators – the ASN and its technical support organization IRSN), in particular with regard to establishing and assessing safety measures at power plants.

These relationships form an exceptionally stable organized action system. The Fukushima accident provided a short window of opportunity to impose additional measures on operators.

Read more: L’heure des comptes a sonné pour le nucléaire français (Time for a Reckoning in the French Nuclear Industry)

But this window closed quickly, and the action system returned to a stable state. The inertia of this system can be seen in the production of new regulatory instruments, the development and upgrading of which take several years.   

It can also be seen in the organization of crisis management exercises, which continue to perpetuate distinctions between safety and security, accident and crisis, the facility interiors and the environment, and more generally, between technical and political considerations – distinctions that preserve the structure and content of relationships between regulators and operators.

Learning from accidents

Like Chernobyl, Fukushima was first viewed as an exceptional event: by insisting on the exceptional conjunction of a tsunami of unprecedented magnitude and a nuclear power plant, by highlighting the lack of an independent regulatory agency in Japan, and by pointing to the excessive respect for hierarchy among the Japanese, the aim was to construct a unique event so as to suggest that it could not happen in the same way in other parts of the world.

But, at the same time, a normalization process took place, in France in particular, focusing not so much on the event itself as on the risks it posed for the organization of the nuclear industry, meaning the stakeholders and forms of knowledge holding legitimacy and authority.

The normalization process led to the accident being included in the existing categories, institutions and systems, in order to demonstrate their ability to prevent such an accident from happening and to limit the impact, should such an accident occur.

This was the result of efforts to delineate the boundaries, with some parties seeking to maintain them and others disputing them and trying to change them.

Ultimately, the boundaries upheld so strongly by industry stakeholders (operators and regulators) – between technical and political considerations, between experts and laymen – were maintained.

Relentlessly questioning nuclear governance

While the Fukushima accident was taken up by political and civil society leaders to challenge the governance of the nuclear industry and its “closed-off” nature, operators and regulators in France and throughout Europe quickly took steps to demonstrate their ability both to prevent such an accident, and to manage the consequences, in order to suggest that they could continue to be entrusted with regulating this sector.

As far as making the sector more open to civil society players is concerned, this movement was initiated well before the Fukushima accident (with the TSN Law in 2006, notably), and was, at best, the continuation of a pre-existing trend.

But other boundaries seem to have emerged or been strengthened in recent years, especially between technical factors and human and organizational factors, or safety requirements and other requirements for nuclear organizations (economic and industrial performance in particular), although it is not exactly clear whether this is related to the accidents.

These movements go hand in hand with a bureaucratization of relationships between the regulator and its technical expert, and between these two parties and operators, and require further research in order to investigate their effects on the foundations of nuclear risk governance.

Talking and listening to one another

As like causes produce like effects, the problem is indeed the nuclear industry’s unreceptiveness to any “uncomfortable knowledge” – to borrow the notion introduced by Steve Rayner.

Social science research has long demonstrated that in order to solve complex problems, a wide range of individuals from various backgrounds and training must be brought together, for research that transcends disciplinary and institutional boundaries.

Social science researchers, engineers and public authorities must talk to – and more importantly – listen to one another. For engineers and policy-makers, that means being ready to take into account facts or knowledge that may challenge established doctrines and arrangements and their legitimacy.  

And social science researchers must be ready to go and see nuclear organizations, to get a first-hand look at their day-to-day operations, listen to industry stakeholders and observe working situations.

But our experience, in particular through Agoras, has shown us that not only is such work time-consuming and costly, it is also fraught with pitfalls. For even when one stakeholder does come to see the soundness of certain knowledge, the highly interconnected nature of relationships with other industry stakeholders, who make up the governance system, complicates the practical implementation of this knowledge, and therefore prevents major changes from being made to governance arrangements.

Ultimately, the highly interconnected nature of the nuclear industry’s governance system is arguably one of its vulnerabilities.

Stéphanie Tillement, Sociologist, IMT Atlantique – Institut Mines-Télécom and Olivier Borraz, CNRS Research Director – Centre for the Sociology of Organisations, Sciences Po

This article has been republished from The Conversation under a Creative Commons license. Read the  original article (in French).