Chantal Morley

Institut Mines-Télécom Business School | Management, Information systems, Gender studies

Professor Chantal Morley is a faculty member of Institut Mines-Télécom Business School. She holds a PhD in Information Systems Management from HEC Paris and an accreditation to supervise research (HDR) from IAE Montpellier II. She previously worked as a consultant on IT projects (Steria, CGI) and has published several books on project management and information systems modelling. After graduating from EHESS (sociology of gender), she has been working since 2005 on gender and information technology within the research group Gender@IMT. Her research focuses on the male gendering of the computing field, the dynamics of IT stereotyping, women's inclusion in digital occupations, and feminist approaches to research. In 2018, she developed a MOOC on Gender Diversity in Digital Occupations.

[toggle title=”Find here her articles on I’MTech” state=”open”]

[/toggle]

Indoor air

Indoor Air: underestimated pollutants

While some sources of indoor air pollution are well known, others remain poorly understood by researchers. This is the case for cleaning products and essential oils. The volatile organic compounds (VOCs) they release, and the way these compounds behave within buildings, are being studied by chemists at IMT Lille Douai.

When it comes to air quality, staying indoors does not keep us safe from pollution. “In addition to outdoor pollutants, which enter buildings, there are the added pollutants from the indoor environment! A wide variety of volatile organic compounds are emitted by building materials, paint and even furniture,” explains Marie Verriele Duncianu, researcher in atmospheric chemistry at IMT Lille Douai. Compressed wood combined with resin, often used to make indoor furniture, is one of the leading sources of formaldehyde. In fact, indoor air is generally more polluted than outdoor air. This observation is not new: it has been the focus of numerous information campaigns by environmental agencies, including ADEME and the OQAI, the French monitoring center for indoor air quality. However, recent results from academic research tend to show that the sources of indoor pollutants are still underestimated, and that their emissions remain poorly understood.

“In addition to sources from construction and interior design, many compounds are emitted by the occupants’ activities,” the researcher explains. Little research has been conducted on sources of volatile organic compounds such as cleaning products, cooking activities, and hygiene and personal care products. Unlike their counterparts produced by furniture and building materials, the pollutants originating from residents’ products are much more dynamic. While a wall constantly emits small quantities of VOCs, a cleaning product releases a sudden burst at concentrations up to ten times higher. This rapid emission makes measuring the concentrations and identifying the sources much more complex.

Since they are not as well known, these pollutants linked to users are also less controlled. “They are not taken into account in regulations at all,” explains Marie Verriele Duncianu. “The only legislation related to this issue concerns nursery schools and schools, along with the labelling requirements for construction materials.” Since 1 January 2018, facilities hosting children and young people have been required to monitor the concentrations of formaldehyde and benzene in their indoor air. However, no measures have been imposed regarding the sources of these pollutants. Meanwhile, ADEME has issued a series of recommendations advocating the use of green products for cleaning floors and premises.

The green product paradox

These recommendations come at a time when consumers are becoming increasingly responsible in their purchases, including of cleaning products. Some cleaning products carry an Ecolabel, for example, guaranteeing a smaller environmental footprint. However, the impact of these environmentally friendly products in terms of pollutant emissions has been studied no more than that of their label-free counterparts. Backed by marketing arguments alone, products featuring essential oils are hailed as beneficial without any evidence to support the claims. Simply put, researchers do not yet have a good understanding of the indoor pollution caused by cleaning products, whether traditional or presented as green. It is, however, fairly easy to find false information claiming the opposite.

In fact, it was upon observing misconceptions and “miracle” properties touted on consumer websites that Marie Verriele Duncianu decided to start a new project called ESSENTIAL. “My fellow researchers and I saw statements claiming that essential oils purified the indoor air,” the researcher recalls. “On some blogs, we even read consumer testimonials about how essential oils eliminate pollutants. It’s not true: while they do have an antibacterial cleaning effect, they definitely do not eliminate all air pollutants. On the contrary, they add more!”

In the laboratory, the researchers are studying the behavior of products featuring essential oils. What VOCs do they release? How are they distributed in indoor air?

 

Essential oils are in fact high in terpenes. These molecules are allergenic, particularly for the skin. They can also react with ozone to form fine particles or formaldehyde. By focusing on essential oils and the molecules they release into the air, the ESSENTIAL project aims to help remedy this lack of knowledge about indoor pollutants. The researchers are therefore pursuing two objectives: understanding how the volatile organic compounds emitted by essential oils behave, and determining the risks related to these emissions.

The initial results show unusual emission dynamics. For floor cleaners, “there is a peak concentration of terpenes during the first half-hour following use,” explains Shadia Angulo Milhem, a PhD student working on the project with Marie Verriele Duncianu’s team. “Furthermore, the concentration of formaldehyde begins to increase steadily four hours after the cleaning activity.” Formaldehyde is a closely regulated substance because it is an irritant and is carcinogenic in cases of high and repeated exposure. The concentrations measured up to several hours after the use of cleaning products containing essential oils can be attributed to two factors: first, terpenes react with ozone to form formaldehyde; second, the formaldehyde donors used as preservatives and biocides in the cleaning products decompose over time.
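
These emission dynamics can be pictured with a simple single-zone mass-balance model, in which the indoor concentration is driven by an emission term and diluted by air exchange. The sketch below is only an illustration of that textbook model; the emission profile, room volume and air-exchange rate are arbitrary assumptions, not measurements from the ESSENTIAL project.

```python
# Minimal single-zone mass-balance model of an indoor VOC concentration:
#   dC/dt = E(t) / V - a * C(t)
# where E(t) is the emission rate, V the room volume and a the air-exchange rate.
# All numbers below are illustrative assumptions, not project data.

V = 30.0          # room volume in m^3 (assumed)
a = 0.5 / 3600.0  # air-exchange rate: 0.5 volume per hour, in 1/s (assumed)
dt = 10.0         # time step in seconds
hours = 8
steps = int(hours * 3600 / dt)

def emission(t):
    """Emission rate in µg/s: a short burst from a cleaning product,
    on top of a weak constant emission from walls and furniture."""
    constant_source = 0.05                # µg/s, e.g. building materials
    burst = 5.0 if t < 1800 else 0.0      # µg/s during the first half-hour
    return constant_source + burst

C = 0.0           # concentration in µg/m^3
history = []
for i in range(steps):
    t = i * dt
    C += dt * (emission(t) / V - a * C)   # explicit Euler integration
    history.append(C)

print(f"Peak concentration: {max(history):.1f} µg/m^3")
print(f"Concentration after {hours} h: {history[-1]:.1f} µg/m^3")
```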

A move towards regulatory thresholds?

In the framework of the ESSENTIAL project, the researchers have not only studied cleaning products containing essential oils; they have also looked at essential oil diffusers. The results show characteristic emissions for each device. “Reed diffusers, which are small bottles containing wooden sticks, take several hours to reach full capacity,” Shadia Angulo Milhem explains. “The terpene concentrations then stabilize and remain constant for several days.” Vaporizing devices, on the other hand, which heat the oils, produce a more sudden emission, resulting in terpene concentrations that persist for a shorter time in the home.

Beyond measuring concentrations, the dynamics of the volatile organic compounds that are released are difficult to determine. In some buildings, they can be trapped in porous materials and then released later due to changes in humidity and temperature. One of the areas the researchers want to explore in the future is how these compounds are taken up by indoor surfaces. Understanding the behavior of pollutants is essential for establishing the risks they present. How dangerous a compound is depends on whether it disperses quickly in the air or accumulates for several days in paint or in drop ceilings.

Currently, there are no regulatory thresholds for terpene concentrations in the air, due to a lack of knowledge about the public’s exposure and about short- and long-term toxicity. We must keep in mind that the risk associated with exposure to a pollutant depends on the toxicity of the compound, its concentration in the air and the duration of contact. Upon completion of the ESSENTIAL project, expected in 2020, the project team will provide ADEME with a technical and scientific report. While waiting for legislation to be introduced, the results should at least lead to recommendation sheets on the use of products containing essential oils. This will provide consumers with real information regarding both the benefits and the potentially harmful effects of the products they purchase, a far cry from pseudo-scientific marketing arguments.

New multicast solutions could significantly boost communication between cars.

Effective communication for the environments of the future

Optimizing communication is an essential aspect of preparing for the uses of tomorrow, from new modes of transport to the industries of the future. Reliable communications are a prerequisite for delivering high-quality services. Researchers from EURECOM and the Technical University of Munich are working together to tackle this issue, developing new technology aimed at improving network security and performance.

 

In some scenarios involving wireless communication, particularly for essential public safety services or the management of vehicular networks, one question is vital: what is the most effective way of conveying the same information to a large number of people? The tedious solution would be to repeat the same message to each individual recipient, using a dedicated channel each time. A much quicker way is what is known as multicast. This is what we use when sending an email to several people at the same time, or what happens when a news anchor reads us the news. The sender provides the information only once; the network then duplicates it and sends it through communication channels capable of reaching all recipients.

In addition to TV news broadcasts, multicast is particularly useful for the networks of machines and objects expected to emerge with the arrival of 5G and its future applications. This is the case, for example, with vehicle networks. “In a scenario where cars are all connected to one another, there is a whole bunch of useful information that could be shared with them using multicast technology”, explains David Gesbert, head of the Communication Systems department at EURECOM. “This could be traffic information, notifications about accidents, weather updates, etc.” The issue here is that, unlike TV sets, which do not move about while we are trying to watch the news, cars are mobile.

The mobile nature of recipients means that reception conditions are not always optimal. When driving through a tunnel, behind a large apartment block, or when taking the car out of the garage, it is hard for the transmission to reach us. Despite these constraints – which affect several drivers at the same time – the message still needs to reach everyone for the information service to work properly. “The transmission speed of the multicast has to be slowed down in order for it to be able to function with the car located in the worst reception scenario”, explains David Gesbert. In other words, the data rate must be lowered, or more power deployed, for all users of the network. Just three cars going through a tunnel would be enough to slow down the speed at which potentially thousands of cars receive a message.
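
The “worst car” constraint can be illustrated in a few lines of code: in a conventional multicast, the common transmission rate is capped by the user with the poorest channel. The channel qualities below are invented values used only to show the effect.

```python
# Toy illustration of the "worst user" bottleneck in conventional multicast.
# Achievable rates (in Mbit/s) are invented values for illustration only.

rates = {
    "car_on_highway": 12.0,
    "car_downtown": 8.5,
    "car_in_tunnel": 0.3,     # poor reception drags everyone down
    "car_in_garage": 0.2,
}

# A single multicast stream must be decodable by every recipient,
# so its rate is limited by the weakest link.
multicast_rate = min(rates.values())

# Average rate the same users could sustain individually (unicast).
average_unicast_rate = sum(rates.values()) / len(rates)

print(f"Conventional multicast rate: {multicast_rate} Mbit/s")
print(f"Average individual rate:     {average_unicast_rate:.1f} Mbit/s")
```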

Communication through cooperation

For networks with thousands of users, it is simply not feasible to restrict the distribution characteristics in this way. To tackle this problem, David Gesbert and his team entered into a partnership with the Technical University of Munich (TUM) within the framework of the German-French Academy for the Industry of the Future. Together, the French and German researchers set out to devise a multicast solution that would not be constrained by this “worst car” problem. “Our idea was as follows: we restrict ourselves to a small percentage of reception terminals which receive the message, but in order to offset that, we ensure that these same users are able to retransmit the message to their neighbors”, he explains. In other words: in your garage, you might not receive the message from the closest antenna, but the car out on the street 30 feet from your house will, and it will then be able to relay it efficiently over a short distance.

Researchers from EURECOM and TUM were thus able to develop an algorithm capable of identifying the most suitable vehicles to target. The message is first transmitted to everyone. Depending on whether or not reception is successful, the best candidates are selected to pass on the rest of the information. Distribution to these vehicles is then optimized using MIMO multi-antenna techniques, which exploit multipath propagation. These vehicles are then tasked with retransmitting the message to their neighbors through vehicle-to-vehicle communication. Tests carried out on these algorithms show a drop in network congestion in certain situations. “The algorithm doesn’t provide much out in the country, where conditions tend mostly to be good for everyone”, outlines David Gesbert. “In towns and cities, on the other hand, the number of users in poor reception conditions is a handicap for conventional multicasts, and it is here that the algorithm really helps boost network performance”.
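
A minimal sketch of this two-step idea is given below: transmit once, see who received the message, then pick a few well-placed receivers as relays for their neighbors. It is not the EURECOM/TUM algorithm itself, only a toy greedy selection over invented positions and reception flags.

```python
# Toy sketch of multicast with device-to-device relaying (not the actual
# EURECOM/TUM algorithm): cars that received the message are greedily chosen
# as relays so that every non-covered car has a nearby helper.
import math

# (x, y) positions in metres and whether the first multicast was received.
# All values are invented for illustration.
cars = {
    "A": ((0, 0), True),
    "B": ((40, 10), True),
    "C": ((45, 15), False),   # in a garage
    "D": ((200, 5), True),
    "E": ((210, 0), False),   # behind a building
}
D2D_RANGE = 30.0              # assumed vehicle-to-vehicle range in metres

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

uncovered = {name for name, (_, ok) in cars.items() if not ok}
relays = []

# Greedy selection: repeatedly pick the receiving car that helps the most
# remaining uncovered cars within D2D range.
while uncovered:
    best, helped = None, set()
    for name, (pos, ok) in cars.items():
        if not ok or name in relays:
            continue
        reachable = {u for u in uncovered if distance(pos, cars[u][0]) <= D2D_RANGE}
        if len(reachable) > len(helped):
            best, helped = name, reachable
    if best is None:          # nobody can help the remaining cars
        break
    relays.append(best)
    uncovered -= helped

print("Selected relays:", relays)
print("Still uncovered:", uncovered or "none")
```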

The scope of these results extends beyond car networks, however. One other scenario in which the algorithm could be used is for the storage of popular content, such as videos or music. “Some content is used by a large number of users. Rather than going to search for them each time a request is made within the core network, these could be stored directly on the mobile terminals of users”, explains David Gesbert. In this scenario, our smartphones would no longer need to communicate with the operator’s antenna in order to download a video, but instead with another smartphone with better reception in the area onto which the content has already been downloaded.

More reliable communication for the uses of the future

The work carried out by EURECOM and TUM on multicast technology is part of a broader project, SeCIF (Secure Communications for the Industry of the Future). The various industrial sectors set to benefit from the rise in communication between objects need reliable communication. Adding machine-to-machine communication to multicasts is just one of the avenues explored by the researchers. “At the same time, we have also been taking a closer look at what impact machine learning could have on the effectiveness of communication”, stresses David Gesbert.

Machine learning is making inroads into communication science, providing researchers with solutions to design problems for wireless networks. “Wireless networks have become highly heterogeneous”, explains the researcher. “It is no longer possible for us to optimize them manually because we have lost the intuition in all of this complexity”. Machine learning is capable of analyzing and extracting value from complex systems, enabling researchers to answer questions that are too complex to tackle analytically.

For example, the French and German researchers are looking at how 5G networks are able to optimize themselves autonomously depending on network usage data. In order to do this, data on the quality of the radio channel has to be fed back from the user terminal to the decision center. This operation takes up bandwidth, with negative repercussions for the quality of calls and the transmission of data over the Internet, for example. As a result, a limit has to be placed on the quantity of information being fed back. “Machine learning enables us to study a wide range of network usage scenarios and to identify the most relevant data to feed back using as little bandwidth as possible”, explains David Gesbert. Without machine learning “there is no mathematical method capable of tackling such a complex optimization problem”.
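
The trade-off described here, feeding back only the most useful channel reports under a bandwidth budget, can be illustrated with a simple statistical stand-in for the learned selection rule. The sketch below ranks users by how strongly their reported channel quality correlates with an observed network utility in past data; the data are synthetic placeholders, and a real system would learn far richer selection policies.

```python
# Toy illustration of limited feedback: keep only the k channel-quality
# reports that appear most informative about the network's performance.
# A simple correlation score stands in for a learned selection policy;
# the data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_snapshots = 20, 500

# Past channel-quality reports for each user (arbitrary synthetic values).
cqi = rng.normal(size=(n_snapshots, n_users))

# Pretend the network utility mostly depends on a handful of users
# (e.g. those near cell edges), plus noise.
influential = [2, 7, 11]
utility = cqi[:, influential].sum(axis=1) + 0.3 * rng.normal(size=n_snapshots)

# Score each user's report by its correlation with the observed utility.
scores = [abs(np.corrcoef(cqi[:, u], utility)[0, 1]) for u in range(n_users)]

k = 3  # feedback budget: only k reports may be sent back
selected = sorted(range(n_users), key=lambda u: scores[u], reverse=True)[:k]

print("Users asked to feed back their channel quality:", sorted(selected))
```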

The work carried out by the German-French Academy will be vital when it comes to preparing for the uses of the future. Our cars, our towns, our homes and even our workplaces will be home to a growing number of connected objects, some of which will be mobile and autonomous. The effectiveness of communications is a prerequisite to ensuring that the new services they provide are able to operate effectively.

 

[box type=”success” align=”” class=”” width=””]

The research work by EURECOM and TUM on multicasting mentioned in this article was presented at the International Conference on Communications (ICC), where it received the Best Paper Award in the Wireless Communications category, a highly competitive distinction in this scientific field.

[/box]

domain name

Domain name fraud: is the global internet in danger?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n late February 2019, the Internet Corporation for Assigned Names and Numbers (ICANN), the organization that manages the IP addresses and domain names used on the web, issued a warning on the risks of systemic Internet attacks. Here is what you need to know about what is at stake.

What is the DNS?

The Domain Name System (DNS) links a domain name (for example, ameli.fr for French health insurance) to an IP (Internet Protocol) address (in this case “31.15.27.86”). It is now an essential service, since it makes it easy to memorize the identifiers of digital services without having to know their addresses. Yet, like many older protocols, it was designed to be robust, but not secure.

 

DNS defines zones within which an authority is free to create domain names and communicate them externally. The benefit of this mechanism is that the association between the IP address and the domain name is closely managed. The disadvantage is that several queries are sometimes required to resolve a name, in other words, to associate it with an address.

Many organizations that offer internet services have one or several domain names, which are registered with providers of this registration service. These providers are themselves registered, directly or indirectly, with ICANN, the American organization in charge of organizing the Internet. In France, the reference organization is AFNIC, which manages the “.fr” domain.

We often refer to a fully qualified domain name, or FQDN. In practice, the Internet is divided into top-level domains (TLDs). The initial American domains divided names by type of organization (commercial, university, government, etc.). Then national domains like “.fr” quickly appeared. More recently, ICANN authorized the registration of a wide variety of new top-level domains. The information related to these top-level domains is stored in a group of 13 servers distributed around the globe to ensure reliable and fast responses.

The DNS protocol establishes communication between the user’s machine and a domain name server. This communication allows the name server to be queried in order to resolve a domain name, in other words, to obtain the IP address associated with it. It also allows other information to be obtained, such as the domain name associated with an address, or the mail server associated with a domain name in order to send an electronic message. For example, when we load a page in our browser, the browser first performs a DNS resolution to find the correct address.
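
As a concrete illustration, the resolution a browser triggers can be reproduced in a few lines using the Python standard library; the domain below is simply the example used earlier in the article.

```python
# Minimal illustration of a DNS resolution, as a browser would trigger it:
# ask the system resolver for the IP addresses behind a domain name.
import socket

domain = "ameli.fr"  # example domain used earlier in the article

# getaddrinfo queries the configured DNS resolver (possibly hitting its cache
# or recursing towards the root servers) and returns the associated addresses.
results = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)

addresses = sorted({info[4][0] for info in results})
print(f"{domain} resolves to: {', '.join(addresses)}")
```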

Due to the distributed nature of the database, often the first server contacted does not know the association between the domain name and the address. It will then contact other servers to obtain a response, through an iterative or recursive process, until it has queried one of the 13 root servers. These servers form the root level of the DNS system.

To prevent a proliferation of queries, each DNS server locally stores the responses it receives associating a domain name with an address, for a limited time (the time-to-live). This cache makes it possible to respond more quickly if the same request is made within a short interval.
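
The caching behaviour can be sketched as a small dictionary keyed by domain name, where each entry expires after its time-to-live (TTL); this is a simplified illustration of the principle, not the data structure of any particular DNS server.

```python
# Simplified sketch of a DNS resolver cache with per-entry expiry (TTL).
# Not the internal structure of any real DNS server, just the principle.
import time

class DnsCache:
    def __init__(self):
        self._entries = {}  # domain -> (address, expiry timestamp)

    def put(self, domain, address, ttl_seconds):
        self._entries[domain] = (address, time.time() + ttl_seconds)

    def get(self, domain):
        record = self._entries.get(domain)
        if record is None:
            return None                      # never resolved: must query upstream
        address, expiry = record
        if time.time() > expiry:
            del self._entries[domain]        # stale entry: query upstream again
            return None
        return address                       # fresh entry: answer from cache

cache = DnsCache()
cache.put("ameli.fr", "31.15.27.86", ttl_seconds=30)
print(cache.get("ameli.fr"))   # served from cache while the TTL has not expired
```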

Vulnerable protocol

DNS is a protocol that is generally allowed through security filters, especially within company networks. It can therefore be used by an attacker to bypass these protection mechanisms and communicate with compromised machines. This could, for example, allow the attacker to control networks of bots (botnets). The defense relies on more specific filtering of communications, for example requiring the systematic use of a DNS relay controlled by the victim organization. Analyzing the domain names contained in DNS queries, and comparing them against blacklists or whitelists, then makes it possible to identify and block abnormal queries.

abdallahh/Flickr, CC BY

The DNS protocol also makes denial-of-service attacks possible. In fact, anyone can issue a DNS query to a service while spoofing the source IP address. The DNS server will naturally respond to this false address, whose owner is in fact the victim of the attack, since it receives unwanted traffic. The DNS protocol also makes it possible to carry out amplification attacks, meaning that the volume of traffic sent from the DNS server to the victim is much greater than the traffic sent from the attacker to the DNS server. It therefore becomes easier to saturate the victim’s network link.
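
The amplification effect comes down to a simple ratio between request and response sizes; the byte counts below are rough, illustrative orders of magnitude rather than measured values.

```python
# Back-of-the-envelope illustration of DNS amplification: small queries sent
# with a spoofed source address trigger much larger responses towards the
# victim. Sizes are rough, illustrative orders of magnitude.

query_size = 60        # bytes: a small DNS query
response_size = 3000   # bytes: a large response (e.g. with DNSSEC data)

amplification_factor = response_size / query_size

attacker_uplink_mbps = 10  # bandwidth the attacker actually spends
victim_traffic_mbps = attacker_uplink_mbps * amplification_factor

print(f"Amplification factor: x{amplification_factor:.0f}")
print(f"{attacker_uplink_mbps} Mbit/s of queries becomes "
      f"~{victim_traffic_mbps:.0f} Mbit/s of traffic towards the victim")
```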

The DNS service itself can also become the victim of a denial of service attack, as was the case for DynDNS in 2016. This triggered cascading failures, since certain services rely on the availability of DNS in order to function.

Protection against denial-of-service attacks can take several forms. The most commonly used today is the filtering of network traffic to eliminate excess traffic. Anycast, which replicates the attacked service across several sites, is also a growing solution.

Cache poisoning

A third vulnerability, widely used in the past, consists in attacking the link between the domain name and the IP address. This allows an attacker to hijack a server’s address and attract its traffic. The attacker can therefore “clone” a legitimate service and obtain sensitive information from misled users: usernames, passwords, credit card details, etc. This process is relatively difficult to detect.

As mentioned above, DNS servers store the responses to the queries they have issued for a few minutes and use this information to answer subsequent queries directly. The so-called cache poisoning attack allows an attacker to falsify the association within the cache of a legitimate server: the attacker triggers queries from the targeted intermediate DNS server and floods it with forged responses, and the server accepts the first response that matches its request.

The consequences last only for a while, but during that time, queries made to the compromised server are diverted to an address controlled by the attacker. Since the initial protocol does not include any means of verifying the domain-address association, clients cannot protect themselves against the attack.

This often results in a fragmented Internet: clients communicating with the compromised DNS server are diverted to a malicious site, while clients communicating with other DNS servers are sent to the original site. For the original site, this attack is virtually impossible to detect, except through a decrease in traffic. This decrease in traffic can have significant financial consequences for the targeted organization.

Security certificates

The purpose of secure DNS (Domain Name System Security Extensions, or DNSSEC) is to prevent this type of attack by allowing the user or an intermediate server to verify the association between the domain name and the address. It is based on the use of certificates, such as those used to verify the validity of a website (the little padlock that appears in the browser’s address bar). In theory, verifying the certificate is all that is needed to detect an attack.

However, this protection is not perfect. The verification process for “domain-IP address” associations remains incomplete, partly because a number of registries have not implemented the necessary infrastructure. Although the standard itself was published nearly fifteen years ago, we are still waiting for the deployment of the necessary technology and structures. The emergence of services like Let’s Encrypt has helped spread the use of certificates, which are necessary for secure browsing and DNS protection. However, the use of these technologies by registries and service providers remains uneven; some countries are more advanced than others.

Although residual vulnerabilities do exist (such as direct attacks on registries to obtain domains and valid certificates), DNSSEC offers a solution to the type of attack recently denounced by ICANN. These attacks rely on DNS fraud, or more precisely on the falsification of DNS records in registry databases, which means that either these registries have been compromised, or they are vulnerable to the injection of false information. This modification of a registry’s database can be accompanied by the injection of a certificate, if the attacker has planned for it, which makes it possible to circumvent DNSSEC in the worst-case scenario.

This modification of DNS data implies a fluctuation in the domain-to-IP-address associations. Such fluctuations can be observed and may trigger alerts. It is therefore difficult for an attacker to go completely unnoticed. But since these fluctuations can also occur legitimately, for example when a customer changes provider, the supervisor must remain extremely vigilant in order to make the right diagnosis.
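
Such monitoring can be sketched very simply: periodically resolve the domains you care about and compare the result with a stored baseline, raising an alert when the association changes. The snippet below uses only the Python standard library and is a conceptual illustration, not a production monitoring tool; a real supervisor would also track name servers, certificates and TTLs, and whitelist legitimate changes. The baseline address shown is a placeholder.

```python
# Conceptual sketch of DNS-change monitoring: compare current resolutions
# against a known-good baseline and flag any deviation for human review.
# A real supervisor would be far more thorough (NS records, certificates, TTLs).
import socket

# Baseline associations recorded at a trusted point in time (illustrative values).
baseline = {
    "example.org": {"93.184.216.34"},
}

def current_addresses(domain):
    infos = socket.getaddrinfo(domain, None, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

def check(domain):
    observed = current_addresses(domain)
    expected = baseline.get(domain, set())
    if observed != expected:
        # A change is not necessarily an attack (providers do change addresses),
        # which is why a human diagnosis remains necessary.
        print(f"ALERT: {domain} now resolves to {observed}, expected {expected}")
    else:
        print(f"OK: {domain} matches the baseline")

check("example.org")
```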

Institutions targeted

The attacks denounced by ICANN had two significant characteristics. First of all, they were active for a period of several months, which implies that the attacker was determined and well-resourced. Secondly, they effectively targeted institutional sites, which indicates that the attacker had a strong motivation. It is therefore important to take a close look at these attacks and understand the mechanisms the attackers used, in order to fix the vulnerabilities, most likely by reinforcing good practices.

ICANN’s promotion of the DNSSEC protocol raises questions. It clearly must become more widespread. However, there is no guarantee that these attacks would have been blocked by DNSSEC, nor even that they would have been harder to carry out. Additional analysis will be required to reassess the security of the DNS protocol and its databases.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Networks and Telecommunication Services Department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original article (in French) was published in The Conversation under a Creative Commons license.

noise

Without noise, virtual images become more realistic

With increased computing capacities, computer-generated images are becoming more and more realistic, but generating them remains very time-consuming. Tamy Boubekeur, a researcher specializing in 3D computer graphics at Télécom ParisTech, is working to solve this problem. He and his team have developed new technology that relies on noise-reduction algorithms and saves computing resources while delivering high-quality images.

 

Have you ever been impressed by the quality of an animated film? If you are familiar with video game cinematics or short films created entirely with computer-generated images, you probably have. If not, keep in mind that the latest Star Wars and Fantastic Beasts and Where to Find Them movies were not shot on a satellite superstructure the size of a moon or by filming real magical beasts. The sets and characters in these big-budget films were primarily created using 3D models of astonishing quality. One of the many examples of these impressive graphics is the demonstration by the team from Unreal Engine, a video game engine, at the Game Developers Conference last March. Working with Nvidia and ILMxLAB, they created a fictitious Star Wars scene using only computer-generated images for all the characters and sets.

 

To trick viewers, high-quality images are crucial. This is an area Tamy Boubekeur and his team at Télécom ParisTech specialize in. Today, most high-quality animation is produced using a specific type of computer-generated imagery: photorealistic rendering based on path tracing. This method begins with a 3D model of the desired scene, with its structures, objects and people. Light sources are then placed in the artificial scene: the sun outside, or lamps inside. Paths are then traced starting from the camera—what will be projected on the screen from the viewer’s vantage point—and moving towards the light sources. These are the paths light takes as it is reflected off the various objects and characters in the scene. Through these reflections, the changes in the light are associated with each pixel in the image.

“This principle is based on the laws of physics and Helmholtz’s principle of reciprocity, which makes it possible to ‘trace the light’ from the virtual sensor,” Tamy Boubekeur explains. Each time the light bounces off objects in the scene, the equations governing the light’s behavior and the properties of the modeled materials and surfaces define the path’s next direction. The propagation of the modeled light therefore makes it possible to capture all the changes and optical effects that the eye perceives in real life. “Each pixel in the image is the result of hundreds or even thousands of paths of light in the simulated scene,” the researcher explains. The final color of the pixel is then obtained by averaging the color responses from all of these paths.
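
The averaging step can be written in a few lines: each pixel’s color is the mean of many random path contributions, and its statistical noise shrinks slowly, roughly as one over the square root of the number of paths. The code below is a deliberately crude stand-in for a path tracer, with a made-up random “path contribution” function, used only to show why so many samples are needed.

```python
# Toy Monte Carlo pixel estimator: the pixel value is the average of many
# random path contributions. The "path" here is a crude random stand-in for
# a real path tracer, used only to illustrate the averaging and its noise.
import random

def path_contribution():
    """Pretend radiance carried by one random light path (arbitrary model)."""
    # Most paths bring a small contribution, a few bring a lot (e.g. when a
    # path happens to reach a bright light source directly).
    return 10.0 if random.random() < 0.02 else random.uniform(0.0, 1.0)

def render_pixel(samples_per_pixel):
    total = sum(path_contribution() for _ in range(samples_per_pixel))
    return total / samples_per_pixel   # the pixel is the mean of its samples

random.seed(0)
for spp in (16, 256, 4096):
    estimates = [render_pixel(spp) for _ in range(5)]
    spread = max(estimates) - min(estimates)
    print(f"{spp:5d} paths/pixel -> estimates "
          f"{['%.2f' % e for e in estimates]}, spread {spread:.2f}")
```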

Saving time without noise

The problem is, achieving a realistic result requires a tremendous number of paths. “Some scenes require thousands of paths per pixel and per image: it takes a week of computing to generate the image on a standard computer!” Tamy Boubekeur explains. This is simply too long and too expensive. A film contains 24 images per second. In one year of computing, less than two seconds of a film would be produced on a single machine. Enter noise-reduction algorithms—specifically those developed by the team from Télécom ParisTech. “The point is to stop the calculations before reaching thousands of paths,” the researcher explains. “Since we have not gone far enough in the simulation process, the image still contains noise. Other algorithms are used to remove this noise.” The noise alters the sharpness of the image and is dependent on the type of scene, the materials, lighting and virtual camera.

Research on denoising has flourished since 2011, and many algorithms now exist, based on different approaches. Competition is fierce in the quest to achieve satisfactory results. What is at stake? The programs’ capacity to reduce computing times while producing a noise-free final result. The Bayesian collaborative denoiser (BCD) technology, developed by Tamy Boubekeur’s team, is particularly effective in achieving this goal. Developed from 2014 to 2017 as part of Malik Boudiba’s thesis, the algorithms behind this technology are based on a distinctive approach.

Usually, noise-removal methods attempt to estimate the amount of noise present in a pixel from the properties of the observed scene—especially its visible geometry—in order to remove it. “We recognized that the properties of the scene being observed could not account for everything,” Tamy Boubekeur explains. “The noise also originates from areas not visible in the scene, from materials reflecting the light, from the semi-transparent matter the light passes through, or from the properties of the optics modeled inside the virtual camera.” A defocused background or a window in the foreground can create varying degrees of noise in the image. The BCD algorithm therefore only takes into account the color values associated with the hundreds of paths calculated before the simulation is stopped, just before these values are averaged into a pixel color. “Our model estimates the noise associated with a pixel based on the distribution of these values, analyzes similarities with other pixels, and removes the noise from all of them at once,” the researcher explains.
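
The underlying intuition, denoising from the per-pixel sample distributions rather than from scene geometry, can be sketched with a toy example: pixels whose sample histograms look alike are assumed to observe similar light and are averaged together. This is only a simplified illustration of the idea on synthetic data, not the Bayesian estimator actually used in BCD.

```python
# Toy sketch of distribution-based denoising: pixels whose per-pixel sample
# histograms are similar are averaged together. This illustrates the intuition
# behind BCD, not the actual Bayesian collaborative denoiser.
import numpy as np

rng = np.random.default_rng(1)

n_pixels, samples_per_pixel, n_bins = 64, 32, 8
true_radiance = np.where(np.arange(n_pixels) < 32, 0.2, 0.8)  # two flat regions

# Noisy Monte Carlo samples for each pixel (synthetic placeholder data).
samples = true_radiance[:, None] + rng.normal(0.0, 0.3, (n_pixels, samples_per_pixel))
noisy_image = samples.mean(axis=1)

# Per-pixel histograms of the raw samples, before averaging.
edges = np.linspace(samples.min(), samples.max(), n_bins + 1)
histograms = np.stack([np.histogram(s, bins=edges, density=True)[0] for s in samples])

denoised = np.empty(n_pixels)
for i in range(n_pixels):
    # Distance between sample distributions, not between pixel colours.
    dist = np.abs(histograms - histograms[i]).sum(axis=1)
    similar = dist <= np.quantile(dist, 0.3)      # keep the most similar pixels
    denoised[i] = noisy_image[similar].mean()      # average them all at once

print("mean abs error, noisy   :", round(float(np.abs(noisy_image - true_radiance).mean()), 3))
print("mean abs error, denoised:", round(float(np.abs(denoised - true_radiance).mean()), 3))
```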

A sharp image of Raving Rabbids

The BCD technology was developed as part of the PAPAYA project, funded under the French National Fund for the Digital Society. The project was carried out in partnership with Ubisoft Motion Pictures to define the key noise-reduction challenges for professional animation. The company was impressed by the BCD algorithms and integrated them into its graphics production engine, Shining, then used them to produce its animated series, Raving Rabbids. “They liked that our algorithms work with any type of scene, and that the technology can be integrated without causing any interference,” Tamy Boubekeur explains. The BCD denoiser does not require any changes in image calculation methods and can easily be integrated into pipelines and teams that already have well-established tools.

The source code for the technology has been published as open source on GitHub. It is freely available, particularly for animation professionals who prefer open technology over more rigid proprietary tools. An update to the code adds an interactive preview module that allows users to adjust the algorithm’s parameters, making it easier to optimize computing resources.

The BCD technology has therefore proven its worth and has now been integrated into several rendering engines. It offers access to high-quality image synthesis, even for those with limited resources. Tamy Boubekeur points out that a film like Disney’s Big Hero 6 contains approximately 120,000 images and required some 200 million hours of computing time and thousands of processors to be produced in a reasonable timeframe. For students and amateur artists, such technical resources are out of reach. Algorithms like those used in BCD offer them the hope of more easily producing very high-quality films. And the team at Télécom ParisTech is continuing its research to reduce the required computing time even further. Their objective: to develop new methods for distributing light-simulation computations across several low-capacity machines.

[divider style=”normal” top=”20″ bottom=”20″]

Illustration of BCD denoising a scene, before and after implementing the algorithm

 

competition

Coopetition between individuals, little-understood interactions

Mehdi Elmoukhliss, Institut Mines-Télécom Business School and Christine Balagué, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]C[/dropcap]oopetition is a concept from management science (especially strategy research), originally coined to describe situations in which organizations (companies, clubs, etc.) simultaneously cooperate and compete with one another, as paradoxical as that may seem. A recent article in The Conversation pointed to the potential role of coopetition in evolution, underscoring that it can be found in the animal kingdom (to explain the evolution of species) as much as in companies and organizations.

We would like to provide another perspective here by highlighting the fact that coopetition can be observed in relationships between individuals, which opens up a wide range of potential applications.

A few examples

A variety of situations can be considered as examples of coopetition between individuals. In companies, for example, how many colleagues cooperate, while knowing that only one of them will become the boss in the event of a promotion? When he was serving as Minister of the Economy under François Hollande while secretly preparing to run for president, was Emmanuel Macron not in coopetition with the President of the Republic, since he had to cooperate with his rival?

Relationships between individuals are rarely archetypal (purely cooperative or purely competitive). They are often mixed, hybrid, simultaneously cooperative and competitive. Inter-individual coopetition is even a hiring technique in human resource management: some recruiters interview candidates by asking them to work together on a project only to select certain candidates to continue in the interview process.

Inter-individual coopetition can also be seen in the scientific world, where researchers often cooperate with others to carry out a study, while competing with one another in terms of career or prestige. Online, a number of platforms (for crowdsourcing ideas for example) seek to promote cooperation between users while making them compete with one another to identify “the best contributors,” for example. Coopetition also occurs in the world of sports. In football or cycling, athletes must sometimes cooperate to win, while competing to become the “star” of the game or race. In basketball, the famous Shaquille O’Neal–Kobe Bryant duo helped the Lakers win three consecutive titles between 1999 and 2003, despite the rivalry between the two players.

And the rivalry between these players continues today.

But coopetition is not a sort of interdependence reserved for the “ruthless” worlds of business, politics, research, competition for ideas or competitive sports. Consider the example of mushroom lovers. Many of them communicate on forums or social networks. In these virtual communities, members exchange advice (for example opinions about the toxicity of mushrooms) as well as important information about locations where highly-coveted mushrooms grow. While amateur and experienced mycologists collaborate to identify zones of interest, the information exchanged is intentionally vague. Members indicate their geographic area but rarely specify the slope, altitude and even less so the GPS coordinates! The information they share is enough to help others without “letting the mushrooms” out of the bag.

A forgotten model

Coopetition, like cooperation and competition, appears to be an observable phenomenon in a wide range of social situations. It is not a new ideological ideal but rather a “forgotten” model for collective action. It is not unique to contemporary western societies either. Anthropologist Margaret Mead’s research showed that certain indigenous tribes are based on “varying” degrees of cooperation and competition.

Surprisingly, this possibility has received little research attention. As pointed out by Paul Chiambaretto and Anne-Sophie Fernandez or Julien Granata in The Conversation, this can be explained by a cultural approach, specific to the western world, anchored in philosophical views in which cooperation and competition are seen as opposites.

Further reading: Coopétition, moteur de l’évolution des espèces (Coopetition, a driving force for the evolution of species)

In social psychology – one major area for studies on cooperation and competition between individuals – Morton Deutsch’s research led to the development of social interdependence theory in 1949, which is now considered the reference theory on cooperation and competition between individuals. One of the assumptions of this structuralist theory is that mixed situations are common but of little theoretical interest, since they will always be guided by a dominant mechanism (cooperation or competition).

Deutsch adds that these situations are, in any event, sub-optimal. As a result, studies on cooperation and competition in psychology have primarily adopted an either/or approach. Yet the opposition assumed by Morton Deutsch has never been formally proven, and for many researchers in psychology this assumption should be challenged. Although this limitation was originally pointed out in the 1960s, several decades went by before social science researchers started working on the topic, showing how coopetition between individuals differs from the two traditional models.

What we know

Emerging research on inter-individual coopetition focuses primarily on companies and virtual platforms, which have been studied in laboratory experiments. This research shows that coopetition between individuals boosts their creativity in a variety of contexts, whether in face-to-face or online situations. Far from being counterproductive, this duality has certain benefits.

Research carried out in companies shows that inter-individual coopetition does not hinder learning in teams. Although little is known about how this particular organizational method impacts individuals, it has been shown that employees do not all react to coopetition in the same way: some easily accept the situation in which they find themselves and know how to “play the game” with great skill, while others find it more difficult and ultimately “choose a side” – cooperate or compete. Inter-individual coopetition can also create tension and governance issues, which may be resolved in part through a new management style better suited to “coopetitive” teams.

The risks of inter-individual coopetition

Despite the level of enthusiasm for these little-studied situations, the risks of inter-individual coopetition must not be ignored. It raises some important questions:

  • Does it not open the door to widespread suspicion, and to paranoia? Does coopetition between individuals not create an unhealthy atmosphere? How can tension and ambivalence be handled?
  • Is coopetition not conducive to conflicts of interest, which are harmful to team dynamics? Is it not a matter of paradoxical demands likely to give rise to anxiety and psychosomatic disorders? And what role does Machiavellianism play in these situations?
  • In what cases are the results of coopetition worse than those which would have been obtained through a purely cooperative and/or purely competitive approach?

In other words, the conditions for genuinely constructive, socially-positive coopetition must still be established, to ensure that it is not detrimental to individuals’ health or to group dynamics.

A radical change of perspective

Still, hybrid situations are common and in some cases they prove to be useful. For the philosopher Pierre Lévy, who evokes “competitive cooperation” and “cooperative competition”, inter-individual coopetition is even “the preferred way of organizing collective intelligence.” This promising new research area requires further studies in order to confirm the usefulness of coopetitive inter-individual systems by studying their benefits and potentially harmful effects in greater detail.

More fundamentally, the idea of coopetition between individuals proposes a radical change of perspective: competition is not the opposite of cooperation and these two types of interdependence can be combined. This sheds new light on how we function as individuals and in groups and suggests a more nuanced understanding of human relationships. It is exciting on an intellectual level and represents a potential source of innovation in fields such as management, education or digital technology.

[divider style=”normal” top=”20″ bottom=”20″]

Mehdi Elmoukhliss, PhD student in Management Sciences and expert in collective intelligence systems, Institut Mines-Télécom Business School and Christine Balagué, Professor and Head of the Smart Objects and Social Networks Chair, Institut Mines-Télécom Business School

The original version of this article (in French) was published on The Conversation and republished under a Creative Commons license. Read the original article.

 

MOx

MOx Strategy and the future of French nuclear plants

Nicolas Thiollière, a researcher in nuclear physics at IMT Atlantique, and his team are assessing various possibilities for the future of France’s nuclear power plants. They seek to answer the following questions: how can the quantity of plutonium in circulation in the nuclear cycle be reduced? What impact will the choice of fuel — specifically MOx — have on nuclear plants? To answer these questions, they are using a computer simulator that models different scenarios: CLASS (Core Library for Advanced Scenario Simulation).

 

Today, the future of French nuclear power plants remains uncertain. Many reactors are coming to the end of their roughly forty-year design lifespan, and new proof-of-concept studies must be carried out to extend their operating life. To determine which options are viable, Nicolas Thiollière, a researcher at IMT Atlantique’s Subatech laboratory, and his team are conducting nuclear scenario studies. In this context, they are assessing future options for France’s nuclear power plants.

Understanding the nuclear fuel cycle

The nuclear fuel cycle encompasses all the steps in the nuclear energy process, from uranium mining to the management of radioactive waste. UOx fuel, which stands for uranium oxide, represents roughly 90% of the fuel used in the 58 pressurized water reactors of French nuclear power plants. It consists of uranium enriched in uranium-235. After a complete cycle, i.e. after passing through the reactor, the irradiated fuel contains approximately 4% fission products (the matter used to produce energy), 1% plutonium and 0.1% minor actinides. In most countries, these components are not recycled; this is referred to as an open cycle.

France, however, has adopted a partially closed cycle, in which the plutonium is reused. The plutonium is therefore not considered waste, despite being the element with the highest radiotoxicity. In other words, it is the most hazardous of the nuclear cycle materials in the medium-to-long term, over thousands to millions of years. France’s plutonium recycling system is based on MOx fuel, which stands for “mixed oxide”. “MOx is fuel that consists of 5% to 8% plutonium produced during the UOx combustion cycle, supplemented with depleted uranium,” Nicolas Thiollière explains.

The use of this mixed fissile material slightly reduces the consumption of uranium resources. In French nuclear power plants, MOx fuel represents approximately 10% of total fuel—the rest is UOx. After a MOx irradiation cycle, the 3% to 5% of plutonium that remains is not considered waste and could theoretically be reused. In practice, however, it currently is not. Used MOx fuel must therefore be stored for processing, forming a strategic reserve of plutonium. “We estimate that there were approximately 350 tons of plutonium in the French nuclear cycle in 2018. The majority is located in used UOx and MOx fuel,” explains Nicolas Thiollière. Thanks to their simulations, the researchers estimate that with an open cycle—without recycling using MOx—there would be approximately 16% more plutonium in 2020 than is currently projected with a closed cycle.

The fast neutron reactor strategy

In the current pressurized water reactors, natural uranium must first be enriched: eight mass units of natural uranium are needed to produce one unit of enriched uranium. In the reactor, only 4% of the fuel’s mass undergoes fission and produces energy. Directly or indirectly fissioning the entire mass of the natural uranium would result in resource gains by a factor of 20. In practice, this involves multi-recycling the plutonium produced when uranium absorbs neutrons during irradiation, so as to incinerate it continuously. One of the possible industrial solutions is the use of Fast Neutron Reactors (FNRs), which rely on fast neutrons that fission plutonium more effectively, thus enabling it to be recycled several times.

Historically, the development of MOx fuel was part of a long-term industrial plan based on multi-recycling plutonium in FNRs. A completely different story is now unfolding. Although three FNRs were operated in France from the 1960s onwards (Rapsodie, Phénix and Superphénix), the decision by the Council of State in 1997 to permanently shut down Superphénix signaled the end of the expansion of FNRs in France. The three pioneer reactors were shut down, and no FNR has been operated since. However, the 2006 Act on the sustainable management of radioactive materials and waste revived the project by setting a goal of commissioning an FNR prototype by 2020. The ASTRID project, led by the CEA (the French Alternative Energies and Atomic Energy Commission), took shape.

Recently, funding for this reactor with its pre-industrial power level (approximately 600 megawatts, compared with 1 gigawatt for an industrial reactor) has been scaled down. The power of the ASTRID concept, significantly reduced to 100 megawatts, redefines its status and probably pushes the industrial deployment of FNRs back beyond 2080. “Without the prospect of deploying FNRs, the MOx strategy is called into question. The industrial processing of plutonium is a cumbersome and expensive process that yields limited gains in terms of inventory and resources if the MOx is only used in the current reactors,” Nicolas Thiollière observes.

In this context of uncertainty regarding the deployment of FNRs and as plutonium accumulates in the cycle, Nicolas Thiollière and his team are asking a big question. Under what circumstances can nuclear power plants multi-recycle (recycle more than once) plutonium using the current reactors and technology to stabilize inventory? In practice, major research and development efforts would be required to define a new type of fuel assembly compatible with multi-recycling. “Many theoretical studies have already been carried out by nuclear industry operators, revealing a few possibilities to explore,” the researcher explains.

Nuclear scenario studies: simulating different courses of action for nuclear power plants

Baptiste Mouginot and Baptiste Leniau, former researchers at the Subatech laboratory, developed the cycle simulator CLASS (Core Library for Advanced Scenario Simulation) from 2012 to 2016. This modeling tool can scientifically assess future strategies for the fuel cycle. It can be used to calculate and monitor the inventory and flow of materials over time across all the facilities of a nuclear fleet (fuel fabrication and reprocessing plants, power stations, etc.), based on hypotheses about the development of these facilities and the installed nuclear capacity.

As part of her PhD work, supervised by Nicolas Thiollière, Fanny Courtin studied how the quantity of plutonium in the cycle could be stabilized by 2100 by recycling it in the reactors of nuclear power plants. One constraint in the simulation was that all the reactors had to use current pressurized water technology. Based on this criterion, the CLASS tool ran thousands of simulations to identify possible strategies. “The condition for stabilizing the quantity of plutonium and minor actinides would be to have 40 to 50% of the pressurized water reactors dedicated to the multi-recycling of plutonium,” Nicolas Thiollière explains. “However, the availability of plutonium in these scenarios would also mean a steady decrease in nuclear capacity, to a level between 0 and 40% of the current capacity.” This effect is caused by the minor actinides, which are not recycled and therefore build up. The plants must incinerate plutonium to stabilize the overall inventory, but incinerating plutonium implies reducing the power plants’ capacity at an equivalent rate.

On these charts, each line represents a possible course of action. In purple, the researchers indicated the scenarios that would meet a mass stabilization condition for the plutonium and minor actinides in circulation (top). These scenarios imply reducing the thermal energy of the power plants over the course of the century (bottom).

 

The researchers also tested the condition of minimizing the inventory of plutonium and minor actinides. In addition to increasing the number of reactors used for multi-recycling, they showed that reducing the quantity of plutonium and minor actinides in the cycle would imply phasing out nuclear power within a few years. Reducing the stock of plutonium amounts to reducing the fuel inventory, which would mean no longer having enough to supply all the nuclear power plants. “Imagine you have 100 units of plutonium to supply 10 power plants. At the end of the cycle, you would only have 80 units remaining and would only be able to supply 8 plants. You would have to close 2. In recycling the 80 units, you would have even less in the output, and so on,” Nicolas Thiollière summarizes. In practice, it therefore seems unreasonable to invest major R&D efforts in recycling MOx without FNRs, given that this option implies phasing out the technology in the short term.
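
Nicolas Thiollière's back-of-the-envelope illustration can be turned into a short loop: if each cycle returns only 80% of the plutonium that was loaded, the number of plants that can still be fuelled shrinks cycle after cycle. The 100-unit starting stock, the 80% recovery rate and the 10-units-per-plant figure are simply the numbers from his quoted example.

```python
# Illustration of the shrinking-fleet argument from the quote above:
# each cycle returns only part of the plutonium loaded, so fewer and fewer
# plants can be supplied. Numbers are those of the quoted example.

plutonium = 100.0          # starting stock, in arbitrary "units"
units_per_plant = 10.0     # plutonium needed to supply one plant for a cycle
recovery_rate = 0.8        # fraction of the loaded plutonium left after a cycle

cycle = 0
while plutonium >= units_per_plant:
    plants_supplied = int(plutonium // units_per_plant)
    loaded = plants_supplied * units_per_plant
    cycle += 1
    print(f"Cycle {cycle}: {plants_supplied} plants supplied "
          f"with {loaded:.0f} units of plutonium")
    # Only part of the loaded plutonium remains usable afterwards;
    # the rest has been fissioned (incinerated) or turned into waste.
    plutonium = plutonium - loaded + loaded * recovery_rate

print(f"After {cycle} cycles, only {plutonium:.0f} units remain: "
      "not enough to supply a single plant.")
```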

The industrial feasibility of these options must first be validated by extensive safety studies. In the initial stages, however, the scenarios involving the stabilization of plutonium and minor actinides seem compatible with diversifying France’s electricity mix and rolling out renewable energy to replace nuclear sources. Industrial feasibility studies looking at both safety and cost are all the more valuable given the uncertainty surrounding the deployment of fast neutron reactors and the future of the nuclear sector. It is important to address these economic and safety uncertainties before deploying a strategy that would radically change France’s nuclear power plants.

Also read on I’MTech: What nuclear risk governance exists in France?

Article written for I’MTech (in French) by Anaïs Culot

gilets jaunes, yellow vests

Debate: Purchasing power and carbon tax, the obsolescence of political categories

Fabrice Flipo, Institut Mines-Télécom Business School

[divider style=”dotted” top=”20″ bottom=”20″]

[dropcap]W[/dropcap]hen it comes to social and ecological concerns, many dream of reconciling the two, but few have a solution. The “yellow vests” have just provided a reminder of this fact, as they do not identify with anything offered by organized groups, whether political, union-based or even community-oriented.

So what can be done? Are we destined for failure? The study of major political ideas can help provide a way forward.

With the “yellow vests,” the traditional questions about ways of life can be seen from an unexpected angle. The key demand is for purchasing power, but observations and surveys have also revealed concern about climate issues. What can be done? How can social and ecological issues be brought together?

“Being poor” in France and elsewhere

First let’s take a look at the key factors involved.

To begin with, let us remember that if we disregard redistribution through taxes, purchasing power is determined by two key elements: income and wealth. As for income, the richest 10% of the population in France earns around €300 billion, while the poorest 10% earn 10 times less. But this figure masks the slow salary growth for the majority of the population; those in management positions are the only ones who have seen significant increases in their income over time.

As far as wealth is concerned, there is even greater inequality: the poorest 50% of the population possesses 8% of all wealth while those who make up the richest 1% possess 17% of the wealth (and this figure rises to 50% when considering the wealthiest 10%). That explains the average wealth of €16,000 for unskilled workers. And this is only taking averages into consideration. Individual examples of success are even more striking.

Bernard Arnault earns €3.5 million a month for his work at LVMH, a salary comparable to that of footballer Kylian Mbappé, who earns €2 million a month. This means Bernard Arnault earns the equivalent of a monthly minimum wage salary every four minutes. He also possesses a fortune of €73 billion, which provides him with €300 million in dividends, roughly 100 times his monthly salary at LVMH. It could be argued that this is an extreme case, but it is a visible reality in France, where so many people are struggling just to make ends meet.

Furthermore, there is the problem of “necessities,” to use Marx’s expression, meaning items that are considered necessary to live. While inflation may be low, the basket price for items that are deemed essential is on the rise. The cost of digital technology, for example, has been added to this list.

The scale used by the French charity Secours Populaire Français illustrates this point: “being poor” in France corresponds to an ever-higher level of income; it is now defined as earning €1,118 a month, while the minimum net monthly salary is €1,150. Earning minimum wage in France now means earning only €32 more than what is considered the poverty line.

This threshold must be compared with that used to define belonging to the “global middle class” based on the same statistics as those used in France: this level is defined as between €4,000 and €6,000 per year, which works out to between €300 and €500 per month.

As economics Nobel Prize winner Amartya Sen has pointed out, poverty is a social construct, and on this issue the benchmarks remain largely national. Taken together, these two observations show that the majority of French people are caught between the two blades of a pair of scissors: the one that determines what they can earn, and the one that controls what they must spend.

1,000,000 tons of CO2 for Bernard Arnault

We will now try to express this information in terms of climate and energy.

Based on several studies, economist Jean Gadrey estimates, using “the means available,” that the richest 1% of the French population emit approximately 160 tons of carbon per person per year, compared to 4 tons for the poorest 10%. The poorest 10% of the population therefore emits 28 million tons, compared to 112 million tons for the richest 1% (based on a population of 70 million).
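For readers who want to retrace the aggregate figures, the arithmetic is simply per-capita emissions multiplied by the size of each group, using the population of 70 million cited above:

\[
\underbrace{0.10 \times 70 \times 10^{6}}_{\text{poorest 10\%}} \times 4~\text{t} = 28 \times 10^{6}~\text{t},
\qquad
\underbrace{0.01 \times 70 \times 10^{6}}_{\text{richest 1\%}} \times 160~\text{t} = 112 \times 10^{6}~\text{t}
\]

In other words, a group one-tenth the size of the poorest decile emits four times as much in total.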

Based on the same kind of calculation, Bernard Arnault alone would emit some 1,000,000 tons (if one minimum-wage salary is worth 4 tons of CO2).

This simple observation illustrates the futility of a carbon tax that is not, at the very least, indexed to income. Either the tax is high, and low-income households, meaning the majority of French households, cannot be made to pay it; or it is kept very low and has no impact on the climate. A recent report by ADEME (the French Environment and Energy Management Agency) shows that French people are committed to protecting the climate, but that before taking further action they ask that the effort be shared in a fair, democratic way. These opinions have remained stable over the last two surveys.

Surprisingly, the figures presented here, like those reported by Jean Gadrey using “the means available,” have not been at the center of the debate, nor have they been presented clearly. The “yellow vests” plainly see that those being asked to make the biggest sacrifices are those who most need their small quantity of CO2. Admittedly, it is still too much for the planet… but then what can be said about the others!

Strategies to be invented

So what can be done?

The current government has embraced a traditionally liberal argument: let the market sort itself out. In other words, the economy should be left alone so that it can “respond to demand” and help France prosper. Historically, this strategy has been partially successful: France possesses huge multinationals and remains one of the top-ranking world economies despite its small size, without being as financialized as the City of London. Yet there is a downside to this strategy: an ever-greater concentration of wealth.

Can people really improve their situation simply by “crossing the street” or by founding start-ups? Economist Thomas Piketty’s work has shown that this is clearly not the case: when there is little growth – meaning few possibilities to create wealth that could result in income – those with the largest fortunes benefit since they run the game.

A number of different solutions may be explored, from all sides of the political spectrum.

Those with the most liberal leanings will encourage the richest part of the population to decarbonize the economy, which is undoubtedly what Emmanuel Macron hoped to do by giving businesses and their owners more room for maneuver. This would require those stakeholders to be willing to take on the role, and to do so on a sufficient scale. Whatever we may think of the credibility of such a scenario, it has so far not been supported by any evidence.

Another scenario would be for major players, Bernard Arnault for one, to stop acting like rentiers, using and abusing their market position to keep new entrants out. This would mean putting an end to “laissez-faire,” but not necessarily the end of a liberal system: some liberals, such as Paul Hawken, Amory B. Lovins and L. Hunter Lovins in the classic Natural Capitalism, also consider that giant fortunes pose a threat to freedom. This was one of the motivations behind antitrust law in the United States, for example.

Those who are more conservative (including the Rassemblement National) will certainly be averse to opposing major interests and will refuse to change the social order as it stands. A more socialist approach would seek to make use of the State, whether directly through public expenditure (Keynesian), by controlling public companies, or by increasing the minimum wage. And we should remember that the current government chose not to raise the minimum wage: it prefers to use taxes, meaning the income of employees who are not on the minimum wage, to increase in-work benefits rather than wages themselves.

Yet the State alone will not be able to change people’s daily lives: reorganizing territories and deploying renewable energy require much greater efforts – setting up networks, training people to work with the equipment, etc.

Traditional political positions appear poorly suited to responding to ecological issues, as a study of the political ideas and events of recent decades makes clear. The “yellow vests” have demonstrated this by refusing the existing divisions. So aren’t they the ones who could help us determine where to go from here? An alliance of progressives, across historic divides, is the most plausible path to take.

[divider style=”dotted” top=”20″ bottom=”20″]

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

The original version of this article (in French) was published on the website of The Conversation France.

Also read on I’MTech

[one_half]

[/one_half][one_half_last]

[/one_half_last]

Imagination, imaginaire

Imagination: an architect and driving force of transitions

All technology starts with a vision, a tool created to meet one of society’s objectives. Its development path is shaped both by human projections and by the dynamics of the transformations it generates. It is therefore important to take the time to ask ourselves what we intended to do with digital technology and what we will do with it, to analyze the transformations this technology has already set in motion in the digital transition, and to work to build the world of tomorrow. In the book by Carine Dartiguepeyrou and Gilles Berhault entitled Un autre monde est possible – Lost in transitions?, Francis Jutand, Deputy President of IMT, raised the question of the role imagination plays in the current digital transition and described how important it is in defining our future. Upon the release of this book, I’MTech spoke with Francis Jutand to learn more.

 

Francis Jutand

How can we study a transformation as profound as the one generated by digital technology?

Francis Jutand: This is a true metamorphosis, the fourth in the history of humanity. A metamorphosis is characterized by an initial transformation period that is extremely fast and powerful, which can be referred to as the transition period. We rarely have the opportunity to study the conditions of a metamorphosis before the transition occurs, except in the case of artists and creators who sense its approach, or foresight experts who suspect its coming. The work of foresight experts takes place during this transition period, or better yet, this “prenatal” period. It is aimed at analyzing, understanding and sharing their findings in order to influence the path of development and, above all, to contribute to designing the world of tomorrow. Every transformation has causes, which means there are also early signs of its development and of the structures that will make it possible. The printing press, the encyclopedia and the scientific developments of the 17th and 18th centuries all paved the way for the industrial transformation. Electronics, telecommunications, computer science and media paved the way for the digital transformation that took place as they converged in 2000.

Why use imagination to study the digital transition?

FJ: It is impossible to master the dynamics of the transformation that is underway: before we even have time to see what will come of one innovation, others have emerged. This results in a divergence and a tipping point that is more Lamarckian than Darwinian in nature. We simply know that all activities and all individuals will be reached and transformed in the process. We are actors, but at the same time we are subject to the forces driving this change. The question that arises is: how can we anticipate and act now to design and influence the world of tomorrow? This world of tomorrow is shaped by the ideas at the origin of the transformation as well as by those that emerge as it progresses. It is in the convergence of these ideas that imagination can act as an architect and builder of this new world. The last transformation was the industrial one, and researchers like Pierre Musso at Télécom Paris have carefully analyzed the role imagination played in structuring industrial society and in creating the infrastructures of communication networks, services and content on which that metamorphosis was built. When a transformation begins, imagination changes. Digital imagination cannot be regarded as a mere extension of industrial imagination.

Why is the industrial imagination insufficient in explaining the digital metamorphosis underway?

FJ: The industrial imagination is above all based on processes and rational models. It is the mindset that takes a complex problem, cuts it into smaller pieces, clearly defines the steps needed to resolve each one and creates an assembly design to make it all work. It relies on methods of design, description, fragmentation, task automation, deployment and monitoring, structured around successive phases: analysis, modeling, simulation, decision-making, implementation, feedback and adaptation. It is extremely effective, yet this type of imagination and its methods have reached their limits and are now being exhausted by the new complexities of digital technology. This is primarily because the process is slow: it takes years to carry out a large project or to design infrastructures and large-scale information systems. The industrial imagination was successful for large systems: nuclear energy, aeronautics, space, transportation systems and many types of networks… Yet it cannot withstand the complexity and acceleration of the digital world. It is based on a rationality and an efficiency that seek to minimize the involvement of humans, who are seen as cost factors, and it therefore promotes automation to achieve performance, to the detriment of human development. This approach has now reached its limits in a context of new social and environmental issues and of new generations seeking to develop their individuality rather than fit into a system. Our society must establish a new projection that will allow us to solve new problems. This is already being done, and it will continue to develop as we pursue a new form of imagination.

What characterizes this new digital imagination?

FJ: We need to understand that this imagination thrives, on the one hand, on the development of science and digital technology and, on the other, on the development of the consumer society, which fostered the phenomenon of individuation by emphasizing the value of personality and of desires that must be expressed and satisfied. This individuation, accelerated by consumer society, matured further with the development of networks. The individual has therefore taken on an increasingly important role: no longer existing merely as part of a community or a class, but as an autonomous entity capable of becoming personally involved in an activity and of defining and adopting their own positions. In this sense the digital imagination shares common ground with the hippie movement and with hacker culture, in the sense of the hacker ethic, even though hackers are often mistakenly viewed as mere attackers. At different points in the development of the digital imagination, these two groups took a stand to demand that individuals be taken into consideration not as members of a consumer class, but as individualities able to work alone and in cooperation with others. All of this led to the creation of an imagination combining sharing with instant, global communication. In short: the creation of cooperative networks of individuals.

Today, the digital imagination offers a vision of the world in which individuals can act, experiment, share, cooperate and, in so doing, explore multiple answers aimed at providing solutions to problems. This is the open-source and start-up spirit, which relies on collective synchronization around common goals and values. It is a sort of inversion of architectures, organizations and decision-making methods. It also marks a transition from an economy of automation and efficiency that limits the human factor to a culture of effectiveness and performativity that relies on cooperative, associative and parallel exploration. This vision of progress relies on personal and collective experience. Finally, it is the power of a multitude of individuals searching for solutions through discussion and decision-making processes.

Does this form of imagination completely replace the industrial form that preceded it?

FJ: One form of imagination does not replace the previous one, it enriches it and adds new dimensions. Digital imagination governs new spaces and alters the industrial imagination, just like the industrial imagination transformed that of agriculture and traditional trades, which were also altered, but not destroyed. It is clear, however, that we can expect these new areas of human development taking shape through digital imagination to play an increasingly significant role in society. This form of imagination can go a long way, since it affects cognitive functions and permeates collective narratives. Science fiction, as both an artistic and projective activity, contributes to bringing the digital imagination into being and is playing a leading role in exploring the magnitude of the possible utopian and dystopian outcomes of digital technology.

Many works of science fiction lean toward dystopia. How would you explain this pessimism in the digital imagination?

FJ: The inner workings of society have always been shaped, on the one hand, by problems related to power, domination and money and, on the other, by a spiritual dimension. The hubris, or excess, arising from the first aspect is not specific to the digital transformation. It can, however, lead to forms of pride or even perversity that could influence developments in digital technology. We therefore see positions of domination that use digital tools to undermine democracy and privacy, and a transhumanism advocating a messianic hubris such as immortality. There is also a less conspicuous but very real phenomenon at work as new sources of inequality deepen existing ones, as rural areas are neglected in favor of urban issues, and even in the threat of deterritorialization. These kinds of developments can cause us to lose perspective and to want to opt out, leaving behind the collective interests of the human project. Today, one of the ways we can keep this type of hubris in check is to prioritize ecological and global concerns and to focus on social justice.

If digital imagination is not sufficient in finding solutions to the challenges facing society, should we expect a new form of imagination to emerge?

FJ: In my opinion, within the context of the digital society, the digital transition will lay the foundations for a new transformation: that of cognition, which will be partially based on powerful artificial intelligence, once it is developed. This would be a form of co-development, a symbiotic relationship between humans and machines, and a capacity for intermediated collective individuation. And what forms of technology will make all this possible? We do not yet know. What is more, this coming transformation might not even be physical. Transitions also alter beliefs, approaches to spirituality, social structures, the nature of wealth… For now, we can only observe that a new form of imagination linked to cognition is beginning to develop. What remains to be seen is whether it will be a completely new form of imagination, or an extension of the digital imagination we have been building on for a few decades now.