Hydrogen: transport and storage difficulties

Does hydrogen hold the key to the coming energy transition? France and other countries believe so, and have chosen to invest heavily in the sector. Such spending will be needed to solve the many issues raised by this energy carrier. One of them is containment, since hydrogen tends to damage the metallic materials used to transport and store it. At Mines Saint-Étienne, Frédéric Christien and his teams are working on these questions.

In early September, the French government announced a €7 billion plan to support the hydrogen sector through 2030. With this investment, France has joined a growing list of countries that are betting on this strategy: Japan, South Korea and the Netherlands, among others.

Nevertheless, harnessing this element poses major challenges across the supply chain. Researchers have long known that hydrogen can damage certain materials, starting with metals. "Over a century ago, scientists noticed that when metal is plunged into hydrochloric acid [made of chlorine and hydrogen], not only is there a corrosive effect, but the material is embrittled," explains Frédéric Christien, a researcher at Mines Saint-Étienne [1]. "This gave rise to numerous studies on the impact of hydrogen on materials. Today, there are standards for the use of metallic materials in the presence of hydrogen. However, new issues are constantly arising, since materials evolve on a regular basis."

Recovering excess electricity produced but not consumed

For the last three years, the Mines Saint-Étienne researcher has been working on “power-to-gas” research. The goal of this new technology: recover excess electricity rather than losing it, by converting it to gaseous hydrogen through the process of water electrolysis.

Read more on I’MTech: What is hydrogen energy?

"Power-to-gas technology involves injecting the resulting hydrogen into the natural gas grid, in a small proportion, so that it can be used as fuel," explains Frédéric Christien. For individuals, this does not change anything: they may continue to use their gas equipment as usual. But when it comes to transporting gas, such a change has significant repercussions. Hence the question posed to materials durability specialists: what impact may hydrogen have on the steel that makes up the majority of the natural gas transmission network?

Localized deformation

In collaboration with CEA Grenoble (the French Atomic Energy Commission), the Mines Saint-Étienne researchers have spent three years working on a pipe sample in order to study the effect of the gas on the material, a grade of steel used in the natural gas grid.

The researchers observed a damage mechanism known as the "localization of plastic deformation." In concrete terms, they stretched the sample so as to replicate the mechanical stress that occurs in the field, due in particular to changes in pressure and temperature. Typically, such an operation lengthens the material in a diffuse and homogeneous way, up to a certain point. Here, however, under the effect of hydrogen, all the deformation is concentrated in one place, gradually embrittling the material in the same area, until it cracks. Under normal circumstances, the material's native oxide layer prevents hydrogen from penetrating the structure. But when this layer is ruptured by mechanical stress, the gas takes advantage of the breach to cause localized damage to the structure.

But it must be kept in mind that these findings correspond to laboratory tests. "We're a long way from industrial situations, which remain complex," says Frédéric Christien. "It's obviously not the same scale. And, depending on where it's located, the steel is not always the same – some pipes are lined while others aren't, and the same goes for heat treatments." Additional studies will therefore be needed to better understand the effect of hydrogen on the entire natural gas transport system.

The production conundrum

Academic research thus provides insights into the effects of hydrogen on metals under certain conditions. But can it go so far as to create a material that is completely insensitive to these effects? “At this point, finding such a dream material seems unrealistic,” says the Mines Saint-Étienne researcher. “But by tinkering with the microstructures or surface treatments, we can hope to significantly increase the durability of the metals used.”

While the hydrogen sector has big ambitions, it must first resolve a number of issues. Transport and storage safety is one such example, along with ongoing issues with optimizing production processes to make them more competitive. Without a robust and safe network, it will be difficult for hydrogen to emerge as the energy carrier of the future it hopes to be.

By Bastien Contreras.

[1] Frédéric Christien is a researcher at the Georges Friedel Laboratory, a joint research unit between CNRS and Mines Saint-Étienne.

Fuel cells in the hydrogen age

Hydrogen-powered fuel cells are recognized as a clean technology because they do not emit carbon dioxide. As part of the energy transition aimed at transforming our modes of energy consumption and production, the fuel cell therefore has a role to play. This article looks at the technologies, applications and prospects with Christian Beauger, a researcher specializing in materials science at Mines ParisTech.

How do fuel cells work?

Christian Beauger: Fuel cells produce electricity and heat from hydrogen and oxygen reacting at the heart of the system in electrochemical cells. At the anode, hydrogen is oxidized into protons and electrons – the source of the electric current – while at the cathode, oxygen reacts with the protons and electrons to form water. These two electrodes are separated by an electrolyte that is gas-tight and electronically insulating. As an ion conductor, it transfers the protons from the anode to the cathode. To build a fuel cell, several cells must be assembled into a stack. The nominal voltage depends on their number. Their size determines the maximum value of the current produced. The dimensions of the stack (size and number of cells) depend on what the device is to be used for.
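For reference, the half-reactions at play in an acid-electrolyte cell such as a PEMFC are the textbook ones (a general illustration, not a detail specific to the work discussed here):

$$\text{Anode: } \mathrm{H_2 \rightarrow 2\,H^+ + 2\,e^-} \qquad \text{Cathode: } \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \rightarrow H_2O}$$

Each individual cell typically delivers on the order of 0.6 to 0.8 V under load, so the stack voltage is roughly $U_{\text{stack}} \approx N_{\text{cells}} \times U_{\text{cell}}$: the number of cells sets the nominal voltage, while the cell area sets the current.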

Within the stack, the cells are separated by bipolar plates. Their role is to supply each cell with gas, to conduct electrons from one electrode to the other and to cool the system. A fuel cell produces as much electricity as heat, and the temperature of use is limited by the materials used, which vary according to the type of cell.

Finally, the system must be supplied with gas. Since hydrogen does not exist in its pure form in nature, it must be produced and then stored in pressurized tanks. The oxygen used comes from the air, supplied to the fuel cell by means of a compressor.

What is the difference between a fuel cell and a battery?

CB: The main difference between the two is in their design. A battery is an all-in-one technology whose size depends both on the power and on the desired autonomy (stored energy). In a fuel cell, by contrast, the power and energy aspects are separated. The available energy depends on the amount of hydrogen on board, stored in a tank, so the autonomy of a fuel cell system varies widely with the size of the tanks. The power, for its part, is linked to the size of the stack. Recharging times are also very different: a hydrogen-powered vehicle can be refueled in minutes, whereas a battery usually takes several hours to charge.
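To make this power/energy decoupling concrete, here is a minimal back-of-the-envelope sketch. Apart from hydrogen's lower heating value, all figures are illustrative assumptions, not numbers given in the interview:

```python
# Illustrative sketch of the fuel cell power/energy decoupling.
# Assumed values: system efficiency, vehicle consumption and stack sizing
# are rough, hypothetical figures for illustration only.

H2_LHV_KWH_PER_KG = 33.3       # lower heating value of hydrogen (kWh per kg)
SYSTEM_EFFICIENCY = 0.5        # assumed hydrogen-to-electricity efficiency
CONSUMPTION_KWH_PER_KM = 0.2   # assumed electric consumption of the vehicle

def autonomy_km(tank_kg: float) -> float:
    """Autonomy is set by the hydrogen on board (the tank), not the stack."""
    electric_kwh = tank_kg * H2_LHV_KWH_PER_KG * SYSTEM_EFFICIENCY
    return electric_kwh / CONSUMPTION_KWH_PER_KM

def stack_power_kw(n_cells: int, cell_area_cm2: float,
                   current_density_a_per_cm2: float = 1.0,
                   cell_voltage_v: float = 0.65) -> float:
    """Power is set by the stack: number of cells and their active area."""
    return n_cells * cell_area_cm2 * current_density_a_per_cm2 * cell_voltage_v / 1000

if __name__ == "__main__":
    print(f"5 kg tank  -> ~{autonomy_km(5):.0f} km of autonomy")
    print(f"10 kg tank -> ~{autonomy_km(10):.0f} km (double the tank, same stack)")
    print(f"330 cells of 300 cm2 -> ~{stack_power_kw(330, 300):.0f} kW of power")
```

Doubling the tank doubles the range without touching the stack; increasing the power means adding or enlarging cells, regardless of the tank.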

What are the different types of fuel cells?

CB: There are five major types that differ according to the nature of their electrolytes. Alkaline fuel cells use a liquid electrolyte and have an operating temperature of around 70°C. With my team, we are working on low-temperature fuel cells (80°C) whose electrolyte is a polymer membrane; these are called PEMFCs. PAFCs use phosphoric acid and operate between 150°C and 200°C. MCFCs have an electrolyte based on molten carbonates (600-700°C). Finally, those with the highest temperature, up to 1000 °C, use a solid oxide (SOFC), i.e. a ceramic electrolyte.

Their operating principle is the same, but they do not present the same problems. Temperature influences the choice of materials for each technology, but also the context in which they are used. For example, SOFCs take a long time to reach their operating temperature and therefore do not perform optimally at start-up. If an application requires a fast response, low-temperature technologies should be preferred. Overall, PEMFCs are the most developed.

What are the technical challenges facing fuel cell research?

CB: The objective is always to improve performance, i.e. conversion efficiency and lifespan, while reducing costs.

For the PEMFCs we are working on, the amount of platinum required for redox reactions should be reduced. Limiting the degradation of the catalyst support is another challenge. For this purpose, we are developing new supports based on carbon aerogels or doped metal oxides, with better corrosion resistance under operating conditions. They also provide a better supply of gas (hydrogen and especially air) to the electrodes. We have also recently initiated research on platinum-free catalysts in order to completely move away from this expensive material.

Another challenge is cooling. One option to make more efficient use of the heat produced or to reduce cooling constraints in the mobility sector is to be able to increase the operating temperature of PEMFCs. The Achilles heel here is the membrane. With this in mind, we are working on the development of new composite membranes.

The path for SOFCs is reversed. With a much higher operating temperature, there are fewer kinetic losses at the electrodes and therefore no need for expensive catalysts. On the other hand, the heavy constraints of thermomechanical compatibility limit the choice of SOFC constituent materials. The objective of the research is therefore to lower the operating temperature of SOFCs.

Where do we find these fuel cells today?

CB: PEMFCs are the most widespread and the most widely marketed, primarily in the mobility sector. The fuel cell vehicles offered by Hyundai or Toyota, for example, carry a fuel cell of about 120 kW. Electricity is generated on board by the fuel cell, hybridized with a battery. The battery preserves the fuel cell during strong accelerations. Indeed, although the fuel cell is capable of rapidly supplying the required energy, this driving phase accelerates the degradation of its core materials. Fuel cells can also be used as range extenders, as originally developed by SYMBIO for Renault electric vehicles. In this case, hydrogen takes over when the battery runs low. The fuel cell can then recharge the battery or power the electric motor.

Another example of commercialization is micro-cogeneration, which makes it possible to use the electricity and heat produced by the fuel cell. In Japan, the Ene Farm program, launched in 2009, has enabled tens of thousands of residential cogeneration systems to be marketed, built using PEMFC or SOFC stacks with a power output of around 700 W.

You mentioned the deterioration of materials and the preservation of fuel cells in use: what about their lifespan?

CB: Lifespan is mainly impacted by the stability of the materials, especially those found in the electrodes or that make up the membrane. The highly oxidizing environment of the cathode can lead to the degradation of the electrodes and, indirectly, of the membranes. The carbon base of PEMFC electrodes has a particular tendency to oxidize at the cathode. The platinum on the surface can then come away, agglomerate, or migrate towards the membrane to the point of degrading it. Ultimately, the target is 5,000 hours of operation for vehicles and 50,000 hours for stationary applications. We are currently at about two-thirds of those targets.

Read more on I’MTech: Hydrogen: transport and storage difficulties

What are the prospects for fuel cells now that hydrogen is receiving investment support?

CB: Applications for mobility are still at the heart of the issue. Interest is shifting towards heavy vehicles (buses, trains, light aircraft, ships) for which batteries are insufficient. Alstom’s iLint hydrogen train is being tested in Germany. The aeronautics sector is also conducting tests on small aircraft, but hydrogen-powered wide-body aircraft are not for the immediate future. PEMFCs have the advantage of offering a wide range of power levels to meet the needs of very different uses, from portable applications (computers, phones, etc.) to industrial ones.

Finally, it is difficult to talk about fuel cells without talking about hydrogen production. Hydrogen is also often discussed as a means of storing renewable energy. To do this, the reverse process to that used in the fuel cell is required: electrolysis. Water is dissociated into hydrogen and oxygen by applying a voltage between two electrodes.

Overall, it should be remembered that fuel cell deployment only makes sense if the method of hydrogen production has a low carbon footprint. This is one of the major challenges facing the industry today.

Interview by Anaïs Culot.


Turning exhaust gases into electricity, an innovative prototype

Jean-Paul Viricelle, a researcher in process engineering at Mines Saint-Étienne, has created a small high-temperature fuel cell composed of a single chamber. Placed at the exhaust outlet of the combustion process, this prototype could be used to convert unburned gas into energy.

Following the government’s recent announcements about the hydrogen industry, fuel cells are in the spotlight in the energy sector. Their promise is that they could help decarbonize industry and transportation. While hydrogen fuel cells are the stars of the moment, research on technologies of the future is also playing an important role. At Mines Saint-Étienne[1], Jean-Paul Viricelle has developed a new high-temperature fuel cell – over 600°C – called a mono-chamber. It is unique in that it can be fueled not only by hydrogen, but by a mixture of more complex gases, representative of the real mixtures at the exhaust outlet of a combustion process. “The idea is not to compete with a conventional hydrogen fuel cell, since we’ll never reach the same yield, but to recover energy from the mixtures of unburned gas at the outlet of any combustion process,” explains the researcher, who specializes in developing chemical sensors.

This would help reduce the amount of gaseous waste resulting from combustion. These compounds also contribute to air pollution. For example, unburned hydrocarbons could be recovered, cleaned and oxidized to generate electricity. Why hydrocarbons? Because they are composed of carbon and hydrogen atoms, the fuel of choice for conventional fuel cells. One of the most advanced studies on the concept of mono-chamber cells was published in 2007 by Japanese researchers who recovered a gaseous mixture at the exhaust outlet of a scooter engine. Even though it was not very powerful, the experiment proved the feasibility of such a system. Jean-Paul Viricelle has created a prototype that seeks to improve this concept. It uses a synthetic gaseous mixture, which is closer to the real composition of exhaust gases. It also optimizes the fuel cell’s architecture and materials to enhance its performance.

The inner workings of the cell

A fuel cell consists of three components: two electrodes, which are hermetically separated by an electrolyte. It is fueled by a gas (hydrogen) and air. Once inside the cell, an electrochemical reaction occurs at each electrode. This results in an exchange of electrons, which generates the electricity supplied by the cell. The architecture of conventional cells is often constrained, which prevents them from being reduced in size. To overcome these obstacles, Jean-Paul Viricelle has opted for a mono-chamber cell, composed of a single compartment. In this concept, hydrogen cannot be used directly as a fuel since it is too reactive when it comes into contact with air and could blow up the device! That's why the researcher fuels it with a gaseous mixture of hydrocarbons and air. What does this new structure change compared to a conventional cell? "The electrolyte no longer acts like a seal, as it does in a conventional cell, and serves only as an ionic conductor. But the cathode and the anode come into contact with all the reactants. So they must be perfectly selective so that they only react with one of the gases," explains Jean-Paul Viricelle.

In practice, a synthetic exhaust gas is sent over the cell. The electrochemical reaction that follows is standard: an oxidation at the anode and a reduction at the cathode generate electricity. This cell works at a high temperature (over 600°C), a condition that is essential for the electrolyte to transport ions. And since the gas mixture that fuels the cell gives off heat as it reacts, a small enough cell could be thermally self-sustaining: once started by an external heat source, it would no longer need one. Moreover, laboratory tests have shown a power density equivalent to that of conventional cells. However, significant gas flows must be sent over this 4 cm² demonstrator, resulting in a low energy conversion rate. Stacking mono-chamber cells, rather than using a single cell as in this prototype, could help resolve this problem.

A wide range of applications

For now, markets are more conducive to low-temperature fuel cells in order to prevent the devices from overheating. Nevertheless, the concept developed by Jean-Paul Viricelle presents a number of benefits. It opens the door to new geometries, such as placing both electrodes on the same surface. Such design flexibility facilitates a move towards miniaturization. Small high-temperature fuel cells could, for example, power industrial microreactors. The cells could also be integrated at the exhaust outlet of an engine to convert unburned hydrocarbons into electricity, as in the Japanese experiment. In this case, the energy recovered would power electronic devices and other sensors within the vehicle. More broadly, this energy conversion device could help respond to efficiency issues for any combustion system, including power plants. Despite all this, mono-chamber fuel cells remain concepts that have not made their way out of research laboratories. Why is this? Up to now, there has been a greater focus on hydrogen production than on energy recovery.

By Anaïs Culot.

[1] Jean-Paul Viricelle is the director of the Georges Friedel Laboratory, a joint research unit between Mines Saint-Étienne and CNRS.


Digital technology, the gap in Europe’s Green Deal

Fabrice Flipo, Institut Mines-Télécom Business School


Despite the Paris Agreement, greenhouse gas emissions are currently at their highest. Further action must be taken in order to stay under the 1.5°C threshold of global warming. But thanks to the recent European Green Deal aimed at reaching carbon neutrality within 30 years, Europe now seems to be taking on its responsibilities and setting itself high goals to tackle contemporary and future environmental challenges.

The aim is to become "a fair and prosperous society, with a modern, resource-efficient and competitive economy". This should make the European Union a global leader in the field of the "green economy", with citizens placed at the heart of "sustainable and inclusive growth".

The deal’s promise

How can such a feat be achieved?

The Green Deal is set within a long-term political framework covering energy efficiency, waste, eco-design, the circular economy, public procurement and consumer education. Thanks to these objectives, the EU aims to achieve the long-awaited decoupling:

“A direct consequence of the regulations put in place between 1990 and 2016 is that energy consumption has decreased by almost 2% and greenhouse gas emissions by 22%, while GDP has increased by 54% […]. The percentage of renewable energy has gone from representing 9% of total energy consumption in 2005 to 17% today.”

With the Green Deal, the aim is to continue this effort via ever-increasing renewable energy, energy efficiency and green products. The textile, building and electronics sectors are now at the center of attention as part of a circular economy framework, with a strong focus on repair and reuse, driven by incentives for businesses and consumers.

Within this framework, energy efficiency measures should reduce our energy consumption by half, with the focus on energy labels and the savings they have made possible.

According to the Green Deal, the increased use of renewable energy sources should enable us to bring the share of fossil fuels down to just 20%. The use of electricity will be encouraged as an energy carrier, and 80% of it should be renewable by 2050. Energy consumption should be cut by 28% from its current levels. Hydrogen, carbon storage and various processes for the chemical conversion of electricity into fuels will be used in addition, increasing the capacity and flexibility of storage.

In this new order, a number of roles have been identified: on one side, the producers of clean products, and on the other, the citizens who will buy them. In addition to this mobilization of producers and of consumers, national budgets, European funding and “green” (private) finance will commit to the cause; the framework of this commitment is expected to be put in place by the end of 2020.

Efficiency, renewable energy, a sharp decrease in energy consumption, promises of new jobs: remember that back in the 1970s, EDF was simply planning to build 200 nuclear power plants by the year 2000, following a mindset that equated consumption with progress. Everything now suggests that the supporters of the Negawatt scenario (NGOs, ecologists, networks of committed local authorities, businesses and workers) have won a battle that is historic, cultural (in terms of values and awareness of what is at stake) and political (backed by official texts).

The trajectory of greenhouse gas emissions in a 1.5°C global warming scenario.

 

According to the deal, savings made on fossil fuels could reach between €150 billion and €200 billion per year, to which can be added avoided health costs of around €200 billion a year and the prospect of exporting "green" products. Finally, millions of jobs may be created, with retraining mechanisms for the sectors that are most affected and support for low-income households.

Putting the deal to the test

A final victory? On paper, everything points that way.

However, it is not as simple as it seems, and the EU itself recognizes that improvements in energy efficiency and the decrease in greenhouse gas emissions are currently stalling.

This is due to the following factors, in order of importance: economic growth; the decline in energy efficiency savings, especially in the airline industry; the sharp increase in the number of SUVs; and finally, the upward adjustment of real vehicle emissions following the "dieselgate" scandal (+30%).

More seriously, the EU’s net emissions, which include those generated by imports and exports, have risen by 8% during the 1990-2010 period.

Efficiency therefore has its limits and savings are more likely to be made at the start than at the end.

The digital technology challenge

According to the Green Deal, "Digital technologies are a critical enabler for attaining the sustainability goals of the Green deal in many different sectors": 5G, CCTV, the Internet of Things, cloud computing or AI. We have our doubts, however, as to whether that is true.

Several studies, including by the Shift Project, show that emissions from the digital sector have doubled between 2010 and 2020. They are now higher than those produced by the much-criticized civil aviation sector. The digital applications put forward by the European Green Deal are some of the most energy consuming, according to several case scenarios.

Can the increase in usage be offset by energy efficiency? The sector has seen tremendous progress, on a scale not seen in any other field. The first computer, the ENIAC, weighed 30 tons, consumed 150,000 watts and could not perform more than 5,000 operations per second. A modern PC consumes 200 to 300 W and offers the computing power of an early-2000s supercomputer that consumed 1.5 MW! Progress knows no bounds…

However, the absolute limit (the “Landauer limit”) was identified in 1961 and confirmed in 2012. According to the semiconductor industry itself, the limit is fast approaching in terms of the timeframe for the Green Deal, at a time when traffic and calculation power are increasing exponentially. Is it therefore reasonable to continue becoming increasingly dependent on digital technologies, in the hope that efficiency curves might reveal energy consumption “laws”?

Especially when we consider that the gains obtained in terms of energy efficiency have little to do with any shift towards more ecology-oriented lifestyles: the motivations have been cost, heat removal and the need to make sure our digital devices could be mobile so as to keep our attention at all times.

These limitations on efficiency explain the increased interest in more sparing use of digital technologies. The Conseil National du Numérique presented its roadmap shortly after Germany did. However, the Green Deal is stubbornly following the same path: one that relies on an imagined digital sector which has little in common with the realities of the sector.

Digital technologies, facilitating growth

Drawing from a recent article, the Shift Project sends a warning: "Up until now, rebound effects have turned out to exceed the gains brought by technological innovation." This conclusion has recently been confirmed once more.

For example, the environmental benefits of distance working have in fact been much smaller than those we were expecting intuitively, especially when not combined with other changes in the social ecosystem. Another example is that in its 2019 “current” scenario, the OECD predicted a threefold increase in passenger transport between 2015 and 2050, facilitated (and not impeded) by autonomous vehicles.

Digital technologies are a growth factor first and foremost, as Pascal Lamy, then head of the WTO, suggested when he stated that globalization is based on two innovations: the Internet and the shipping container. An increase in digital technologies will lead to more emissions. And if this is not the case, it will be because of a change in how we approach ecology, including digital technologies.

We are justified in asking the question of what it is the Green Deal is really trying to protect: the climate or the digital markets for big corporations?


Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology  at Institut Mines-Télécom Business School

This article is republished from The Conversation under the Creative Commons license. Read the original article (in French) here.


Datafarm: low-carbon energy for data centers

The start-up Datafarm proposes an energy solution for low-carbon digital technology. Within a circular economy system, it powers data centers with energy produced through methanization, by installing them directly on cattle farms.

 

When you hear about clean energy, cow dung probably isn’t the first thing that comes to mind. But think again! The start-up Datafarm, incubated at IMT Starter, has placed its bets on setting up facilities on farms to power its data centers through methanization. This process generates energy from the breakdown of animal or plant biomass by microorganisms under controlled conditions. Its main advantages are that it makes it possible to recover waste and lower greenhouse gas emissions by offering an alternative to fossil fuels. The result is green energy in the form of biogas.

Waste as a source of energy

Datafarm’s IT infrastructures are installed on cattle farms that carry out methanization. About a hundred cows can fuel a 500kW biogas plant, which is the equivalent of 30 tons of waste per day (cow dung, waste from milk, plants etc.). This technique generates a gas, methane, of which 40% is converted into electricity by turbines and 60% into heat. Going beyond the state of the art, Datafarm has developed a process to convert the energy produced through methanization…into cold!  This helps respond to the problem of cooling data centers. “Our system allows us to reduce the proportion of electricity needed to cool infrastructures to 8%, whereas 20 to 50% is usually required,” explains Stéphane Petibon, the founder of the start-up.

The heat output produced by the data centers is then recovered in an on-site heating system. This allows farmers to dry hay to feed their livestock or produce cheese. Lastly, farms no longer need fertilizer from outside sources since the residue from the methanization process can be used to fertilize the fields. Datafarm therefore operates within a circular economy and self-sufficient energy system for the farm and the data center.

A service to help companies reduce carbon emissions

A mid-sized biogas plant (500 kW) fueling the start-up’s data centers reduces CO2 emissions by 12,000 tons a year – the equivalent of the annual emissions of 1,000 French people. “Our goal is to offer a service for low-carbon, or even negative-carbon, data centers and to therefore offset the  greenhouse gas emissions of the customers who host their data with us,” says Stéphane Petibon.

Every four years, companies with over 500 employees (approximately 2,400 in France) are required to publish their carbon footprint, which is used to assess their CO2 emissions as part of the national environmental strategy to reduce the impact of companies. The question, therefore, is no longer whether they need to reduce their carbon footprint, but how to do so. As such, the start-up provides an ecological and environmental argument for companies who need to decarbonize their operations.  “Our solution makes it possible to reduce carbon dioxide emissions by 20 to 30 % through an IT service for which companies’ needs grow every year,” says  Stéphane Petibon.

The services offered by Datafarm range from data storage to processing.  In order to respond to a majority of the customers’ demand for server colocation, the start-up has designed its infrastructures as ready-to-use modules inserted into containers hosted at farms. An agile approach that allows them to build their infrastructures based on customers’ needs and prior to installation. The data is backed up at another center powered by green energy near Amsterdam (Netherlands).

Innovations on the horizon

The two main selection criteria for farms are the power of their methanization plant and their proximity to a fiber network. "The French regions have already installed fiber networks across a significant portion of their territory, but these networks have been neglected and are not in use. To activate them, we're working with the telecom operators who cover France," explains Stéphane Petibon. The first two infrastructures, in Arzal in Brittany and in Saint-Omer in the Nord department, meet all the criteria and will be put into use in September and December 2020 respectively. The start-up plans to host up to 80 customers per infrastructure and to have installed seven infrastructures throughout France by the end of 2021.

To achieve this goal, the start-up is conducting research and development on network redundancy  issues to ensure service continuity in the event of a failure. It is also working on developing an energy storage technique that is more environmentally-friendly than the batteries used by the data centers.  The methanization reaction can also generate hydrogen, which the start-up plans to store to be used as a backup power supply for its infrastructures. In addition to the small units, Datafarm is working with a cooperative of five farmers to design an infrastructure that will have a much larger hosting and surface capacity than its current products.

Anaïs Culot.

This article was published as part of Fondation Mines-Télécom’s 2020 brochure series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a brochure, conference-debates, and events to promote science in conjunction with IMT, this series explores the uncertainties and challenges of the digital and environmental transitions.


The worrying trajectory of energy consumption by digital technology

Fabrice Flipo, Institut Mines-Télécom Business School


In November, the General Council for the Economy, Industry, Energy and Technology (CGEIET) published a report on the energy consumption of digital technology in France. The study draws up an inventory of equipment and lists consumption, making an estimate of the total volume.

The results are rather reassuring, at first glance. Compared to 2008, this new document notes that digital consumption seems to have stabilized in France, as have employment and added value in the sector.

The massive transformations currently in progress (growth in video uses, “digitization of the economy”, increased platform use, etc.) do not seem to be having an impact on the amount of energy expended. This observation can be explained by improvements in energy efficiency and the fact that the increase in consumption by smartphones and data centers has been offset by the decline in televisions and PCs. However, these optimistic conclusions warrant further consideration.

61 million smartphones in France

First, here are some figures given by the report to help understand the extent of the digital equipment in France. The country has 61 million smartphones in use, 64 million computers, 42 million television sets, 6 million tablets and 30 million routers. Although these numbers are high, the authors of the report believe they have greatly underestimated the volume of professional equipment.

The report predicts that in coming years, the number of smartphones (especially among the elderly) will grow, the number of PCs will decline, the number of tablets will stabilize and screen time will reach saturation (currently at 41 hours per week).

Nevertheless, the report suggests that we should remain attentive, particularly with regard to new uses: 4K and then 8K video, cloud gaming via 5G, connected or driverless cars, the growing number of data centers in France, data storage, and more. A 10% increase in 4K video in 2030 alone would produce a 10% increase in the overall volume of electricity used by digital technology.

We believe that these reassuring conclusions must be tempered, to say the least, for three main reasons.

Energy efficiency is not forever

The first is energy efficiency. In 2001, the famous energy specialist Jonathan Koomey established that computer processing power per joule doubles every 1.57 years.

But Koomey’s “law” is the result of observations over only a few decades: an eternity, on the scale of marketing. However, the basic principle of digital technology has remained the same since the invention of the transistor (1947): using the movement of electrons to mechanize information processing. The main cause of the reduction in consumption is miniaturization.

Yet there is a minimum physical energy required to process (strictly speaking, to erase) one bit of information, known as "Landauer's principle". In technological terms, we can only get close to this minimum. This means that energy efficiency gains will slow down and then stop: the closer technology comes to this minimum, the more difficult progress will be. In some ways, this brings us back to the law of diminishing returns established by Ricardo two centuries ago for the productivity of land.
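To give an order of magnitude (a standard textbook figure, not one taken from the report), Landauer's bound at room temperature, $T \approx 300\ \mathrm{K}$, is

$$E_{\min} = k_B\,T\,\ln 2 \approx 1.38\times10^{-23}\ \mathrm{J/K} \times 300\ \mathrm{K} \times 0.693 \approx 3\times10^{-21}\ \mathrm{J}$$

per bit erased. Today's transistors dissipate several orders of magnitude more than this per operation, which is why efficiency gains are still possible, but bounded.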

The only way to overcome the barrier would be to change the technological paradigm: to deploy the quantum computer on a large scale, as its computing power is independent of its energy consumption. But this would represent a massive leap and would take decades, if it were to happen at all.

Exponential data growth

The second reason why the report’s findings should be put into perspective is the growth in traffic and computing power required.

According to the American IT company Cisco, traffic is currently increasing tenfold every 10 years. If we follow this "law", it will be multiplied by 1,000 within 30 years. Such data rates are currently impossible: the existing infrastructure (4G and copper networks) cannot handle them. 5G and fiber optics would make such a development possible, hence the current debates.

Watching a video on a smartphone requires digital machines – phones and data centers – to execute instructions to activate the pixels on the screen, generating and changing the image. The uses of digital technology thus generate a demand for computing power, that is, a number of instructions executed by the machines. The computing power required has no obvious connection with traffic. A simple SMS can just as easily trigger a few pixels on an old Nokia or a supercomputer, although of course the power consumption will not be the same.

In a document dating back a few years, the semiconductor industry developed another "law": the steady growth in the amount of computing power required on a global scale. The study showed that at this rate, by 2040 digital technology will require the total amount of energy produced worldwide in 2010.

This result applies to systems with the average performance profile of 2015, when the document was written. The study also considers the hypothesis of a global stock of equipment with an energy efficiency 1,000 times higher: the deadline would only be pushed back by 10 years, to 2050. If the entire stock of equipment reached the limit of "Landauer's principle", which is impossible, then by 2070 all of the world's energy (at 2010 levels) would be consumed by digital technology.

Digitization without limits

The report does not say that energy-intensive uses are limited to the practices of a few isolated consumers. They also involve colossal industrial investments, justified by the desire to use the incredible “immaterial” virtues of digital technology.

All sectors are passionate about AI. The future of cars seems destined to turn towards autonomous vehicles. Microsoft is predicting a market of 7 billion online players. E-sport is growing. Industry 4.0 and the Internet of Things (IoT) are presented as irreversible developments. Big data is the oil of tomorrow, etc.

Now, let us look at some figures. Strubell, Ganesh and McCallum have shown, for a common neural network used to process natural language, that training it emitted 350 tons of CO₂, equivalent to 300 round trips between New York and San Francisco. In 2016, Intel announced that the autonomous car would consume 4 petabytes of data per day. Knowing that in 2020 a person generates or transfers around 2 GB per day, this represents 2 million times more. The figure announced in 2020 is instead 1 to 2 TB per hour, which is 5,000 times more than individual traffic.

A surveillance camera records 8 to 15 frames per second. If the image is 4 MB, this means 60 MB/s, without compression, or 200 GB/hour, which is not insignificant in the digital energy ecosystem. The IEA EDNA report highlights this risk. The “volumetric video”, based on 5K cameras, generates a flow of 1 TB every 10 seconds. Intel believes that this format is “the future of Hollywood”!

In California, online gaming already consumes more power than is required for electric water heaters, washing machines, dishwashers, clothes dryers and electric stoves.

Rising emissions in all sectors

All this for what, exactly? This brings us to the third point. How does digital technology contribute to sustainable development? To reducing greenhouse gas emissions? To saving soils, biodiversity, etc.?

The Smart 2020 report, published in 2008, promised a 20% reduction in greenhouse gases by 2020 thanks to digital technology. In 2020, we see that this has not happened. The ICT sector is responsible for 3% of global greenhouse gas emissions, which is more or less what the report predicted. But for the other sectors, nothing has happened: while digital technology has spread widely, emissions are also on the rise.

However, the techniques put forward have spread: "smart" motors have become more widespread, the logistics sector relies heavily on digital technology and soon on artificial intelligence, not to mention the widespread use of videoconferencing, e-commerce and route-planning software in transport. Energy networks are controlled electronically. But the reductions have not materialized. On the contrary, no "decoupling" of emissions from economic growth is in sight, neither in terms of greenhouse gases nor of other parameters, such as the consumption of materials. The OECD predicts that material consumption will almost triple by 2060.

Rebound effect

The culprit, says the Smart 2020 report, is the “rebound effect”. This is based on the “Jevons paradox” (1865), which states that any progress in energy efficiency results in increased consumption.

A curious paradox. The different forms of "rebound effect" (systemic, etc.) are reminiscent of something we already know: they can take the form of productivity gains, as found for example in Schumpeter or even Adam Smith (1776).

A little-known article also shows that in the context of neoclassical analysis, which assumes that agents seek to maximize their gains, the paradox becomes a rule, according to which any efficiency gain that is coupled with an economic gain always translates into growth in consumption. Yet, the efficiency gains mentioned so far (“Koomey’s law”, etc.) generally have this property.

A report from General Electric illustrates the difficulty very well. The company is pleased that the use of smart grids allows it to reduce CO2 emissions and save money; the reduction in greenhouse gases is therefore profitable. But what is the company going to do with those savings? It makes no mention of this. Will it reinvest them in consuming more? Or will it focus on other priorities? There is no indication. The document shows that the general priorities of the company remain unchanged: it is still a question of "satisfying needs", which are obviously going to increase.

Digital technology threatens the planet and its inhabitants

Deploying 5G without questioning and regulating its uses will therefore open the way to all these harmful applications. The digital economy may be catastrophic for climate and biodiversity, instead of saving them. Are we going to witness the largest and most closely-monitored collapse of all time? Elon Musk talks about taking refuge on Mars, and the richest people are buying well-defended properties in areas that will be the least affected by the global disaster. Because global warming threatens agriculture, the choice will have to be made between eating and surfing. Those who have captured value with digital networks are tempted to use them to escape their responsibilities.

What should be done about this? Without doubt, exactly the opposite of what the industry is planning: banning 8K or, failing that, discouraging its use; reserving AI for restricted uses with a strong social or environmental utility; drastically limiting the power required by e-sport; not deploying 5G on a large scale; ensuring a restricted and resilient digital infrastructure with universal access that allows low-tech uses and low consumption of computing and bandwidth to be maintained; favoring mechanical systems or providing for disengageable digital technology, so as not to render "backup techniques" inoperable. Becoming aware of what is at stake. Waking up.


This article benefited from discussions with Hugues Ferreboeuf, coordinator of the digital component of the Shift Project.

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

This article was republished from The Conversation under the Creative Commons license. Read the original article here.


This article was published as part of Fondation Mines-Télécom’s 2020 series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a monitoring brochure, conferences and debates, and science outreach in coordination with IMT, this series examines the uncertainties and issues surrounding the digital and environmental transitions.

Find out more on the Fondation Mines-Télécom website



Astrid: a nuclear project goes up in smoke

The abandonment of the Astrid project marks a turning point for France’s nuclear industry. The planned nuclear reactor was supposed to be “safer, more efficient and more sustainable”, and therefore required significant funding. Stéphanie Tillement, a researcher at IMT Atlantique, has studied how Fukushima impacted the nuclear industry. Her work has focused in particular on the rationale for abandoning the Astrid project, taking into account the complicated history of nuclear energy and how it has evolved in the public and political spheres.

 

Since the early days of nuclear energy, France has positioned itself as a global leader in terms of both research and energy production. In this respect, the abandonment of the Astrid project in August 2019 marked a move away from this leading position. Astrid (Advanced Sodium Technological Reactor for Industrial Demonstration) was supposed to be France’s first industrial demonstrator for what are referred to as “4th-generation” reactors. The selected technology was the sodium-cooled fast neutron reactor (FNR). At present, nuclear power in France is supplied by 58 second-generation pressurized water reactors, which operate with “slowed-down” neutrons. As an FNR, Astrid held the promise of a more sustainable form of nuclear energy – it was supposed to be able to use the depleted uranium and plutonium resulting from the operation of current plants as a fuel source, meaning it would consume much less natural uranium.

As part of the AGORAS research project, IMT Atlantique researcher Stéphanie Tillement studied the impact of the Fukushima accident on the world of nuclear energy. This led her to study the Astrid project, and in particular the many challenges it encountered. “We ruled out the link with Fukushima early on,” says the researcher. “The problems Astrid ran into are not related to a paradigm shift as a result of the catastrophe. The reasons it was abandoned are endogenous to the industry and its history.” And financial reasons, though by no means negligible, are not enough to explain why the project was abandoned.

A tumultuous history

In the 2000s, the United States Department of Energy launched the Generation IV International Forum to develop international cooperation on new concepts for nuclear reactors. Out of the six concepts selected by this forum as the most promising, France focused on sodium-cooled reactors, a project which would be launched in 2010 under the name Astrid. The country preferred this concept in particular because three French reactors using the technology had already been built. However, none of them had been used on an industrial scale and the technology had not advanced beyond the prototyping stage. The first such reactor, Rapsodie, was dedicated purely to research. The second, Phénix, was an intermediary step – it produced energy but remained an experimental reactor, far from an industrial scale. The third, Superphénix, was meant to be the first of a series of industrial-scale reactors of this new French technology. But from the beginning, it experienced shut-down periods following several incidents, and in 1997 Prime Minister Lionel Jospin announced that it would be shut down once and for all.

“This decision was widely criticized by the nuclear industry,” says Stéphanie Tillement, “who accused him of acting for the wrong reasons.” During the election campaign, Lionel Jospin had aligned himself with the Green party, who were openly in favor of decommissioning the power plant. “Its sudden shutdown would be taken very badly and destroy all hope for the use of such technology on an industrial scale. Superphénix was supposed to be the first in a long line, and some remember it as ‘a cathedral in a desert.'” This also reflected public opinion on nuclear energy: the industry was facing growing mistrust and opposition.

“For a lot of stakeholders in the nuclear industry, in particular the CEA (The French Atomic and Alternative Energy Commission), Astrid gave hope to the idea of reviving this highly promising technology,” explains the researcher. One of the biggest advantages was the possibility of a closed nuclear cycle, which would make it possible to recycle nuclear material from current power plants – such as plutonium – to use as a fuel source in the reactors. “In this respect, the discontinuation of the Astrid project may in the long run call into question the very existence of the La Hague reprocessing plant,” she says. This plant processes used fuel, a portion of which (plutonium in particular) is reused in reactors, in the form of MOX fuel. “Without reactors that can use reprocessed materials effectively, it’s difficult to justify its existence.”

Read more on I’MTech: MOx strategy and the future of French nuclear plants

“From the beginning, our interviews showed that it was difficult for the Astrid stakeholders to define the status of the project precisely,” explains Stéphanie Tillement. The concept proposed when applying for funding was that of an industrial demonstrator. The goal was therefore to build a reactor within a relatively short period of time, which could produce energy on a large scale based on technology for which there was already a significant amount of operating experience. But the CEA also saw Astrid as a research project, to improve the technology and develop new design options. This would require far more time. “As the project advanced,” adds the researcher, “the CEA increasingly focused on a research and development approach. The concept moved away from previous reactors and its development was delayed. When they had to present the roadmap in 2018, the project was at a ‘basic design’ stage and still needed a lot of work, as far as design was concerned, but also in terms of demonstrating compliance with nuclear safety requirements.”

An abandoned or postponed project?

Stéphanie Tillement confirms that, “the Astrid project, as initially presented, has been permanently abandoned.” Work on the sodium technology is expected to be continued, but the construction of a potential demonstrator of this technology will be postponed until the second half of the 21st century. “It’s a short-sighted decision,” she insists. Uranium, which is used to operate reactors, is currently inexpensive. So there’s no need to turn to more sustainable resources – at least not yet. But abandoning the Astrid project means running the risk of losing the expertise acquired for this technology. Though some research may be continued, it will not be enough to maintain industrial expertise in developing new reactors, and the knowledge in this sector could be lost. “The process of regaining lost knowledge,” she says, “is ultimately as expensive as starting from scratch.”

A short-term decision, therefore, relying instead on the EPR, a 3rd-generation reactor. But the construction of this type of reactor in Flamanville also faces its own set of hurdles. According to Stéphanie Tillement, “the challenges the Astrid project encountered are similar to those of the EPR project.” To secure funding for such projects, nuclear industry stakeholders seek to align themselves with the short timeframes of the political world. Yet such short deadlines are unrealistic and inconsistent with the time it takes to develop nuclear technology, all the more so for the first of a series. This creates problems for nuclear projects – they fall behind schedule and their costs rise dramatically. In the end, this makes politicians rather wary of funding this sort of project. “So nuclear energy gets stuck in this vicious circle,” says the researcher, “in a world that’s increasingly unfavorable to this sector.”

This decision also aligns with the government’s energy strategy. In broad terms, the State has announced that nuclear energy will be reduced to 50% of France’s energy mix, in favor of renewable energies. “The problem,” says Stéphanie Tillement, “is that we only have an outline. If there’s a political strategy on nuclear issues, it remains unclear. And there’s no long-term position – this is a way of leaving the decision to future decision-makers. But making no decision is a decision. Choosing not to pursue the development of technologies which require a long time to develop may implicitly mean abandoning the idea of any such development in the future. Which leads some to consider, rather cynically, that politicians must think that when we need it, we’ll buy the required technology from other powers (China, Russia) who have already developed it.”


Unéole on our roofs

We know how to use wind to produce electricity, but large three-bladed turbines do not have their place in urban environments. The start-up Unéole has therefore developed a wind turbine that is suitable for cities, as well as other environments. It also offers a customized assessment of the most efficient energy mix. Clovis Marchetti, a research engineer at Unéole, explains the innovation developed by the start-up, which was incubated at IMT Lille Douai.

 

The idea for the start-up Unéole came from a trip to French Polynesia, islands that are cut off from the continent and must therefore be self-sufficient in terms of energy. Driven by a desire to develop renewable energies, Quentin Dubrulle focused on the fact that such energy sources are scarce in urban areas. Wind, in particular, is an untapped energy source in cities. “Traditional, three-bladed wind turbines are not suitable,” says Clovis Marchetti, a research engineer at Unéole. “They’re too big, make too much noise and are unable to capture the swirling winds created by the corridors between buildings.”

Supported by engineers and researchers, Quentin Dubrulle put together a team to study the subject. Then, in July 2014 he founded Unéole, which was incubated at IMT Lille Douai.  Today the start-up proposes an urban turbine measuring just under 4 meters high and 2 meters wide that can produce up to 1,500 kWh per year. It is easy to install on flat roofs and designed to be used in cities, since it captures the swirling winds found in urban environments.

Producing energy with a low carbon footprint is a core priority for the project. This can be seen in the choice of materials and method of production. The parts are cut by laser, a technology that is well-understood and widely used by many industries around the world. So if these wind turbines have to be installed on another continent, the parts can be cut and assembled on location.

Another important aspect is the use of eco-friendly materials. “This is usually a second step,” says Clovis Marchetti, “but it was a priority for Unéole from the very beginning.” The entire skeleton of the turbine is built with recyclable materials. “We use aluminum and  recycled and recyclable stainless steel,” he says. “For the electronics, it’s obviously a little harder.”

Portrait of an urban wind turbine

The wind turbine has a cylindrical shape and is built in three similar levels with slightly curved blades that are able to trap the wind. These blades are offset by 60° from one level to the next. “This improves performance since the production is more uniform throughout the turbine’s rotation,” says Clovis Marchetti. Another advantage of this architecture is that it makes the turbine easy to start: no matter what direction the wind comes from, a part of the wind turbine will be sensitive to it, making it possible to induce movement.

 

Photograph of the urban wind turbine proposed by Unéole.

 

To understand how a wind turbine works, two concepts of aerodynamics are important: lift and drag. In the former, a pressure difference diverts the flow of air and therefore exerts a force. “It’s what makes planes fly for example,” explains Clovis Marchetti. In the latter, the wind blows on a surface and pushes it. “Our wind turbine works primarily with drag, but lift effects also come into play,” he adds. “Since the wind turbine is directly pushed by the wind, its rotational speed will always be roughly equal to the wind speed.”
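For reference (a textbook aerodynamics relation, not a figure from Unéole), the drag force exerted by wind of speed $v$ on a surface of area $A$ is

$$F_D = \tfrac{1}{2}\,\rho\,C_D\,A\,v^2,$$

where $\rho$ is the air density and $C_D$ the drag coefficient of the blade. Because a drag-driven rotor is simply pushed by this force, its blades cannot move faster than the wind itself, whereas the tips of lift-driven three-bladed rotors typically move several times faster than the wind.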

And that plays a significant role in terms of the noise produced by the wind turbine. Traditional three-bladed turbines turn faster than the wind due to lift. They therefore slice through the wind and produce a swishing noise. “Drag doesn’t create this problem, since the wind turbine vibrates very little and doesn’t make any noise,” he says.

An optimal energy mix

The urban wind turbine is not the only innovation proposed by Unéole. The central aim of the project is to combine potential renewable energies to find the optimal energy mix for a given location. This requires a considerable amount of modeling to analyze the winds on site: the neighborhood is modeled by taking into consideration all the details that affect wind, such as topographical relief, buildings and vegetation. Once the wind data has been obtained from Météo France, the team studies how the wind will behave in each situation on a case-by-case basis.

“Depending on relief and location, the energy capacity of the wind turbine can change dramatically,” says Clovis Marchetti. These wind studies allow them to create a map identifying the locations where the turbine will perform best and those where it will not work as well. “The goal is to determine the best way to use roofs to produce energy and optimize the energy mix, so we sometimes suggest that clients opt for photovoltaic energy,” he says.

“An important point is the complementary nature of photovoltaic energy and wind turbines,” says Clovis Marchetti. Wind turbines keep producing at night and are also preferable in winter, whereas photovoltaics perform better in summer. Combining the two technologies offers significant benefits in energy terms, for example more uniform production. “If we only install solar panels, we’ll have a peak of productivity at noon in the summer, but nothing at night,” he explains. That peak must therefore be stored, which is costly and still involves some losses. More uniform production would make it possible to supply energy on a regular basis without having to store it.
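As a purely illustrative sketch, with invented hourly figures rather than Unéole measurements, a few lines of Python show how adding a modest wind contribution on top of a solar profile flattens daily production and shrinks the midday surplus that would otherwise have to be stored:

# Toy illustration (hypothetical numbers, not Unéole data): mixing solar
# and wind output flattens the daily production profile.
import statistics

# Hypothetical normalized outputs in kW over the 24 hours of a summer day.
solar = [0]*6 + [0.2, 0.5, 0.9, 1.3, 1.6, 1.8, 1.8, 1.6, 1.3, 0.9, 0.5, 0.2] + [0]*6
wind = [0.6, 0.7, 0.7, 0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.3,
        0.3, 0.4, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8, 0.9, 0.8, 0.7, 0.6]

combined = [s + w for s, w in zip(solar, wind)]

# A lower standard deviation means a flatter profile, i.e. less surplus
# to store at the midday peak and fewer empty hours at night.
print("solar only  : mean %.2f kW, std %.2f" % (statistics.mean(solar), statistics.pstdev(solar)))
print("solar + wind: mean %.2f kW, std %.2f" % (statistics.mean(combined), statistics.pstdev(combined)))

The flatter combined curve is exactly the “more uniform production” described above.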

To this end, Unéole is working on a project for an energy mix platform: a system that includes their urban wind turbines, supplemented with a photovoltaic roof. Blending the two technologies would make it possible to produce up to 50% more energy than photovoltaic panels installed alone.

A connected wind turbine

“We’re also working on making this wind turbine connected,” says Clovis Marchetti. This would provide two major benefits. First, the wind turbine could directly report its production and working condition, which is important so that the owner can monitor the energy supply and ensure that the turbine is working properly. “If the wind turbine communicates the fact that it is not turning even though it’s windy, we know right away that action is required,” he explains.

In addition, a connected wind turbine could predict its production capacity based on weather forecasts. “A key part of the smart city of tomorrow is the ability to manage consumption based on production,” he says. Today, weather forecasts are fairly reliable up to 36 hours in advance, so it would be possible to adjust our behavior accordingly. Imagine, for example, that strong winds were forecast for 3 pm: it would then be better to wait until that time to launch an energy-intensive task.
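To make the idea concrete, here is a minimal Python sketch, with invented turbine parameters rather than Unéole’s real power curve, of how an hourly wind forecast could be turned into an estimate of the next day’s production:

# Rough sketch (hypothetical parameters, not Unéole's actual power curve):
# estimating tomorrow's production from an hourly wind-speed forecast.
RHO = 1.225         # air density, kg/m^3
AREA = 4.0 * 2.0    # frontal area of a ~4 m x 2 m vertical-axis turbine, m^2
CP = 0.2            # assumed power coefficient
RATED_W = 800.0     # assumed rated-power cap, W

def power_watts(wind_speed_ms):
    """Simplified power curve: P = 1/2 * rho * A * Cp * v^3, capped at rated power."""
    p = 0.5 * RHO * AREA * CP * wind_speed_ms ** 3
    return min(p, RATED_W)

# Hourly wind-speed forecast for the next 24 hours (m/s), e.g. built from Meteo France data.
forecast = [3, 3, 4, 4, 5, 6, 7, 8, 8, 7, 6, 5, 5, 6, 8, 9, 9, 8, 7, 6, 5, 4, 4, 3]

energy_kwh = sum(power_watts(v) for v in forecast) / 1000.0
print("Estimated production tomorrow: %.1f kWh" % energy_kwh)

With such an estimate in hand, an energy-intensive task can simply be scheduled for the windiest hours of the day.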

Guillaume Balarac

Guillaume Balarac, turbulence simulator

Turbulence is a mysterious phenomenon in fluid mechanics. Although it has been observed and studied for centuries, it still holds secrets that physicists and mathematicians strive to unlock. Guillaume Balarac is part of this research community. A researcher at Grenoble INP (at the LEGI Geophysical and Industrial Flows Laboratory), he uses and improves simulations to understand turbulent flows better. His research has given rise to innovations in the energy sector. The researcher, who has recently received the 2019 IMT-Académie des Sciences Young Scientist Award, discusses the scientific and industrial challenges involved in his field of research.

 

How would you define turbulent flows, which are your research specialty?

Guillaume Balarac: They are flows with an unpredictable nature. The weather is a good example for explaining this. We can’t predict the weather more than five days out, because the slightest disturbance at one moment can radically alter what occurs in the following hours or days. It’s the butterfly effect. Fluid flows in the atmosphere undergo significant fluctuations that limit our ability to predict them. This is typical of turbulent flows, unlike laminar flows, which are not subject to such fluctuations and whose state can be predicted more easily.

Apart from air mass movements in the atmosphere, where can turbulent flows be found?

GB: Most of the flows that we encounter in nature are actually turbulent. The movement of the oceans is described by turbulent flows, as is that of rivers. The movement of molten masses in the Sun generates a turbulent flow. This is also the case for certain biological flows in our bodies, like blood flow near the heart. Outside nature, these flows are found in rocket propulsion, the motion of wind turbines, hydraulic or gas turbines, and so on.

Why do you seek to better understand these flows?

GB: First of all, because we aren’t able to! It’s still a major scientific challenge. Turbulence is a rather unusual case: it has been observed for centuries. We’ve all watched a river or felt the wind. But the mathematical description of these phenomena still eludes us. The equations that govern turbulent flows have been known for two centuries, and the underlying mechanics have been understood since ancient times. And yet we aren’t able to solve these equations, and we remain ill-equipped to model and understand these events.

You say that researchers can’t solve the equations that govern turbulent flows. Yet, some weather forecasts for several days out are accurate…

GB: The iconic equation governing turbulent flows is the Navier-Stokes equation. That’s the one that has been known since the 19th century. No one is able to find a solution with pencil and paper. Proving that it always admits a unique, smooth solution is even one of the seven Millennium Prize Problems established by the Clay Mathematics Institute, and the person who solves it will be awarded $1 million. That gives you an idea of the magnitude of the challenge. To get around our inability to find this solution, we either try to approach it using computers, as is the case for weather forecasts, with varying degrees of accuracy, or we try to observe it. And finding a link between observation and equation is no easy task either!
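For reference, in its incompressible form the Navier-Stokes equation couples the velocity field u and the pressure p through ∂u/∂t + (u·∇)u = −(1/ρ)∇p + ν∇²u, together with the incompressibility condition ∇·u = 0, where ρ is the fluid density and ν its kinematic viscosity; the nonlinear term (u·∇)u is what makes the equation so hard to solve and the flows it describes so sensitive to small disturbances.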

Beyond this challenge, what can a better understanding of turbulent flows help accomplish?

GB: A wide range of applications require an understanding of these flows and the equations that govern them. Our ability to produce energy relies in part on fluid mechanics, for example. Nuclear power plants run on water and steam circuits. Hydroelectric turbines work with water flows, as do water current turbines. For wind turbines, it’s air flows. And those are just examples from the energy sector.

You use high-resolution simulation to understand what happens at the fundamental level in a turbulent flow. How does that work?

GB: One of the characteristics of turbulent flows is the presence of eddies. The more turbulent the flow, the more eddies of varying sizes it contains. The principle of high-resolution simulation is to define billions of points in the space in which the flow occurs and to calculate the fluid velocity at each of these points. This is called a mesh, and it must be fine enough to describe the smallest eddy in the flow. These simulations use the most powerful supercomputers in France and Europe. And even with all that computing power, we can’t simulate realistic situations, only academic flows in idealized conditions. These high-resolution simulations allow us to observe and better understand the dynamics of turbulence in canonical configurations.
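A standard order-of-magnitude argument shows why: the ratio between the size of the largest and smallest eddies grows roughly like Re^(3/4), where Re is the Reynolds number measuring how turbulent the flow is, so a three-dimensional mesh that resolves every eddy needs on the order of Re^(9/4) points. For the Reynolds numbers typical of industrial flows, often 10^6 or more, that requirement exceeds what even the largest supercomputers can handle.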


Simulation of turbulent flows on a marine turbine.

Along with using these simulation tools, you work on improving them. Are the two related?

GB: They are two complementary approaches. The idea behind that part of my research is to accept that we don’t have the computing power to simulate the Navier-Stokes equation in realistic configurations. So the question I ask myself is: how can this equation be modified so that it becomes possible to solve with our current computers, while ensuring that the prediction remains reliable? The approach is to resolve only the big eddies. Since we don’t have the power to build a mesh fine enough for the small eddies, we look for physical terms, mathematical expressions, that replace the influence of the small eddies on the big ones. The small eddies are therefore absent from this modeling, but their overall contribution to the flow dynamics is taken into account. This helps us improve simulation tools by making them able to address flows in realistic conditions.
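In the language of large-eddy simulation, the generic framework behind this approach, filtering the Navier-Stokes equation at the mesh scale introduces an extra “subgrid-scale” stress that represents the effect of the unresolved small eddies on the resolved large ones, and it is this term that has to be modeled. A classic closure, cited here purely as an illustration and not necessarily the model used at LEGI, is the Smagorinsky model, which treats the missing eddies as an additional eddy viscosity ν_t = (C_s Δ)² |S̄|, where Δ is the mesh size, S̄ the resolved strain rate and C_s a constant of roughly 0.1 to 0.2.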

Are these digital tools you’re developing used solely by researchers?

GB: I seek to carry out research that is both fundamental and application-oriented. For example, we worked with Hydroquest on the performance of water current turbines used to generate electricity. The simulations we carried out made it possible to assess the performance loss due to the support structures, which do not contribute to capturing energy from the flow. Our research led to patents for new designs, with a 50% increase in yield.

More generally, do energy industry players realize how important it is to understand turbulent flows in order to make their infrastructures more efficient?

GB: Of course, and we have a number of partners who illustrate industrial interest in our research. For example, we’ve adopted the same approach to improve the design of floating wind turbines. We’re also working with General Electric on hydroelectric dam turbines. These hydraulic turbines are increasingly operated far from their optimal operating point in order to mitigate the intermittence of renewable solar or wind energy. In these conditions, hydrodynamic instabilities develop, which have a significant effect on the machines’ performance. So we’re trying to optimize the operation of these turbines to limit the loss of yield.

What scientific challenges do you currently face as you continue your efforts to improve simulations and our understanding of turbulent flows?

GB: At the technical level, we’re trying to improve our simulation codes to take full advantage of advances in supercomputers. We’re also trying to improve our numerical methods and models to increase our predictive capacity. For example, we’re now trying to integrate learning tools to avoid simulating small eddies and save computing time. I’ve started working with Ronan Fablet, a researcher at IMT Atlantique, on precisely this topic. Then there’s the huge challenge of ensuring the reliability of the simulations carried out. As it stands now, if you give a simulation code to three engineers, you’ll end up with different models. This is due to the fact that the tools aren’t objective, and a lot depends on the individuals using them. So we’re working on mesh and simulation criteria that are objective. This should eventually make it possible for industry players and researchers to work from the same foundations and better understand one another when discussing turbulent flows.

 

Fukushima: 8 years on, what has changed in France?

Fukushima was the most devastating nuclear disaster since Chernobyl. The 1986 disaster led to radical changes in international nuclear governance, but has the Japanese catastrophe had the same effect? This is what the AGORAS project is trying to find out. IMT Atlantique, the IRSN, Mines ParisTech, Orano and SciencesPo are all working on the AGORAS project, which aims to understand the impact of Fukushima on France’s nuclear industry. Stéphanie Tillement, a sociologist at IMT Atlantique, explains the results of the project, which is coming to an end after six years of research.

 

Why do we need to know about the consequences of a Japanese nuclear incident in France?

Stéphanie Tillement: Fukushima was not just a shock for Japan. Of course, the event affected every region where nuclear energy is an important part of energy production, such as Europe, North America and Russia, but it also affected countries with less nuclear power. Fukushima called into question the safety, security and reliability of nuclear power plants. Groups strongly involved in the industry, such as nuclear operators, counter-experts, associations and politicians, were all affected by the event. We therefore expected Fukushima to have a strong impact on nuclear governance. There is also another, more historical, reason: both the Chernobyl and Three Mile Island accidents had an impact on the organization of the nuclear industry, so Fukushima could be part of this trend.

How did Chernobyl and Three Mile Island impact the industry?

ST: The consequences of nuclear disasters are generally felt 10 to 20 years after the event itself. In France, Chernobyl contributed to the 2006 Nuclear Safety and Transparency Act, which marked a major change in the nuclear risk governance system. This law notably led to the creation of the French Nuclear Safety Authority, ASN; a few years earlier, the French Radioprotection and Nuclear Safety Institute, IRSN, had been created. The 2006 law still regulates the French nuclear industry today. The Three Mile Island disaster, for its part, led the industry to question the role of people in these complex systems, notably in terms of human error. This resulted in major changes in human-computer interfaces within nuclear installations and in the understanding of human error mechanisms.

Has the Fukushima accident led to similar changes?

ST: The disaster was in 2011; it’s not even been 10 years since it happened. However, we can already see that Fukushima will probably not have the same effect in France as the other accidents. Rather than criticizing the French system, the industry’s analysis of Fukushima has emphasized the benefits of France’s current mode of governance. Although technical aspects have undergone changes, particularly regarding the complementary safety assessments, the relationships between nuclear operators, ASN and IRSN have not changed since Fukushima.

Why has this disaster not considerably affected the French mode of governance?

ST: At first, the French nuclear industry thought that a disaster like Fukushima was unlikely to happen in France, as the Japanese power plant was managed in a completely different way. In Japan, several operators share the country’s nuclear power plants. When analyzing the crisis management, the post-accident report showed that the regulator’s independence was not enforced and that there was collusion between the government, the regulators and the operators. In France, the Nuclear Safety and Transparency Act strictly regulates the relationships between nuclear industry players and guarantees the independence of the regulatory authority. This is a strength of the French governance model that is recognized internationally. In addition, French nuclear power plants are managed by a single operator, EDF, which runs a standardized fleet of 58 reactors. The governance issues in Japan reassured French operators, as they confirmed that legally enforcing the independence of the regulatory authority was the right thing to do.

How did the anti-nuclear movement respond to this lack of change?

ST: During our investigations into Fukushima, we realized that the accident did not create any new anti-nuclear movements or opinions. Opposition already existed. There is no denying that the event gave anti-nuclear organizations, collectives and experts some material, but it did not radically change their way of campaigning or their arguments. This again shows how Fukushima did not cause major changes. The debate surrounding the nuclear industry is still structured in the same way as it was before the disaster.

Does that also mean that there have been no political consequences post-Fukushima?

ST: No, and that’s also one of the findings of the AGORAS project. Recent political decisions on nuclear sector strategy have mainly been made according to processes established before the Fukushima accident. For example, the cancellation of the ASTRID project was not due to a radical political change in the nuclear sector, but rather to economic arguments and a lack of political will to tackle the subject. Clearly, politicians do not want to tackle these issues, as the decisions they make only have an impact 10, 20 or even 30 years later, which does not fit with their terms of office. Political turnover also means that very few politicians know enough about the subject, which raises questions about the government’s ability to get involved in nuclear policy, and therefore in energy policy.

Read on I’MTech: What nuclear risk governance exists in France?

Your work suggests that there has been almost no change in any aspect of nuclear governance…

ST: The AGORAS project started by asking the question: did Fukushima cause a change in governance in the same way as the accidents that preceded it? If we look at it from this perspective, our studies say no, for all the reasons that I’ve already mentioned. However, we need to put this into context. Many things have changed, just not in the same radical way as they did after Chernobyl or Three Mile Island. Among these changes is the modification of certain technical specifications for infrastructure. For example, one of the reasons why ASN called on EDF to review the welding of its EPR reactor was the technical requirements decided on following Fukushima. There have also been changes in crisis management and post-accident management.

How have we changed the way we would manage this type of disaster?

ST: Following Fukushima, a rapid response force for nuclear accidents (FARN) was created in France to manage the emergency phase of an accident. Changes were also made to the measures taken during a crisis, so that civil security services and prefects can act more quickly. The most notable changes have been in the post-accident phase. Historically, accident preparation measures were mainly focused on the emergency phase, so the different roles are well defined there. However, Fukushima showed that managing the after-crisis was just as important. What is unique about a nuclear accident is that it has extremely long-term consequences. Yet in Fukushima, once the emergency phase was over, the organization became less defined: no one knew who was responsible for controlling food consumption, soil contamination or urban planning. The local information commissions (CLIS) have therefore worked with nuclear operators to improve the post-accident phase in particular. But, once again, our research has shown that this work was started before the Fukushima disaster. The accident merely accelerated these processes and raised the profile of the issue.

Fukushima took place less than 10 years ago; do you plan on continuing your work and studying the implications of the disaster after 10 and 20 years have passed?

ST: We would particularly like to address other issues and to develop our results further. We have already carried out field research with ASN, IRSN, local information commissions, politicians, associations, and manufacturers such as Framatome or Orano. However, one of the biggest limitations of our work is that we cannot work with EDF, which is a key player in nuclear risk governance. In the future, we want to be able to work with plant operators, so that we can study the impact of an accident on their operations. Our understanding of the political sphere could also be improved: grasping politicians’ views on nuclear governance and on the nuclear strategy decision-making process is a real challenge.