
H2sys: hydrogen in the energy mix

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot institute (TSN), to which IMT and Femto Engineering belong.

[divider style=”normal” top=”20″ bottom=”20″]

H2sys is helping make hydrogen an energy of the future. This spin-off company from the FCLAB and Femto-ST laboratories in Franche-Comté offers efficient solutions for integrating hydrogen fuel cells. Some examples of these applications include generators and low-carbon urban mobility. And while the company was officially launched only 6 months ago, its history is closely tied to the pioneers of hydrogen technology from Franche-Comté.

 

1999, the turn of the century. Political will was focused on the new millennium and energy was already a major industrial issue. The end of the 90s marked the beginning of escalating oil prices after over a decade of price stability. In France, the share of investment in nuclear energy was waning. The quest for other forms of energy production had begun, a search for alternatives worthy of the 2000s. This economic and political context encouraged the town of Belfort and the local authorities of the surrounding region to invest in hydrogen. Thus, the FCLAB research federation was founded, bringing together the laboratories working on this theme. Almost two decades later, Franche-Comté has become a major hub for the discipline. FCLAB is the first national applied research community to work on hydrogen energy and the integration of fuel cell systems. It also integrates a social sciences and humanities research approach which looks at how our societies adopt new hydrogen technologies. This federation brings together 6 laboratories including FEMTO-ST and is under the aegis of 10 organizations, including the CNRS.

It was from this hotbed of scientific activity that H2sys was born. Described by Daniel Hissel, one of its founders, as “a human adventure”, the young company’s history is intertwined with that of the Franche-Comté region. First, because it was created by scientists from FCLAB. Daniel Hissel is himself a professor at the University of Franche-Comté and leads a team of researchers at Femto-ST, both of which are partners of the federation. Secondly, because the idea at the heart of the H2sys project grew out of regional activity in the field of hydrogen energy. “As a team, we began our first discussions on the industrial potential of hydrogen fuel cell systems as early as 2004-2005,” Daniel Hissel recalls. The FCLAB teams were already working on integrating these fuel cells into energy production systems. However, the technology was not yet sufficiently mature. The fundamental work did not yet target large-scale applications.

Ten more years would be needed for the uses to develop and for the hydrogen fuel cell market to truly take shape. In 2013, Daniel Hissel and his colleagues watched intently as the market emerged. “All that time we had spent working to integrate the fuel cell technology provided us with the necessary objectivity and allowed us to develop a vision of the future technical and economic issues,” he explains. The group of scientists realized that it was the right time to start their business. They created their project the same year. They quickly received support from the Franche-Comté region, followed by the Technology Transfer Accelerator (SATT) in the Grand Est region and the Télécom & Société Numérique Carnot institute. In 2017, the project officially became the company H2sys.

Hydrogen vs. Diesel?

The spin-off now offers services for integrating hydrogen fuel cells based on its customers’ needs. It focuses primarily on generators ranging from 1 to 20 kW. “Our goal is to provide electricity to isolated sites to meet needs on a human scale,” says Daniel Hissel. The applications range from generating electric power for concerts or festivals to supporting rescue teams responding to road accidents or fires. The solutions developed by H2sys integrate expertise from FCLAB and Femto-ST, whose research involves work in system diagnosis and prognosis aimed at understanding and anticipating failures, lifespan analysis, predictive maintenance and artificial intelligence for controlling devices.

Given their uses, H2sys systems are in direct competition with traditional generators which run on combustion engines—specifically diesel. However, while the power ranges are similar, the comparison ends there, according to Daniel Hissel, since the hydrogen fuel cell technology offers considerable intrinsic benefits. “The fuel cell is powered by oxygen and hydrogen, and only emits energy in the form of electricity and hot water,” he explains. The lack of pollutant emissions and exhaust gas means that these generators can be used indoors as well as outdoors. “This is a significant benefit when indoor facilities need to be quickly installed, which is what firefighters sometimes must do following a fire,” says the co-founder of the company.
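For readers who want the chemistry behind that quote, the reactions at work are the textbook ones for a proton-exchange-membrane fuel cell (a generic illustration, not a description of H2sys’s own stacks):

$$\text{anode: } \mathrm{H_2} \longrightarrow 2\,\mathrm{H^+} + 2\,e^- \qquad \text{cathode: } \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,e^- \longrightarrow \mathrm{H_2O}$$

$$\text{overall: } \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \mathrm{H_2O} + \text{electricity} + \text{heat}$$

The electrons stripped from the hydrogen at the anode travel through the external circuit, which is where the electrical power comes from; the only by-products are water and heat.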

Another argument is how unpleasant it is to work near a diesel generator. Anyone who has witnessed one in use understands just how much noise and pollution the engine generates. Hydrogen generators, on the other hand, are silent and emit only water. Their maintenance is also easier and less frequent: “Within the system, the gases react through an electrolyte membrane, which makes the technology much more robust than an engine with moving parts,” Daniel Hissel explains. All of these benefits make hydrogen fuel cells an attractive solution.

In addition to generators, H2sys also works on range extenders. “This is a niche market for us because we do not yet have the capacity to integrate the technology into most vehicles,” the researcher explains. However, the positioning of the company does illustrate the existing demand for solutions that integrate hydrogen fuel cells. Daniel Hissel sees even more ambitious prospects. While the electric yield of these fuel cells is much better than that of diesel engines (55% versus 35%), the hot water they produce can also be recovered for various purposes. Many different options are being considered, including a water supply network for isolated sites, or for household consumption in micro cogeneration units for electricity and heating.

But finding new uses through intelligent integrations is not the only challenge facing H2sys. As a spin-off company from research laboratories, it must continue to drive innovation in the field. “With FCLAB, we were the first to work on diagnosing hydrogen fuel cell systems in the 2000s,” says Daniel Hissel. “Today, we are preparing the next move.” Their sights are now set on developing better methods for assessing the systems’ performance to improve quality assurance. By contributing to making the technology safer, H2sys is playing an active part in the development of fuel cells. And the technology’s maturation since the early 2000s is now producing results: hydrogen is now attracting the attention of manufacturers for the large-scale storage of renewable energies. Will this technology therefore truly be that of the new millennium, as foreseen by the pioneers of the Franche-Comté region in the late 90s? Without going that far, one thing is certain: it has earned its place in the energy mix of the future.

 

[box type=”shadow” align=”” class=”” width=””]

A guarantee of excellence
in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering. Learn more [/box]


Xenon instruments for long-term experiments

From the ancient gnomon that measured the sun’s height, to the Compton gamma ray observatory, the microscope, and large-scale accelerators, scientific instruments are researchers’ allies, enabling them to make observations at the smallest and largest scales. Used for both basic and applied research, they help test hypotheses and push back the boundaries of human knowledge. At IMT Atlantique, researcher Dominique Thers, motivated by the development of instruments, has become a leading expert on xenon technologies, which are used in the search for dark matter as well as in the medical field.

 

The search for observable matter

Detecting dark matter for the first time is currently one of science’s major challenges. “It would be a little like radioactivity at the end of the 19th century, which disrupted the Maxwell-Boltzmann equations,” Dominique Thers explains. Based on the velocity measurements of seven galaxies carried out by Swiss astronomer Fritz Zwicky in 1933, which contradicted the known mass of these galaxies, a hypothesis was made that a type of matter existed that is unobservable using the currently available means and that represents about 27% of the mass-energy content of the universe. “Unobservable” means that the particles that form this matter interact with traditional baryonic particles (protons, neutrons, etc.) in a very unusual manner. To detect them, the probability of this type of interaction occurring must be radically increased, and there must be a way of fully ensuring that no false event can trigger the alert.

In this race to be the first to detect dark matter, developing more powerful instruments is paramount. “The physics of particle detectors is a discipline that has become increasingly complex,” he explains, “it is not sufficiently developed in France and around the world, and it currently requires significant financial resources, which are difficult to obtain.” China and the United States have greatly invested in this area, and Germany is the most generous contributor, but there are currently few French teams: “It is a very tense context.” Currently, the most sensitive detector for hunting down dark matter is located in Italy, where it was built under the mountain of Gran Sasso for the XENON1T experiment. Detection is based on the hope of an interaction between a particle of dark matter and one of the xenon atoms, which in this experiment are in the liquid phase. The energy deposited by this type of interaction generates two different phenomena – scintillation and ionization – which are observable and can be used to distinguish a genuine event from the background. 150 people from 25 international teams are working together on this experiment at the largest underground laboratory in the world.

This research that has spanned generations must be justified. “Society asks, what is the purpose of observing the nature of dark matter? We may only find the answer 25 years from now,” the researcher explains. Dark matter represents enormous potential: five times more prevalent than ordinary matter, it is a colossal reservoir of energy. The field has greatly developed in 30 years, and xenon has opened a new area of research, with prospects for the next 20 years. Dominique Thers is participating in European discussions on experiments planned for 2025, with the goal of achieving precise observations at lower ranges.

Xenon, a rare, expensive and precious gas

While the xenon used for this experiment possesses remarkable properties (density, no radioactive isotopes), it is unfortunately a raw material that is rare and cannot be manufactured. It is extracted by distilling air in its liquid phase, using a very costly process. Xenon is indeed present in the air at 0.1 ppm (parts per million), or “a tennis ball of xenon gas in the volume of a hot air balloon,” Dominique Thers explains, or “one ton of xenon from 2,000,000 tons of liquid oxygen.”

The French company Air Liquide is the global leader in the distribution of xenon. The gas is used to create high-intensity lights, as a propellant for space travel and as an anesthetic. It is their diamond “in a luxury market subject to speculation.” And luxury products require luxury instruments. For those created by the researcher’s team, xenon is used in its purest form possible. “The objective is to have less than one ppb (part per billion) of oxygen in the liquid xenon,” the scientist explains. This is made possible by a closed-circuit purification system that continuously cleans the xenon, removing in particular the impurities released by the equipment walls.

 


The technology of xenon in the form of cryogenic liquid is reserved for the experts. Dominique Thers’ team has patented expertise in storing, distributing and recovering ultra-pure liquid xenon.

 

In a measurement experiment like the one for dark matter, there is zero tolerance for radioactive background noise. “Krypton is one of xenon’s natural contaminants: the xenon originally comes from a cryogenic distillation fraction made up of 94% krypton and 6% xenon,” the researcher explains. However, the isotope krypton-85 is created by human activities. In the XENON1T experiment, we start with a few ppm (parts per million) of natural krypton present in the xenon, which is far too much. “All the types of steel used in the instrument are selected and measured before they come in contact with the liquid xenon,” the researcher adds, explaining that in this instance they obtained the lowest measurement of background noise for an experimental device of this kind.

The first promising results will be published in early 2018, and the next stages are already taking shape. The experiment that will start in 2019, XENONnT, which will use 60% of the equipment from XENON1T, aims to achieve even greater precision. Competition is fierce with the LZ teams in the USA and PandaX teams in China. “We can’t let anyone get ahead of us in this complicated quest in which, for the first time, we want to observe something new,” Dominique Thers emphasizes. He estimates that, all told, 50 to 100 tons of extra-pure xenon will be needed to refute the possible presence of observable dark matter or, on the contrary, measure its mass, describe its properties and identify possible applications.

Xenon cameras in oncology

When working with this type of trans-generational research, parallel research within shorter time frames must be carried out. This is especially true in France, where the research structure makes it difficult to fully commit to instrumentation activities.  Budgets for funding applied research are hard to come by, and researchers also devote time to teaching activities. It would be a shame if so much expertise developed over time failed to make a groundbreaking discovery due to a lack of funds or time.

To avoid this fate, Dominique Thers and his team have succeeded in creating a virtuous circle. “We’ve been quite lucky,” the researcher says with a smile. “We have been able to develop local activities with a community that also needs to make advancements that could be made possible through medical imaging using liquid xenon.” At the university hospital (CHU) in Nantes there is a leading team of specialists in cancer therapy and engineering who understand the advantages xenon cameras represent. The cancer specialists’ objective is to provide patients with better support, and better understand each patient’s response to treatment. In the context of an ongoing State-Regional Planning Contract (CPER), the scientist convinced them to invest in this technology, “because with a new instrument, anything is possible.”

The current PET (positron emission tomography) imaging techniques use solid-state cameras that require rare-earth elements, and only a dozen patients a day are effectively screened using this technology. Xenon cameras, which use Compton imaging, the only technique that can trace the trajectory of a single photon, use triangulation methods to ensure the 3D localization of the areas where the medicine has been applied. The improved precision opens the way to treating more patients each day or monitoring the progress of treatment more regularly. The installation at the CHU in Nantes is scheduled for 2018, initially for tests on animals before 2020. This should convince manufacturers to make a camera adapted to producing an image of the entire human body, which would also undoubtedly require several million euros of investments, but this time with a potential market of several billion euros.
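The triangulation relies on standard Compton kinematics (general physics, not a detail specific to the Nantes camera). Measuring the photon’s energy before and after it scatters in the liquid xenon gives the scattering angle,

$$\cos\theta \;=\; 1 - m_e c^2 \left( \frac{1}{E_1} - \frac{1}{E_0} \right),$$

where $E_0$ is the incident photon energy, $E_1$ the scattered photon energy and $m_e c^2 \approx 511\ \mathrm{keV}$ the electron rest energy. Each detected photon therefore constrains its point of emission to a cone, and intersecting the cones reconstructed from several photons localizes the tracer in three dimensions.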

Just like the wave–particle duality so treasured by physicists, Dominique Thers and his team have two simultaneous facets. “They could have a short-term impact on society, while at the same time opening new perspectives in our understanding of the universe,” the scientist explains.

[author title=”Pushing the limits of nature” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/02/Portrait_réduit.jpg”]Dominique Thers believes he “fell into research accidentally”. As someone who enjoyed the hard sciences and mathematics, he met researchers during an internship in astronomy and particle physics. “The human side convinced me to give it a try,” and he began working on a thesis with Georges Charpak, which he defended in 2000. He joined IMT Atlantique (formerly Mines Nantes) in 2001, and since 2009 he has been in charge of the Xenon team which is part of the Subatech department, a Mixed Research Unit (UMR) with the University of Nantes and the CNRS. This mixed aspect is also present in the cultural diversity of the PhD and postdoctoral students that come from all over the globe. The researcher’s motivation is whole-hearted: “It’s wonderful to be exposed to the limits of nature. Nature prevented us from going any further, and our instruments are going to allow us to cross this border.” The young researchers, who are exposed to scientific culture, perceive these limits and are drawn to the leading teams in this field. Dominique Thers is also an entrepreneur; in 2012, with three PhD students, he founded the AI4R startup specializing in medical instrumentation.[/author]


What is a smart grid?

The driving force behind the modernization of the electrical network, the smart grid is full of promise. It will mean savings for consumers and energy companies alike. In terms of the environment, it provides a solution for developing renewable energies. Hossam Afifi, a researcher in networks at Télécom SudParis, gives us a behind-the-scenes look at the smart grid.

 

What is the purpose of a smart grid?

Hossam Afifi: The idea behind a smart grid is to create savings by using a more intelligent electric network. The final objective is to avoid wasting energy, ensuring that each watt produced is used. We must first understand that today, the network is often run by electro-mechanical equipment that dates back to the 1960s. For the sake of simplicity, we will say it is controlled by engineers who use switches to remotely turn on or off the means of production and supply neighborhoods with energy. With the smart grid, all these tasks will be computerized. This is done in two steps. First, by introducing a measuring capacity using connected sensors and the Internet of Things. The control aspect is then added through machine learning to intelligently run the networks based on the data obtained via sensors, without any human intervention.
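To make those two steps concrete, here is a deliberately naive sketch in Python (invented unit names and a crude moving average standing in for real machine learning, purely for illustration):

```python
from collections import deque
from statistics import mean

class SmartGridController:
    """Toy illustration of a smart grid loop: (1) collect sensor measurements,
    (2) learn from them to decide which production units to switch on."""

    def __init__(self, unit_capacities_kw):
        self.unit_capacities_kw = unit_capacities_kw   # e.g. {"solar_farm": 300}
        self.history = deque(maxlen=96)                # last 24 h of 15-minute readings

    def ingest(self, consumption_kw):
        """Step 1: measurement, e.g. pushed by connected smart meters."""
        self.history.append(consumption_kw)

    def forecast_next(self):
        """Step 2 (kept naive here): predict the next demand from recent history.
        A real system would use a trained machine-learning model instead."""
        return mean(self.history) if self.history else 0.0

    def dispatch(self):
        """Switch units on, in order, until the forecast demand is covered."""
        demand = self.forecast_next()
        plan, covered = {}, 0.0
        for unit, capacity in self.unit_capacities_kw.items():
            plan[unit] = covered < demand          # True means "turn this unit on"
            covered += capacity if plan[unit] else 0.0
        return plan

# Usage: feed a few meter readings, then ask for a dispatch plan
grid = SmartGridController({"solar_farm": 300, "gas_turbine": 500})
for reading_kw in [420, 450, 480, 510]:
    grid.ingest(reading_kw)
print(grid.dispatch())   # {'solar_farm': True, 'gas_turbine': True}
```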

 

Can you give us some concrete examples of what the smart grid can do?

HA: One concrete example is the reduction of energy bills for cities, local authorities (and hence local taxes) and major infrastructure. A more noticeable example is the architectural projects for buildings that feature both offices and housing, which are aimed at evening out the amount of power consumed over the course of the day and limiting excessive peaks in energy consumption during high-demand hours. The smart grid rethinks the way cities are designed. For example, business areas are not at all advantageous for energy suppliers. They require a lot of energy over short periods of time, especially between 5pm and 7pm. This requires generators to be used to ensure the same quality of service during these peak hours. And having to turn them on and off represents costs. The ideal solution would be to even out the use of energy, making it easier to optimize the service provided. This is how the smart grid dovetails with smart city issues.

 

The smart grid is also presented as a solution to environmental problems. How are the two related?

HA: There is something very important we must understand: energy is difficult to store. This is one of the limits we face in the deployment of renewable energies, since solar, wind and marine energy sometimes produce electricity at times when we don’t need it. However, a network that can intelligently manage the energy production and distribution is beneficial for renewable energies. For example, electric car batteries can be used to store the energy produced by renewable sources. During peaks in consumption, users can choose to disconnect from the conventional network and use the energy stored by their car in the garage and receive financial compensation from their supplier. This is only possible with an intelligent network that can adapt supply in real time based on large amounts of data on production and consumption.

 

How important is data in the deployment of smart grids?

HA: It is one of the most important aspects, of course. All of the network’s intelligence relies on data; it is what feeds the machine learning algorithms. This aspect alone requires support provided by research projects. We have submitted one proposal to the Saclay federation of municipalities, for example. We propose to establish data banks to collect data on production and consumption in that area. Open data is an important aspect of smart grid development.

 

What are the barriers to smart grid deployment?

HA: One of the biggest barriers is that of standardization. The smart grid concept came from the United States, where the objective is entirely different. The main concern there is to interconnect state networks, which up until now were independent, in order to prevent black-outs. In Europe, we drew on this concept to complement the deployment of renewable energies and energy savings. However, we also need to interconnect with other European states. And unlike the United States, we do not have the same network standards as our German and Italian neighbors. This means we have a lot of work to do at a European level to define common data formats and protocols. We are contributing to this work through our SEAS project led by EDF.



What is renewable energy storage?

The storage of green energy is an issue which concerns many sectors, whether for energy transition or for supplying power to connected objects using batteries. Thierry Djenizian, a researcher at Mines Saint-Étienne, explains the main problems to us, focusing in particular on how electrochemical storage systems work.

 

Why is the storage of renewable sources of energy (RSE) important today?

Thierry Djenizian: It has become essential to combat the various kinds of pollution created by burning fossil fuels (emission of nanoparticles, greenhouse gases, etc.) and also to face up to the impending shortage over the next decades. Our only alternative is to use other natural, inexhaustible energy sources (i.e. hydraulic, solar, wind, geothermal and biomass). These sources allow us to convert solar, thermal or chemical energy into electrical and mechanical energy.

However, the energy efficiency of RSEs is considerably inferior to that of fossil fuels. Also, in the cases of solar and wind power, the source is “intermittent” and therefore varies over time. Their implementation requires complex and costly installation processes.

 

How is renewable energy stored?

TD: In a general sense, energy produced by RSEs drives the production of electricity, which can be stored in systems that are either mechanical (hydraulic dams), electromagnetic (superconducting coils), thermal (latent or sensible heat) or electrochemical (chemical reactions generating electron exchange).

Electrochemical storage systems are made up of three elements: two electrodes (electronic conductors) separated by an electrolyte (an ion-conducting material in the form of a liquid, gel, ceramic, etc.). Electron transfer occurs at the surface of the electrodes (at the anode in the case of electron loss and at the cathode in the opposite case), and the electrons circulate through the external circuit in the opposite direction to the ions. There are three main categories of electrochemical storage systems: accumulators or batteries, supercapacitors and fuel cells. For RSEs, the charges produced are ideally stored in accumulators for energy performance and cost reasons.

 

How do electrochemical storage systems work?

TD: Let’s take the example of accumulators (rechargeable batteries). The size of these varies according to the quantity of energy required by the device in question, ranging from a button cell for a watch through to the battery powering a car. Dimensions aside, accumulators function by using reversible electrochemical reactions.

Let’s consider the example of a discharged lithium-ion accumulator. One of the two electrodes is made out of lithium. When you charge the battery, it receives negative charges (electrons), in other words the electricity produced by the RSE. This supply of electrons triggers a chemical reaction that releases the lithium from the electrode in the form of ions. The ions then migrate through the electrolyte to insert themselves into the second electrode. When all the sites that can accommodate lithium on the second electrode are occupied, the battery is fully charged.

As the battery is discharged, reverse chemical reactions spontaneously occur, re-releasing the lithium ions which make their way back to the starting point. This allows for a current to be recovered which corresponds to the movement of previously stored charges.
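As a worked example of this back-and-forth (using graphite and lithium cobalt oxide, a common lithium-ion chemistry chosen purely for illustration and not necessarily the exact one described above), charging drives lithium out of the positive electrode and into the negative one, and discharge runs both reactions spontaneously in reverse:

$$\text{positive electrode: } \mathrm{LiCoO_2} \longrightarrow \mathrm{Li_{1-x}CoO_2} + x\,\mathrm{Li^+} + x\,e^-$$

$$\text{negative electrode: } \mathrm{C_6} + x\,\mathrm{Li^+} + x\,e^- \longrightarrow \mathrm{Li_xC_6}$$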

 

What difficulties are associated with electrochemical storage systems?

TD: Every approach has its own set of advantages and disadvantages. For example, fuel cells present the issue of high costs due to the use of platinum, which speeds up the chemical reactions. Additionally, hydrogen (the fuel within the cell) imposes many constraints in terms of production and safety. Hydrogen is hard to obtain in large quantities from sources other than hydrocarbon compounds (fossil resources) and is explosive, which imposes constraints on storage.

Supercapacitors are the preferred system for devices requiring a strong power supply over a short period of time. In basic terms, they allow a small amount of charge to be stored but can redistribute this very quickly. They can be found in the systems that power the opening of airplane doors for example, as these need a powerful energy supply for a short period of time. They are also used in hybrid car engines.

Conversely, the accumulators we talked about before allow a large number of charges to be stored but their release is slow. They are very energy efficient but not very powerful. In some ways, the two options are complementary.

Let’s compare these devices using an analogy in which the electrons are represented as a liquid. Supercapacitors are like a glass of water. Accumulators would then be comparable to a jug, in that they offer a much larger water (or charge) storage capacity than the glass. However, the jug has a narrow neck, preventing liquid from being poured quickly. The ideal would be to have a jug which could release its contents and then be refilled as easily as the glass of water. This is precisely the subject of current research, which seeks systems able to combine high energy density and high power density.

 

Which systems are best suited for the use of renewable energy as a power source?

TD: The field of potential applications is extremely vast as it encompasses all the growing energy needs that we need to satisfy. It includes everything from large installations (smart grid) to providing power for portable microelectronic devices (connected objects), and even transport (electric vehicles). In the latter case, the battery directly determines the performance of these environmentally friendly vehicles.

Today, lithium-ion batteries can considerably improve the technical characteristics of electric vehicles, making their usage possible. However, the energy density of these accumulators is still roughly 50 times lower than that of hydrocarbons. In order to produce batteries that are able to offer a credible electric car to the market, the main thing that needs to be done is to increase the energy storage capacity of the batteries. Indeed, getting the maximum amount of energy possible from the smallest possible volume is the challenge faced by all transport. The electric car is no exception.



Dam-break phenomenon in optical fibers

Last May, for the first time, researchers from the University of Lille and IMT Lille-Douai observed the breaking of a photon dam in an optical fiber. These results provided the experimental confirmation for an old theory that dates back over 50 years. Above all, this research reveals a surprising analogy with a water dam break, creating the prospect of collaboration between physicists from different fields.

 

The problem facing a physicist working on the way water flows after a dam failure is that there are not many opportunities to compare theory with experiment. Despite their occasional portrayal in works of fiction, researchers are rarely sociopaths. Therefore, very few scientists would be ready to blow up a dam in the mountains to experimentally confirm the results of their equations. They must therefore find alternative solutions. One such solution could very well be to simulate a dam break using an optical fiber and a laser. At least this is what is suggested in the results published on June 20 in the journal Physical Review Letters. Five optical physics researchers from the University of Lille and IMT Lille-Douai authored this scientific article: Gang Xu, Matteo Conforti, Alexandre Kudlinski, Arnaud Mussot and Stefano Trillo.

For the first time, this team from the Physics of lasers, atoms and molecules laboratory (PHLAM) observed a dam of photons—the elementary particles of light—breaking in an optical fiber. “A photon dam is a light pulse from a laser with two very different levels of intensity,” explains Arnaud Mussot, a researcher at IMT Lille-Douai and a PHLAM member. The light signal is made up of a group of photons with a high energy level (those sent with high intensity), and another group with a lower energy level. This situation is similar to a true dam, in which the altitude of the water replaces the energy of the photons.

By comparing the light signal’s shape at the input and output of a 15-kilometer-long optical fiber, the researchers observed an alteration in the signal. The two very distinct plateaus, with the high energy photons at the top and the low energy photons on the bottom, were disrupted during transit. The output signal showed a third plateau in the middle, revealing that some photons had lost energy. In addition, “the plateaus were now connected via very slow, oblique wave fronts, which are rarefaction waves and shock waves,” Arnaud Mussot explains. Just like waves that spread out when the wall retaining the water breaks.

The reason the comparison between these two dam types is so significant is that both events are similar from a mathematical point of view. Similar equations describe the photons’ behavior within the optical fiber and the water that rushes out after a break in the infrastructure. “In order to make them identical, we had to configure our experiment a certain way,” the researcher explains. They specifically needed to prevent the signal losses caused by propagation along the length of the cable, since no analogy of this phenomenon exists in the water dam scenario. “We set up an original system to compensate for these losses by using an in situ optical amplifier,” Arnaud Mussot adds.
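The mathematical kinship can be sketched as follows (a textbook simplification, not the full model used in the published paper). In normalized units, the light field $\psi(z,t)$ propagating in a normally dispersive fiber obeys a defocusing nonlinear Schrödinger equation,

$$ i\,\partial_z \psi + \tfrac{1}{2}\,\partial_t^2 \psi - |\psi|^2 \psi = 0 ,$$

and rewriting it in terms of the optical power $\rho = |\psi|^2$ and a velocity-like variable $u = \partial_t \phi$ (with $\phi$ the phase), then neglecting a small higher-order term, yields

$$ \partial_z \rho + \partial_t(\rho u) = 0, \qquad \partial_z u + u\,\partial_t u + \partial_t \rho = 0 ,$$

which has exactly the structure of the shallow-water equations for a dam break: the optical power plays the role of the water height and $u$ that of the flow velocity.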

 

(a) Photograph of the Teton River dam breaking in Idaho, USA, in 1976. Results from the input (b) and output (c) of an optical fiber. The red arrows show the analogy between the difference in the water altitude and the difference in the photons’ energy levels. The green areas show the characteristics of the wave’s chaotic behavior following the dam break.

 

There are limits, however, to the similarity between the two kinds of dams, and they are only clearly established in very specific situations in which the modeling equations are valid. Local topography—which is very specific—will probably require additional corrective terms to be included in the models. However, this analogy remains “a good method for observing experimental results of theoretical equations, and validating them through experiments,” explains Arnaud Mussot. These scientific results will therefore enable connections to be made between scientists working on both subjects. The communities will be able to provide each other with their respective results, to the benefit of both fields.


Improving heating network performance through modeling

At the IMT “Energy in the Digital Revolution” conference held on April 28, Bruno Lacarrière, an energetics researcher with IMT Atlantique, presented modeling approaches for improving the management of heating networks. Combined with digital technology, these approaches support heating distribution networks in the transition towards smart management solutions.

The building sector accounts for 40% of European energy consumption. As a result, renovating this energy-intensive sector is an important measure in the law on the energy transition for green growth. This law aims to improve energy efficiency and reduce greenhouse gas emissions. In this context, heating networks currently account for approximately 10% of the heat distributed in Europe. These systems deliver heating and domestic hot water from all energy sources, although today the majority are fossil-fueled. “The heating networks use old technology that, for the most part, is not managed in an optimal manner. Just like smart grids for electrical networks, they must benefit from new technology to ensure better environmental and energy management,” explains Bruno Lacarrière, a researcher with IMT Atlantique.

 

Understanding the structure of an urban heating network

A heating network is made up of a set of pipes that run through the city, connecting energy sources to buildings. Its purpose is to transport heat over long distances while limiting loss. In a given network, there may be several sources (or production units). The sources may be a waste incineration plant, a gas or biomass heating plant, surplus industrial waste heat, or a geothermal plant. These units are connected by pipelines carrying heat in the form of liquid water (or occasionally vapor) to substations. These substations then redistribute the heat to the different buildings.

These seemingly simple networks are in fact becoming increasingly complex. As cities are transforming, new energy sources and new consumers are being added. “We now have configurations that are more or less centralized, which are at times intermittent. What is the best way to manage the overall system? The complexity of these networks is similar to the configuration of electrical networks, on a smaller scale. This is why we are looking at whether a “smart” approach could be used for heating networks,” explains Bruno Lacarrière.

 

Modeling and simulation for the a priori and a posteriori assessment of heating networks

To deal with this issue, researchers are working on a modeling approach for the heating networks and the demand. In order to develop the most reliable model, a minimum amount of information is required. For demand, the consumption data history for the buildings can be used to anticipate future needs. However, this data is not always available. The researchers can also develop physical models based on a minimal knowledge of the buildings’ characteristics. Yet some information remains unavailable. “We do not have access to all the buildings’ technical information. We also lack information on the inhabitants’ occupancy of the buildings,” Bruno Lacarrière points out. “We rely on simplified approaches, based on hypotheses or external references.”

For the heating networks, researchers assess the best use of the heat sources (fossil, renewable, intermittent, storage, excess heat…). This is carried out based on the a priori knowledge of the production units they are connected to. The network as a whole must deliver the heat distribution service, and it must do so in a cost-effective and environmentally friendly manner. “Our models allow us to simulate the entire system, taking into account the constraints and characteristics of the sources and the distribution. The demand then becomes a constraint.”
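A toy version of that source-allocation problem (hypothetical capacities and costs, and a deliberately crude merit-order rule rather than the laboratory’s actual models) can be written in a few lines:

```python
def dispatch_heat(demand_kw, sources):
    """Allocate a heat demand across production units, cheapest first,
    while respecting each unit's capacity; the demand acts as a constraint."""
    plan = {}
    remaining = demand_kw
    for src in sorted(sources, key=lambda s: s["cost_eur_per_kwh"]):
        supplied = min(src["capacity_kw"], remaining)
        plan[src["name"]] = supplied
        remaining -= supplied
    if remaining > 0:
        raise ValueError(f"Demand exceeds total capacity by {remaining} kW")
    return plan

# Hypothetical network: waste incineration as cheap base load, then biomass, then a gas peak boiler
sources = [
    {"name": "incinerator", "capacity_kw": 4000, "cost_eur_per_kwh": 0.01},
    {"name": "biomass",     "capacity_kw": 3000, "cost_eur_per_kwh": 0.04},
    {"name": "gas_boiler",  "capacity_kw": 5000, "cost_eur_per_kwh": 0.08},
]
print(dispatch_heat(7500, sources))
# {'incinerator': 4000, 'biomass': 3000, 'gas_boiler': 500}
```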

The models that are developed are then used in various applications. The demand simulation is used to measure the direct and indirect impacts climate change will have on a neighborhood. It makes it possible to assess the heat networks in a mild climate scenario and with high performance buildings. The heat network models are used to improve the management and operation strategies for the existing networks. Together, both types of models help determine the potential effectiveness of deploying information and communication technology for smart heating networks.

 

The move towards smart heating networks

Heating networks are entering their fourth generation. This means that they are operating at lower temperatures. “We are therefore looking at the idea of networks with different temperature levels, while examining how this would affect the overall operation,” the researcher adds.

In addition to the modelling approach, the use of information and communication technology allows for an increase in the networks’ efficiency, as was the case for electricity (smart monitoring, smart control). “We are assessing this potential based on the technology’s capacity to better meet demand at the right cost,” Bruno Lacarrière explains.

Deploying this technology in the substations, and the information provided by the simulation tools, go hand in hand with the prospect of deploying more decentralized production or storage units, turning consumers into consumer-producers [1], and even connecting to networks of other energy forms (e.g. electricity networks), thus reinforcing the concept of smart networks and the need for related research.

 

[1] The consumers become energy consumers and/or producers in an intermittent manner. This is due to the deployment of decentralized production systems (e.g. solar panels).

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 



Digital technology and energy: inseparable transitions

[dropcap]W[/dropcap]hat if one transition was inextricably linked with another? Faced with environmental challenges, population growth and the emergence of new uses, a transformation is underway in the energy sector. Renewable resources are playing a larger role in the energy mix, advances in housing have helped reduce heat loss and modes of transportation are changing their models to limit the use of fossil fuels. But even beyond these major metamorphoses, the energy transition in progress is intertwined with that of digital technology. Electrical grids, like heat networks, are becoming “smart.” Modeling is now seen as indispensable from the earliest stages of the design or renovation of buildings.

The line dividing these two transitions is indeed so fine that it is becoming difficult to determine which category the changes taking place in the world of connected devices and telecommunications belong to. For mobile phone operators, power supply management for mobile networks is a crucial issue. The proportion of renewable energy must be increased, but this leads to a lower quality of service. How can the right balance be determined? And telephones themselves pose challenges for improving energy autonomy, in terms of both hardware and software.

This interconnection illustrates the complexity of the changes taking shape in contemporary societies. In this report we seek to present issues situated at the interface between energy and digital technology. Through research carried out in the laboratories of IMT graduate schools, we outline some of the major challenges currently facing civil society, economic stakeholders and public policymakers.

For consumers, the forces at play in the energy sector may appear complex. Often reduced to a sort of technological optimism without taking account of scientific reality, they are influenced by significant political and economic issues. The first part of this report helps reframe the debate while providing an overview of the energy transition through an interview with Bernard Bourges, a researcher who specializes in this subject. The European SEAS project is presented as a concrete example of the transformations underway, offering a look at the reality behind the promises of smart grids.


The second part of the report focuses on heat networks, which, like electric networks, can also be improved with the help of algorithms. Heat networks deliver 9% of the heat distributed in Europe and can therefore act as a catalyst for reducing energy costs in buildings. Bruno Lacarrière’s research illustrates the importance of digital modeling in the optimization of these networks (article to come). And because reducing heat loss is also important at the level of individual buildings, we take a closer look at Sanda Lefteriu’s research on how to improve energy performance for homes.


The report concludes with a third section dedicated to the telecommunications sector. An overview of Loutfi Nuaymi’s work highlights the role played by operators in optimizing the energy efficiency of their networks and demonstrates how important algorithms are becoming for them. We also examine how energy consumption can be reduced by lightening the computational load on our mobile phones, with a look at research by Maurice Gagnaire. Finally, since connected devices require ever-more powerful batteries, the last article explores a new generation of lithium batteries, and the high hopes for the technologies being developed in Thierry Djenizian’s laboratory.


 

[divider style=”normal” top=”20″ bottom=”20″]

To further explore this topic:

To learn more about how the digital and energy transitions are intertwined, we suggest these related articles from the I’MTech archives:

5G will also consume less energy

In Nantes the smart city becomes a reality with mySMARTLife

Data centers: taking up the energy challenge

The bitcoin and blockchain: energy hogs

[divider style=”normal” top=”20″ bottom=”20″]


Energy performance of buildings: modeling for better efficiency

Sanda Lefteriu, a researcher at IMT Lille-Douai, is working on developing predictive and control models designed for buildings with the aim of improving energy management. A look at the work presented on April 28 at the IMT “Energy in the digital revolution” symposium.

Good things come to those who wait. Seven years after the Grenelle 2 law, a decree published on May 10 requires buildings used by the private and public service sectors (see box below) to improve their energy performance. The text sets a requirement to reduce consumption by 25% by 2020 and by 40% by 2030.[1] To do so, reliable, easy-to-use models must be established in order to predict the energy behavior of buildings in near-real-time. This is the goal of research being conducted by Balsam Ajib, a PhD student supervised by Sanda Lefteriu and Stéphane Lecoeuche of IMT Lille-Douai as well as by Antoine Caucheteux of Cerema.

 

A new approach for modeling thermal phenomena

State-of-the-art experimental approaches for evaluating the energy performance of buildings use models with what are referred to as “linear” structures. This means that the input variables of the model (weather, solar radiation, heating power, etc.) are linked to the output of this same model (the temperature of a room, for example) only through a linear equation. However, a number of phenomena which occur within a room, and therefore within a system, can temporarily disrupt its thermal equilibrium. For example, a large number of individuals inside a building will lead to a rise in temperature. The same is true when the sun shines on a building when its shutters are open.

Based on this observation, researchers propose using what is called a “commutation” (switching) model, which takes account of discrete events occurring at a given moment which influence the continuous behavior of the system being studied (change in temperature). “For a building, events like opening/closing windows or doors are commutations (0 or 1) which disrupt the dynamics of the system. But we can separate these actions from linear behavior in order to identify their impacts more clearly,” explains the researcher. To do so, she has developed several models, each of which corresponds to a situation. “We estimate each configuration, for example a situation in which the door and windows are closed and heat is set at 20°C corresponds to one model. If we change the temperature to 22°C, we identify another and so on,” adds Sanda Lefteriu.
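A minimal sketch of the idea (a first-order thermal model with invented coefficients, meant only to show how a discrete event such as opening a window switches the system to a different linear model):

```python
def simulate_room(temp_init, outdoor, heating_kw, window_open, dt_h=0.25):
    """Switched linear model of one room: each time step applies the linear
    dynamics of whichever configuration (window open or closed) is active."""
    # One set of (loss coefficient, heater gain) per discrete configuration;
    # the numbers are illustrative, not identified from real measurements.
    models = {
        False: {"a": 0.10, "b": 1.5},   # window closed: well insulated
        True:  {"a": 0.60, "b": 1.5},   # window open: much faster heat loss
    }
    temps = [temp_init]
    for t_out, p_heat, is_open in zip(outdoor, heating_kw, window_open):
        m = models[is_open]
        t = temps[-1]
        # Linear dynamics of the active configuration: dT/dt = -a*(T - T_out) + b*P_heating
        temps.append(t + dt_h * (-m["a"] * (t - t_out) + m["b"] * p_heat))
    return temps

# Two hours in 15-minute steps, with the window opened for the middle half hour
outdoor = [5.0] * 8
heating = [1.0] * 8
window  = [False, False, False, True, True, False, False, False]
print([round(T, 1) for T in simulate_room(20.0, outdoor, heating, window)])
```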

 

Objective: use these models for all types of buildings

To create these scenarios, researchers use real data collected inside buildings as part of measurement campaigns. Sensors were placed on the campus of IMT Lille-Douai and in passive houses which are part of the INCAS platform in Chambéry. These uninhabited residences offer a completely controlled site for experimenting since all the parameters related to the building (structure, materials) are known. These rare infrastructures make it possible to set up physical models, meaning models built according to the specific characteristics of the infrastructures being studied. “This information is rarely available so that’s why we are now working on mathematical modeling which is easier to implement,” explains Sanda Lefteriu.

“We’re only at the feasibility phase but these models could be used to estimate heating power and therefore energy performance of buildings in real time,” adds the researcher. Applications will be put in place in social housing as part of the ShINE European project in which IMT Lille-Douai is taking part. The goal of this project is to reduce carbon emissions from housing.

These tools will be used for existing buildings. Once the models are operational, control algorithms running on automated controllers will be deployed in these infrastructures. Finally, another series of tools will be used to link physical conditions with observations in order to focus new research. “We still have to identify which physical parameters change when we observe a new dynamic,” says Sanda Lefteriu. These models remain to be built, just like the buildings which they will directly serve.

 

[1] Buildings currently represent 40-45% of energy spending in France across all sectors. Find out more about key energy figures in France.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

[box type=”shadow” align=”” class=”” width=””]

Energy performance of buildings:

The energy performance of a building includes its energy consumption and its impact in terms of greenhouse gas emissions. Consideration is given to the hot water supply system, heating, lighting and ventilation. Other building characteristics to be assessed include insulation, location and orientation. An energy performance certificate is a standardized way to measure how much energy is actually consumed or estimated to be consumed according to standard use of the infrastructure. [/box]

 


Towards a new generation of lithium batteries?

The development of connected devices requires the miniaturization and integration of electronic components. Thierry Djenizian, a researcher at Mines Saint-Étienne, is working on new micrometric architectures for lithium batteries, which appear to be a very promising solution for powering smart devices. Following his talk at the IMT symposium devoted to energy in the context of the digital transition, he gives us an overview of his work and the challenges involved in his research.

Why is it necessary to develop a new generation of batteries?

Thierry Djenizian: First of all, it’s a matter of miniaturization. Since the 1970s, Moore’s law, which predicts an increase in the performance of microelectronic devices as they are miniaturized, has held true. But in the meantime, the energy aspect has not really kept up. We are now facing a problem: we can manufacture very sophisticated sub-micrometric components, but the energy sources we have to power them are not integrated in the circuits because they take up too much space. We are therefore trying to design micro-batteries which can be integrated within the circuits like other technological building blocks. They are highly anticipated for the development of connected devices, including a large number of wearable applications (smart textiles for example), medical devices, etc.

 

What difficulties have you encountered in miniaturizing these batteries?

TD: A battery is composed of three elements: two electrodes and an electrolyte separating them. In the case of micro-batteries, it is essentially the contact surface between the electrodes and the electrolyte that determines storage performance: the greater the surface, the better the performance. But as batteries, and therefore the electrodes and the electrolyte, are made smaller, there comes a point where the contact surface is too small and battery performance drops.

 

How do you go beyond this critical size without compromising performance?

TD: One solution is to transition from 2D geometry in which the two electrodes are thin layers separated by a third thin electrolyte layer, to a 3D structure. By using an architecture consisting of columns or tubes which are smaller than a micrometer, covered by the three components of the battery, we can significantly increase contact surfaces (see illustration below). We are currently able to produce this type of structure on the micrometric scale and we are working on reaching the nanometric scale by using titanium nanotubes.

 

On the left, a battery with a 2D structure. On the right, a battery with a 3D structure: the contact surface between the electrodes and the electrolyte has been significantly increased.
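A back-of-the-envelope calculation (with made-up dimensions) shows why the 3D geometry helps. Covering a flat area with vertical pillars of radius $r$ and height $h$, at a density of $n$ pillars per unit area, multiplies the available contact surface roughly by

$$\frac{S_{3D}}{S_{2D}} \;\approx\; 1 + n \cdot 2\pi r h .$$

For instance, pillars with $r = 50\ \mathrm{nm}$, $h = 1\ \mathrm{\mu m}$ and $n = 10$ pillars per square micrometer give a factor of about $1 + 10 \times 2\pi \times 0.05 \times 1 \approx 4$, i.e. roughly four times more electrode/electrolyte interface within the same footprint.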

 

How do these new battery prototypes based on titanium nanotubes work?

TD: Let’s take a look at the example of a discharged battery. One of the electrodes is composed of lithium, nickel, manganese and oxygen. When you charge this battery by plugging it in, for example, the additional electrons set off an electrochemical reaction which frees the lithium from this electrode in the form of ions. The lithium ions migrate through the electrolyte and insert themselves into the nanotubes which make up the other electrode. When all the nanotube sites which can hold lithium have been filled, the battery is charged. During the discharging phase, a spontaneous electrochemical reaction is produced, freeing the lithium ions from the nanotubes toward the nickel-manganese-oxygen electrode, thereby generating the desired current.

 

What is the lifetime of these batteries?

TD: When a battery is working, great structural modifications take place; the materials swell and shrink in size due to the reversible insertion of lithium ions. And I’m not talking about small variations in size: the size of an electrode can become eight times larger in the case of 2D batteries which use silicon! Nanotubes provide a way to reduce this phenomenon and therefore help prolong the lifetime of these batteries. In addition, we are also carrying out research on electrolytes based on self-repairing polymers. One of the consequences of this swelling is that the contact interfaces between the electrodes and the electrolyte are altered. With an electrolyte that repairs itself, the damage will be limited.

 

Do you have other ideas for improving these 3D-architecture batteries?  

TD: One of the key issues for microelectronic components is flexibility. Batteries are no exception to this rule, and we would like to make them stretchable in order to meet certain requirements. However, the new lithium batteries we are discussing here are not yet stretchable: they fracture when subjected to mechanical stress. We are working on making the structure stretchable by modifying the geometry of the electrodes. The idea is to have a spring-like behavior: coupled with a self-repairing electrolyte, after deformation, batteries return to their initial position without suffering irreversible damage. We have a patent pending for this type of innovation. This could represent a real solution for making autonomous electronic circuits both flexible and stretchable, in order to satisfy a number of applications, such as smart electronic textiles.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 


Cloud computing for longer smartphone battery life

How can we make our smartphone batteries last longer? For Maurice Gagnaire, a researcher at Télécom ParisTech, the solution could come through mobile cloud computing. If computations currently performed by our devices could be offloaded to local servers, their batteries would have to work less. This could extend the battery life for one charge by several hours. This solution was presented at the IMT symposium on April 28, which examined the role of the digital transition in the energy sector.

 

Woe is me! My battery is dead!” So goes the thinking of dismayed users everywhere when they see the battery icon on their mobile phones turn red. Admittedly, smartphone battery life is a rather sensitive subject. Rare are those who use their smartphones for the normal range of purposes — as a telephone, for web browsing, social media, streaming videos, etc. — whose batteries last longer than 24 hours. Extending battery life is a real challenge, especially in light of the emergence of 5G, which will open the door for new energy-intensive uses such as ultra HD streaming or virtual reality, not to mention the use of devices as data aggregators for the Internet of Things (IoT). The Next Generation Mobile Networks Alliance (NGMN) has issued a recommendation to extend mobile phone battery life to three days between charges.

There are two major approaches possible in order to achieve this objective: develop a new generation of batteries, or find a way for smartphones to consume less battery power. In the laboratories of Télécom ParisTech, Maurice Gagnaire, a researcher in the field of cloud computing and energy-efficient data networks, is exploring the second option. “Mobile devices consume a great amount of energy,” he says. “In addition to having to carry out all the computations for the applications being used, they are also constantly working on connectivity in the background, in order to determine which base station to connect to and the optimal speed for communication.” The solution being explored by Maurice Gagnaire and his team is based on reducing smartphones’ energy consumption for computations related to applications. The scientists started out by establishing a hierarchy of applications according to their energy demands as well as to their requirements in terms of response time. A tool used to convert an audio sequence into a written text, for example, does not present the same constraints as a virus detection tool or an online game.

Once they had carried out this first step, the researchers were ready to tackle the real issue — saving energy. To do so, they developed a mobile cloud computing solution in which the most frequently-used and energy-consuming software tools are supposed to be available in nearby servers, called cloudlets. Then, when a telephone has to carry out a computation for one of its applications, it offloads it to the cloudlet in order to conserve battery power. Two major tests determine whether to offload the computation. The first one is based on an energy assessment: how much battery life will be gained? This depends on the effective capacity of the radio interface at a particular location and time. The second test involves quality expectations for user experience: will use of the application be significantly impacted or not?

Together, these two tests form the basis for the MAO (Mobile Applications Offloading) algorithm developed by Télécom ParisTech. The difficulty in its development arose from its interdependence with such diverse aspects as the hardware architecture of mobile phone circuitry, the protocols used for the radio interface, and factoring in user mobility. In the end, “the principle is similar to what you find in airports where you connect to a local server located at the Wi-Fi spot,” explains Maurice Gagnaire. But in the case of energy savings, the service is intended to be “universal” and not linked to a precise geographic area as is the case for airports. In addition to offering browsing or tourism services, cloudlets would host a duplicate of the most widely-used applications. When a telephone has low battery power, or when it is responding to great user demand for several applications, the MAO algorithm makes it possible to autonomously offload computations from the mobile device to cloudlets.
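The decision logic can be pictured with a sketch like the one below (hypothetical function names, energy figures and thresholds; the real MAO algorithm also factors in radio protocols, handset circuitry and user mobility, as noted above):

```python
def should_offload(task, radio_mbps, rtt_s):
    """Two-test offloading decision in the spirit of mobile cloud computing:
    (1) energy test: does sending the task cost less battery than computing it locally?
    (2) quality test: does the round trip still meet the application's latency budget?
    All numbers are illustrative, not measured values."""
    TX_ENERGY_J_PER_MB = 2.0   # assumed radio cost of uploading one megabyte

    # Test 1: energy balance (local computation vs. transmission to the cloudlet)
    energy_local = task["cpu_energy_j"]
    energy_offload = task["upload_mb"] * TX_ENERGY_J_PER_MB
    saves_energy = energy_offload < energy_local

    # Test 2: user-experience constraint (transfer time + round trip + remote compute time)
    time_offload = task["upload_mb"] * 8 / radio_mbps + rtt_s + task["remote_compute_s"]
    meets_latency = time_offload <= task["latency_budget_s"]

    return saves_energy and meets_latency

# Example: a speech-to-text request on a good radio link is offloaded,
# while an online-game frame with a tight latency budget stays on the phone.
speech = {"cpu_energy_j": 40, "upload_mb": 1.0, "remote_compute_s": 0.5, "latency_budget_s": 3.0}
game   = {"cpu_energy_j": 5,  "upload_mb": 0.2, "remote_compute_s": 0.05, "latency_budget_s": 0.05}
print(should_offload(speech, radio_mbps=20, rtt_s=0.05))  # True
print(should_offload(game,   radio_mbps=20, rtt_s=0.05))  # False
```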

 

Extending battery life by several hours?

Through a collaboration with researchers from Arizona State University in Tempe (USA), the theoretical scenarios studied in Paris were implemented in real-life situations. The preliminary results show that for the most demanding applications, such as voice recognition, a 90% reduction in energy consumption can be obtained when the cloudlet is located at the base of the antenna in the center of the cell. The experiments also underscored how strongly the self-learning function of the database behind the voice recognition tool affects the performance of the MAO algorithm.

Extending the use of the MAO algorithm to a broad range of applications could expand the scale of the solution. In the future, Maurice Gagnaire plans to explore offloading certain tasks carried out by the graphics processors (GPUs) in charge of managing smartphones’ high-definition touchscreens. Mobile game developers should be particularly interested in this approach.

More generally, Maurice Gagnaire’s team now hopes to collaborate with a network operator or equipment manufacturer. A partnership would provide an opportunity to test the MAO algorithm and cloudlets on a real use case and therefore determine the large-scale benefits for users. It would also offer operators new perspectives for next-generation base stations, which will have to be created to accompany the development of 5G planned for 2020.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!