
Sand, an increasingly scarce resource that needs to be replaced

Humans are big consumers of sand, to the extent that this now valuable resource is becoming increasingly scarce. Being in such high demand, it is extracted in conditions that aren’t always respectful of the environment. With the increasing scarcity of sand and the sometimes devastating consequences of mining at beaches, it is becoming crucial to find alternatives. Isabelle Cojan and Nor-Edine Abriak, researchers in geoscience and geomaterials at Mines ParisTech and IMT Lille Douai respectively, explain the stakes involved with regard to this resource.

 

“After air and water, sand is the third most essential resource for human beings,” explains Isabelle Cojan, researcher in geoscience at Mines ParisTech. Sand is, of course, used indirectly by most of us, since the major part of this resource is consumed by the construction industry. Between 15 and 20 billion tons of sand are used every year in the construction of buildings and roads all over the world and in land reclamation. In comparison, the amount used in other applications, such as micro-computing or glass, detergent and cosmetics manufacturing, is around 0.2 billion tons. On an individual scale, global sand consumption stands at 18 kg per person per day.

At first glance, our planet could easily provide the enormous amount of sand required by humanity. A quarter of the Earth’s continental surface is covered by huge deserts, nearly 20% of which is occupied by dune fields. The problem is that this aeolian sand is unusable for construction or reclamation: “The grains of desert sand are too smooth, too fine, and cannot bind with the cements to make concrete,” explains Isabelle Cojan. For other reasons, including economic ones, marine sands can only be exploited at shallow depths, as is currently the case in the North Sea. The volume of the Earth’s available sand is significantly smaller once these areas are removed from the equation.

Deserts are poor suppliers of construction sand. The United Arab Emirates is therefore paradoxically forced to import sand to support the development of its capital, Dubai, which is located just a stone’s throw from the immense desert of the Arabian Peninsula.

 


So where do we find this precious sediment? Silica-rich sands, such as those found under the Fontainebleau forest, are reserved for the production of glass and silicon. On the other hand, the exploitation of fossil deposits is limited by the anthropization of certain regions, which makes it difficult to open quarries. For the construction industry, the sands of rivers and coastal deposits fed partly by river flow must be used. The grains of alluvium from beaches are angular enough to cling to cements. Although the quantities of sand on beaches may seem huge to the human eye, they are not really that big. In fact, most of the sediments in the alluvial plains of our rivers and coastlines are inherited from the large-scale erosion that occurred during the Quaternary Ice Age. Today, with building developments along riverbanks and less erosion, Isabelle Cojan points out that “we consume twice as much sand as the amount the rivers bring to the coastal regions.”

Dams not only retain water, they also prevent sediment from descending downstream. The researcher at Mines ParisTech points to the example of the watercourse of the Durance, a tributary of the Rhône that she studied: “Before development, it deposited 3 million tons of sediment in the Mediterranean every year. Today, this quantity is between 0.1 and 0.5 million tons.” Most of the sediment provided by the erosion of the watershed is trapped by infrastructure, and settles to the bottom of artificial water reservoirs.

The sand rush and its environmental impact

As a result of the lower release of sediment into the seas and oceans, some beaches are naturally shrinking, especially where the extraction industry takes sand from the coasts. Some countries are thus in a critical situation, with entire beaches disappearing. This is the case in Togo, for example, and several other African countries where sand is extracted without too many legislative constraints. “The Comoros derives a lot of its income from tourism, and is extracting more and more sand from beaches to support its economic development because there is no significant reserve of sand elsewhere on the islands,” explains Isabelle Cojan. The extraction of large volumes of sand leads to coastal erosion and the loss of land to the sea. The situation is similar in other parts of the world. “Singapore has significantly increased its surface area by developing polders in the sea,” the researcher continues. “Nearby islands at the surface of the water, which provided an easy supply of sand, disappeared before the regulations of the countries concerned prohibited such extraction.”

In Europe, practices are more closely controlled. In France, in particular, extraction from riverbeds – a practice which dates back thousands of years – was carried out until the 1970s. The poorly regulated exploitation of alluvial deposits led to changes in river profiles, leading to the scouring of bridge anchorages, which then became fragile and sometimes even collapsed. Extraction from rivers has since been forbidden, and only deposits on alluvial plains that do not directly impact riverbeds may be used for extraction. On the coast, exploitation is regulated by the French Environmental Code and Mining Code, which prohibit any extraction that could directly or indirectly compromise the integrity of a beach.

However, despite this legislation, indirect adverse consequences are difficult to prevent. The impact of sand extraction in the medium term is complex for researchers to model. “In coastal areas, we have to take account of a set of complex processes linked to tides, storms, coastal drift, vegetation cover, tourism and port facilities,” lists Isabelle Cojan. In some cases, sustainable extraction entails no danger. In other situations, a slight change in the profile of the beach could have serious consequences. “This can lead to a significant retreat of the coastline during storms and flooding of the hinterland by marine waters, especially during equinox storms,” the researcher continues. On coastlines undergoing natural erosion with low-relief hinterlands, sand extraction may, over time, lead to an irreversible destabilization of the coast.

The implications of the disappearance of beaches are not only aesthetic. Beaches and the dunes that very often lie along their edge constitute a natural sedimentary barrier against the onslaught of the waves. They are the primary and most important source of protection against erosion. Beaches limit, for example, the retreat of chalk cliffs. On coasts with low landforms, beach-dune systems form a barrier against the entry of the sea into the land. When water breaches this natural sediment barrier, fields can become salinized, drastically changing farming conditions and ruining arable land.

What are the alternatives to sand?

Faced with the scarcity of this resource, new avenues are being explored to find an alternative to sand. Recycled concrete, glass or metallurgical waste can be used to replace sediment in the composition of concrete. However, building materials produced in this way encounter performance problems: “They age very quickly and can release pollutants over time,” explains Isabelle Cojan. Another limitation is that these alternative options are currently not sufficient in volume. France produces 370 million tons of sand annually, whereas recycling only produces 20 million tons.

Also read on I’MTech: Recycling concrete and sediment to create new materials

Major efforts to structure a dedicated recycling sector would be necessary, with all the economic and political debates at national and local levels that this implies. Since manufacturers want high-performance products, this cannot happen until research finds a way to limit the aging and pollutant release of materials made from recycled inputs. While recycling should not be ruled out, it is clear that this solution is only feasible over a relatively long time scale.

In the shorter term, another alternative could come from the use of other sand deposits currently considered as waste. Through pioneering work in geomaterials at IMT Lille Douai, Nor-Edine Abriak has demonstrated that it is possible to exploit dredging sand. This sediment comes from the bottom of rivers and streams, and is extracted for water course development. Dredging is mainly used to allow waterway navigation and large quantities of sand are extracted every year from ports and river mouths. “When I started my research on the subject a few years ago, the port of Dunkirk was very congested with sediments,” recalls the researcher. He joined forces with the local authorities to set up a research chair called Ecosed in order to find a way to use this sand.

For the construction industry, the major drawback of dredging sediments is their high content of clay particles. “Clay is a nightmare for people working with concrete,” warns Nor-Edine Abriak. “The particles can swell, increasing the setting time of the cement in the concrete and potentially diminishing the performance of the final material.” It is customary to use a sieve to separate the clay, which requires equipment and dedicated logistics, and therefore additional costs. For this reason, these sediments are rejected by industrial players. “The only way to be competitive with these sediments is to be able to use them as they are, without a separation process,” admits the leader of the Ecosed Chair. The research at IMT Lille Douai has led to a more convenient and rapid treatment process using lime that eliminates the need for sieving. This process also breaks down the organic matter in the sediments and improves the setting of the cement, making it possible to rapidly use the sand extracted from the bottom of the port of Dunkirk.

The Ecosed Chair has also provided a way to overcome another problem, that of the salinity of these shallow sands in contact with seawater. Salt corrodes materials, therefore shortening the useful life of concrete. To remove it, the researchers used a simple water wash. “We showed that the dredging sand could simply be stored in large lagoons and the salt would be drained by the rain,” explains Nor-Edine Abriak. The solution is a simple one, provided there is enough space nearby to build the lagoons, which is the case in the area around Dunkirk.

With these results, the team of scientists demonstrated that dredging sediments extracted from the bottom of ports could be used as an alternative to beach sediments, instead of being considered as waste. “We were the first in the world to prove this was possible,” Nor-Edine Abriak says proudly. This solution does not provide a full replacement, as dredged sediments have different mechanical properties that must be taken into account in order not to affect the durability of the materials. The first scaling tests showed that dredging sands could be used in proportions of up to 70% in the composition of materials for roads and 12% for buildings, with no loss of quality for the end material.

Because almost all ports have to carry out dredging, this research offers a major opportunity to reduce sand extraction from beaches and alluvium. In early March, the Ecosed team went to Morocco to launch a second chair on waste recovery and dredging sands in particular: “the same situation as in Dunkirk can be seen in Tangier and Agadir,” explains the researcher, highlighting the global nature of the problem. In June 2019, the Ecosed Chair in France became Ecosed Digital 4.0, going from a budget of €2 million to €24 million with the aim of structuring a specific sector for the recovery of dredged sediments in France. While this work alone will not fully solve the problem of sand scarcity, it will nevertheless create an impetus to reduce sand extraction in areas where such mining poses a threat. It must also be ensured that this type of initiative is scaled up, both nationally and internationally.


How can AI help better forecast high and low water levels?

Predicting the level of water in rivers or streams can prove to be invaluable in areas with a high risk of flooding or droughts. While traditional models are based primarily on hypotheses about processes, another approach is emerging: artificial neural networks. Anne Johannet, an environmental engineering researcher at IMT Mines Alès, uses this approach.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

In hydrology, there are two especially important notions: high and low water levels. The first describes a period in which the flow of a watercourse is especially high, while the second refers to a significantly low flow. These variations in water level can have serious consequences. A high water level can, for example, lead to flooding (although this is not systematic), while a low water level can lead to restrictions on water abstraction, in particular for agriculture, and can harm aquatic ecosystems.

Based on past experience, it is possible to anticipate which watercourses tend to rise to a certain level in the event of heavy precipitation. This approach may yield satisfactory results but clearly lacks precision. This is why Flood Forecasting Services (SPC) also rely on one of two types of models. The first is called “reservoir modeling”: it treats a drainage basin like a reservoir, which overflows when the water content exceeds its filling capacity. But forecasts made based on this type of model may contain major errors, since they do not usually take into account soil heterogeneity or variability of drainage basin use.
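To make the reservoir idea concrete, here is a minimal Python sketch of such a “bucket” model. It is not the operational model used by forecasting services; the capacity, drainage rate and rainfall series are invented for the example.

```python
# Minimal "bucket" reservoir model: the drainage basin is a single store
# that spills once its capacity is exceeded. Parameters and rainfall
# series below are illustrative, not calibrated values.

def bucket_model(rainfall_mm, capacity_mm=100.0, drainage_rate=0.05):
    """Return a simulated outflow series (mm/day) for a daily rainfall series."""
    storage = 0.0
    outflow = []
    for rain in rainfall_mm:
        storage += rain                          # rain fills the reservoir
        spill = max(0.0, storage - capacity_mm)  # overflow once the store is full
        storage -= spill
        baseflow = drainage_rate * storage       # slow drainage of the store
        storage -= baseflow
        outflow.append(spill + baseflow)
    return outflow

# Example: a dry week followed by a heavy rain event
print(bucket_model([0, 0, 5, 0, 60, 80, 10]))
```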

The other approach is based on a physical model. The behavior of a studied watercourse is simulated using differential equations and field measurements. This type of model is therefore meant to take all the data into account in order to provide reliable predictions. However, it reaches its limits when faced with high variability, as is often the case: how land reacts to precipitation may depend on human activity, type of agriculture, seasons, existing vegetation etc. As a result, “it is very difficult to determine the initial state of a watercourse,” says Anne Johannet, an environmental engineering researcher at IMT Mines Alès. “It is the major unknown variable in hydrology, along with the unpredictability of rainfall.” Therefore, the reality may ultimately conflict with forecasts, as was the case with the exceptional rising of the Seine in 2016. Moreover, certain drainage basins are poorly covered by physical models due to their complexity. The Cévennes region is one such example.

Neural networks learn independently

Anne Johannet’s research focuses on another approach which offers a new method for forecasting water flow: artificial intelligence. “The benefit of neural networks is that they can learn a function from examples, even if we don’t know this function”, explains the researcher.

Neural networks learn in a similar way to children. They start out with little information and study an initial set of data, calculating outputs in a near-random way and inevitably making mistakes. Then, numerical analysis methods make it possible to gradually improve the model in order to reduce these errors. In concrete terms, in hydrology, the objective of neural networks is to forecast the flow of a watercourse or its water level based on rainfall. A dataset describing all the past observations about the basin is therefore used to train the model. While it is learning, the neural network calculates a flow based on precipitation, and this result is compared to real measurements. This process is then repeated several times, to correct its mistakes.

A pitfall to be avoided with this approach is that of “overlearning” (also known as overfitting). If a neural network is “overtrained,” it can eventually lose its ability to extrapolate and settle for knowing something “by heart.” To give an example, if the neural network integrates the appearance of a major rise in water level on 15 November 2002, overlearning can lead it to deduce that such an event will occur every year on 15 November. To avoid this phenomenon, the dataset used to train the network is divided into two subsets: one for learning and one for validation. As the errors are corrected on the learning subset, the network’s ability to generalize is verified using the validation subset.
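As a toy illustration of this learning process (and not of Anne Johannet’s actual models), the sketch below trains a small neural network on a made-up rainfall-to-flow relation and holds out part of the data as a validation set to guard against overlearning. The data-generating rule, network size and figures are all assumptions made for the demo.

```python
# A small neural network learns a rainfall -> flow relation from synthetic
# examples; a held-out validation split (early_stopping) limits overlearning.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Features: rainfall today and over the two previous days (mm)
X = rng.gamma(shape=2.0, scale=10.0, size=(500, 3))
# Synthetic "observed" flow: an invented nonlinear response plus noise
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] ** 1.2 + rng.normal(0, 1, 500)

model = MLPRegressor(hidden_layer_sizes=(16,),
                     early_stopping=True,        # hold out part of the data...
                     validation_fraction=0.2,    # ...to check generalization
                     max_iter=5000, random_state=0)
model.fit(X, y)

# Forecast the flow for a hypothetical storm (40, 25 and 5 mm of rain)
print(model.predict([[40.0, 25.0, 5.0]]))
```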

The main benefit of this neural network approach is that it requires much less input data. A physical model requires a large amount of data, about the nature of the land, vegetation, slope etc. A neural network, on the other hand, “only needs the rainfall and flow at a location we’re interested in, which facilitates its implementation,”  says Anne Johannet. This leads to lower costs and provides quicker results. However, the success of such an approach relies heavily on rainfall predictions, which are used as input variables. And this precipitation remains difficult to forecast.

Clear advantages but a controversial approach

Today, Anne Johannet’s models are already used by public services including the Artois-Picardie Flood Prevention Service (in the Hauts-de-France region). Based on rainfall prediction, agents establish scenarios and study the consequences using neural networks. Depending on the type of basin — which may react more or less quickly — they are also able to make forecasts several hours or even a day in advance for high water levels, and several weeks in advance for low water levels.

This data can therefore have a direct effect on local authorities and citizens.  For example, predicting a significant low water period could lead water supply management to switch to an alternative water source, or could lead the authorities to prohibit water abstraction.  Predicting  a high water period, on the other hand, could help anticipate potential flooding, based on the land structure.

Longer-term projections can also be established using data from the IPCC (Intergovernmental Panel on Climate Change). Trial forecasts of the flow of the Albarine river in the Ain department have been carried out, up to 2070, to assess the impact of global warming. The results indicate severe low-water levels in the future, which could affect land-use planning and farming activity.

However, despite these results, the artificial intelligence approach for predicting high and low water levels has been met with mistrust by many hydrologists, especially in France. They argue that these systems are incapable of generalizing due to climate change, and largely prefer physical or reservoir models. The IMT Mines Alès researcher rejects these accusations, underscoring the rigorous validation of neural networks. She suggests that the results from the different methods should be viewed alongside one another, evoking the words of statistician George Box: “All models are wrong, but some are useful.”

Article written for I’MTech by Bastien Contreras


Indoor Air: under-estimated pollutants

While some sources of indoor air pollution are well known, there are others that researchers do not yet fully understand. This is the case for cleaning products and essential oils. The volatile organic compounds (VOCs) they release and the dynamics of these compounds within buildings are being studied by chemists at IMT Lille Douai.

When it comes to air quality, staying indoors does not keep us safe from pollution. “In addition to outdoor pollutants, which enter buildings, there are the added pollutants from the indoor environment! A wide variety of volatile organic compounds are emitted by building materials, paint and even furniture,” explains Marie Verriele Duncianu, researcher in atmospheric chemistry at IMT Lille Douai. Compressed wood combined with resin, which is often used to make indoor furniture, is one of the leading sources of formaldehyde. In fact, indoor air is generally more polluted than outdoor air. This observation is not new; it has been the focus of numerous information campaigns by environmental agencies, including ADEME and the OQAI, the monitoring center for the quality of indoor air. However, the recent results of much academic research tend to show that the sources of indoor pollutants are still underestimated, and the emissions are poorly known.

“In addition to sources from construction and interior design, many compounds are emitted by the occupants’ activities,” the researcher explains. Little research has been conducted on sources of volatile organic compounds such as cleaning products, cooking activities, and hygiene and personal care products. Unlike their counterparts produced by furniture and building materials, these pollutants originating from residents’ products are much more dynamic. While a wall constantly emits small quantities of VOCs, a cleaning product spontaneously emits a quantity up to ten times more concentrated. This rapid emission makes the task of measuring the concentrations and defining the sources much more complex.
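The difference between a steady source and a short burst can be illustrated with a standard single-zone mass-balance model of indoor air; the room volume, air-exchange rate and emission rates below are invented for the example and do not come from the study.

```python
# Single-zone mass-balance sketch: indoor VOC concentration driven by a
# constant source (e.g. a wall) vs. a short burst (e.g. a cleaning product).
# dC/dt = E(t)/V - a*C, with E the emission rate, V the room volume and
# a the air-exchange rate. All numbers are illustrative.

V = 30.0            # room volume (m3)
a = 0.5             # air changes per hour
dt = 0.01           # time step (h)
steps = int(8 / dt)  # simulate 8 hours

def simulate(emission_rate):
    """emission_rate(t) in ug/h; returns a list of concentrations in ug/m3."""
    C, series = 0.0, []
    for i in range(steps):
        t = i * dt
        C += dt * (emission_rate(t) / V - a * C)
        series.append(C)
    return series

wall = simulate(lambda t: 100.0)                         # steady emission
spray = simulate(lambda t: 10000.0 if t < 0.5 else 0.0)  # 30-minute burst

print(f"steady source after 8 h: {wall[-1]:.1f} ug/m3")
print(f"burst: peak {max(spray):.0f} ug/m3, after 8 h {spray[-1]:.1f} ug/m3")
```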

Since they are not as well known, these pollutants linked to users are also less controlled. “They are not taken into account in regulations at all,” explains Marie Verriele Duncianu. “The only legislation related to this issue is legislation for nursery schools and schools, and legislation requiring a label for construction materials.” Since 1st January 2018, institutions receiving children and young people are required to monitor the concentrations of formaldehyde and benzene in their indoor air. However, no actions have been imposed regarding the sources of these pollutants. Meanwhile, ADEME has issued a series of recommendations that advocate the use of green cleaning products for cleaning floors and buildings.

The green product paradox

These recommendations come at a time when consumers are becoming increasingly responsible in terms of their purchases, including for cleaning products. Certain cleaning products benefit from an Ecolabel, for example, guaranteeing a smaller environmental footprint. However, the impact of these environmentally friendly products in terms of pollutant emissions has not been studied any more than that of their label-free counterparts. Supported by marketing arguments alone, products featuring essential oils are being hailed as beneficial, without any evidence to back them up. Simply put, researchers do not yet have a good understanding of the indoor pollution caused by cleaning products, whether traditional or presented as green. However, it is fairly easy to find false information claiming the opposite.

In fact, it was upon observing received ideas and “miracle” properties on consumer websites that Marie Verriele Duncianu decided to start a new project called ESSENTIEL.  “My fellow researchers and I saw statements claiming that essential oils purified the indoor air,” the researcher recalls. “On some blogs, we even read consumer testimonials of how essential oils eliminate pollutants. It’s not true: while they do have the ability to clean the environment in terms of bacteria, they definitely do not eliminate all air pollutants. On the contrary, they add more!”

In the laboratory, the researchers are studying the behavior of products featuring essential oils. What VOCs do they release? How are they distributed in indoor air?

 

Essential oils are in fact high in terpenes. These molecules are allergenic, particularly for the skin. They can also interact with ozone to form fine particles or formaldehyde. By focusing on essential oils and the molecules they release into the air, the ESSENTIEL project aims to help remedy this lack of knowledge about indoor pollutants. The researchers are therefore pursuing two objectives: to understand how emissions of volatile organic compounds from essential oils behave, and to determine the risks related to these emissions.

The initial results show unusual emission dynamics. For floor cleaners, “there is a peak concentration of terpenes during the first half-hour following use,” explains Shadia Angulo Milhem, a PhD student participating in the project with Marie Verriele Duncianu’s team. “Furthermore, the concentration of formaldehyde begins to regularly increase four hours after the cleaning activity.” Formaldehyde is a closely regulated substance because it is an irritant and is carcinogenic in cases of high and repeated exposure. The concentrations measured up to several hours after the use of cleaning products containing essential oils can be attributed to two factors: first, terpenes react with ozone to create formaldehyde; secondly, the formaldehyde donors used as preservatives and biocides in the cleaning products decompose, releasing formaldehyde.

A move towards regulatory thresholds?

In the framework of the ESSENTIEL project, researchers have not only measured emissions from cleaning products containing essential oils; they have also studied diffusion devices for essential oils. The results show characteristic emissions for each device. “Reed diffusers, which are small bottles containing wooden sticks, take several hours to reach full capacity,” Shadia Angulo Milhem explains. “The terpene concentrations then stabilize and remain constant for several days.” Vaporizing devices, on the other hand, which heat the oils, have a more spontaneous emission, resulting in terpene concentrations that are less permanent in the home.

In addition to the measurements of the concentrations, the dynamics of the volatile organic compounds that are released is difficult to determine. In some buildings, they can be trapped in porous materials, then released later due to changes in humidity and temperature. One of the areas the researchers want to explore in the future is how they are absorbed by indoor surfaces. Understanding the behavior of pollutants is essential in establishing the risks they present. How dangerous a compound is depends on whether it is dispersed quickly in the air or accumulates for several days in paint or in drop ceilings.

Currently, there are no regulatory thresholds for terpene concentrations in the air, due to a lack of knowledge about the public’s exposure and about long and short-term toxicity. We must keep in mind that the risk associated with exposure to a pollutant depends on the toxicity of the compound, its concentration in the air and the duration of contact. Upon completion of the ESSENTIEL project, anticipated for 2020, the project team will provide ADEME with a technical and scientific report. While waiting for legislation to be introduced, the results should at least offer recommendation sheets on the use of products containing essential oils. This will provide consumers with real information regarding the benefits as well as the potentially harmful effects of the products they purchase, a far cry from pseudo-scientific marketing arguments.

The water footprint of a product has long been difficult to evaluate. Where does the water come from? What technology is used to process and transport it? These are among the questions researchers have to answer to better measure environmental impact.

The many layers of our environmental impact

An activity can have many consequences for the environment: its carbon footprint, water consumption, pollution, changes to biodiversity and more. Our impacts are so complex that an entire field of research has been developed to evaluate and compare them. At IMT Mines Alès, a team of researchers is working on tools to improve the way we measure our impacts and therefore provide as accurate a picture as possible of our environmental footprint. Miguel Lopez-Ferber, the lead researcher of this team, presents some of the most important research questions in improving our methods of environmental evaluation. He also explains the difficulty in setting up indicators and having them approved for efficient decision making.

 

Can we precisely evaluate all of the impacts of a product on the environment?

Miguel Lopez-Ferber: We do know how to measure some things: a carbon footprint, for example, or the pollution generated by a product or a service. The use of phytosanitary products is another impact we know how to measure. However, some things are more difficult to measure. The impacts linked to the water consumption required in the production of a product have been extremely difficult to evaluate. For a given use, one liter of water taken from a region may generate very different impacts from a liter of water taken from another region. The type of water, the climate, and even the source of the electricity used to extract, transport and process it will be different. We now know how to do this better, but not yet perfectly. We also have trouble measuring the impact on biodiversity due to humans’ development of a territory.

Is it a problem that we cannot fully measure our impact?

MLF: If we don’t take all impacts into account, we risk not noticing the really important ones. Take a bottle of fruit juice, for example. If we only look at the carbon footprint, we will choose a juice made from locally-grown fruit, or one from a neighboring country. Transport does play a major part in a carbon footprint. However, local production may use a water source which is under greater stress than one in a country further away. Perhaps it also has a higher impact on biodiversity. We can have a distorted view of reality.

What makes evaluating the water footprint of a product difficult?

MLF: What is difficult is to first differentiate the different types of water. You have to know where the water comes from. The impact won’t be the same for water taken from a reserve under the Sahara as for water from the Rhône. The scarcity of the water must be evaluated for each production site. Another sensitive point is understanding the associated effects. In a given region, the mix of water used may correspond to 60% surface water, 30% river water and 10% underground water, but these figures do not give us the environmental impacts. Each source then has to be analyzed to determine whether taking the water has consequences, such as drying out a reserve. We also need to be able to differentiate the various uses of the water in a given region, as well as the associated socio-economic conditions, which have a significant impact on the choice of technology used in transporting and processing the water.
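As a purely hypothetical illustration of why the mix matters, the sketch below weights a withdrawal by the share and scarcity of each source in a regional mix; the mixes and scarcity factors are invented, not real inventory data.

```python
# Hypothetical illustration: the same litre of water "costs" more or less
# depending on where it is taken. The mixes and scarcity factors below are
# invented for the example, not real inventory data.

water_mix = {                      # shares of each source in a region's mix
    "surface_reservoir": 0.60,
    "river":             0.30,
    "groundwater":       0.10,
}

scarcity_factor = {                # impact per m3; higher means scarcer source
    "surface_reservoir": 0.8,
    "river":             0.4,
    "groundwater":       2.5,
}

def water_impact(volume_m3, mix, factors):
    """Weighted scarcity impact of withdrawing a given volume from a regional mix."""
    return volume_m3 * sum(share * factors[src] for src, share in mix.items())

# One m3 withdrawn from this region vs. a region relying only on groundwater
print(water_impact(1.0, water_mix, scarcity_factor))
print(water_impact(1.0, {"groundwater": 1.0}, scarcity_factor))
```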

What can we determine about the impact of water use?

MLF: Susana Leão’s thesis, co-supervised by my colleague Guillaume Junqua, has provided a regional view of inventories. It presents the origin of the water in each region according to the various household, agricultural or industrial uses, along with the associated technologies. Before, we only had average origin data by continent: I had the average water consumption for one kilogram of steel produced in Europe, without knowing if the water came from a river or from a desalination process, for example. Things became more complicated when we looked at the regional details. We now know how to differentiate the composition of one country’s water mix from another’s, and even to differentiate between the major hydrographic basins. Depending on the data available, we can also focus on a smaller area.

In concrete terms, how does this work contribute to studying impacts?

MLF: As a result, we can differentiate between production sites in different locations. Each type of water on each site will have different impacts, and we are able to take this into account. In addition, in analyzing a product like our bottle of fruit juice, we can categorize the impacts into those which are introduced on the consumption site, in transport or waste, for example, and those which are due to production and packaging. In terms of life cycle analysis, this helps us to understand the consequences of an activity on its own territory as well as other territories, near or far.

Speaking of territories, your work also looks at habitat fragmentation. What does this mean?

MLF: When you develop a business, you need a space to build a factory. You develop roads and transform the territory. These changes disturb ecosystems. For instance, we found that modifications made to a particular surface area may have very different impacts. For example, if you simply decrease the surface area of a habitat without splitting it, the species are not separated. On the contrary, if you fragment the area, species have trouble traveling between the different habitats and become isolated. We are therefore working on methods for evaluating the distribution of species and their ability to interconnect across different fragments of habitat.
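One common way to quantify this effect (not necessarily the metric used by the team) is the effective mesh size: the sum of the squared patch areas divided by the total landscape area. The sketch below uses hypothetical areas to show that splitting a habitat lowers the value even when the total habitat area is preserved.

```python
# Effective mesh size: with patch areas a_i in a landscape of total area A,
# m_eff = (sum of a_i squared) / A. Splitting a habitat lowers m_eff even
# when the total habitat area is kept. Areas below are hypothetical (ha).

def effective_mesh_size(patch_areas, landscape_area):
    return sum(a * a for a in patch_areas) / landscape_area

landscape = 1000.0                   # total landscape area (ha)

intact = [300.0]                     # one large habitat patch
shrunk = [240.0]                     # same patch, reduced but not split
fragmented = [100.0, 100.0, 100.0]   # same total area, cut into three pieces

for name, patches in [("intact", intact), ("shrunk", shrunk), ("fragmented", fragmented)]:
    print(name, effective_mesh_size(patches, landscape), "ha")
```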

With the increasing amount of impact indicators, how do we take all of these footprints into account?

MLF: It’s very complicated. When a life cycle analysis of a product such as a computer is made, this includes a report containing around twenty impact categories: climate change, pollution, heavy metal leaching, radioactivity, water consumption, eutrophication of aquatic environments, etc. However, decision-makers would rather see fewer parameters, so they need to be aggregated into categories. There are essentially three categories: impact on human health, impact on ecosystems, and overuse of resources. Then, the decisions are made.

How is it possible to decide between such important categories?

MLF: Impact reports always raise the question of what decision makers want to prioritize. Do they want a product or service which minimizes energy consumption? Waste production? Use of water resources? Aggregation methods are already based on value scales and strong hypotheses, meaning that the final decision is too. There is no way of setting a universal scale, as the underpinning values are not universal. The weighting of the different impacts will depend on the convictions of a particular decision maker and the geographical location. The work involves more than just traditional engineering, but a sociological aspect too. This is when arbitration enters the realm of politics.
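To make the weighting issue concrete, here is a small hypothetical sketch: a set of midpoint indicators is grouped into the three endpoint categories and then combined with two different sets of weights, yielding two different single scores. The grouping, values and weights are invented for illustration; real life-cycle assessment methods define their own factors.

```python
# Sketch of the aggregation step discussed above: midpoint indicators are
# grouped into three endpoint categories, then weighted. All numbers and the
# grouping are illustrative, not taken from any real LCA method.

midpoints = {
    "climate_change": 12.0, "particulate_matter": 3.0, "human_toxicity": 1.5,
    "eutrophication": 0.8, "ecotoxicity": 2.0, "land_use": 1.2,
    "water_scarcity": 4.0, "mineral_depletion": 0.6, "fossil_depletion": 5.0,
}

endpoint_of = {  # which endpoint each midpoint feeds (illustrative grouping)
    "climate_change": "human_health", "particulate_matter": "human_health",
    "human_toxicity": "human_health", "eutrophication": "ecosystems",
    "ecotoxicity": "ecosystems", "land_use": "ecosystems",
    "water_scarcity": "resources", "mineral_depletion": "resources",
    "fossil_depletion": "resources",
}

def aggregate(midpoints, weights):
    """Sum midpoints into endpoints, then apply the decision-makers' weights."""
    endpoints = {}
    for name, value in midpoints.items():
        endpoints[endpoint_of[name]] = endpoints.get(endpoint_of[name], 0.0) + value
    return sum(weights[e] * v for e, v in endpoints.items())

# Two different value systems give two different single scores
print(aggregate(midpoints, {"human_health": 0.5, "ecosystems": 0.3, "resources": 0.2}))
print(aggregate(midpoints, {"human_health": 0.2, "ecosystems": 0.2, "resources": 0.6}))
```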


“We must work now to recycle composites”

High-performance composite materials are used in cutting-edge sectors such as energy, aerospace and defense. The majority of these parts have not yet reached the end-of-life stage, but recycling them remains a medium-term issue that must be considered now in order to offer technically efficient and economically viable solutions when the time comes. The issue is one that Marie-France Lacrampe, a researcher in plastics materials and processes, is working on at IMT Lille Douai. She presents the processes scientists are currently studying for recycling composites and explains why efforts in this area must start increasing today.

 

Are all composite materials recyclable?

Marie-France Lacrampe: In theory, they are recyclable: we can always find something to do with them. The important question is, will the solution we find be a useful one? If so, will it be economically viable? In this respect, we must distinguish between composites according to the nature of their polymer matrix, their reinforcing fibers and the dimensions of these fibers.  Recycling possibilities for glass fiber composites are not the same as those for carbon fiber composites.

Read more on I’MTech: What is a composite material?

Glass fiber composites are among the most common. What can be done with these materials at the end of life?

MFL: Glass-fiber-reinforced polymers now represent a significant source of potential products to recycle. Annual global production currently represents millions of tons. Most of these materials use small, cut fibers. These are non-structural composites that can be seen as fiber-filled thermoplastic or thermosetting polymers. The ratio between the cost of recycling these materials and the value of the recycled product is not very advantageous. Currently, the most reasonable solution would be to incinerate them to recover thermal energy for various industrial applications. Nevertheless, in some specific cases, mechanical recycling is possible: the materials can be ground and integrated into a polymer matrix. This offers valuable uses that justify the recycling costs. For example, this method is being explored as one of the components of the Interreg Recy Composite* project that we are participating in.

What functionality does this type of approach enhance?

MFL: In our case, we grind automotive parts made with glass fibers found under the engine hood. The ground material is used to develop intumescent systems, which swell when exposed to heat. These intumescent systems represent a strategy for passively protecting a material from fire. The intumescence leads to the formation of a crust at the material’s surface that conducts heat only slowly, thus limiting the material’s deterioration and reducing the gases feeding the flame. These systems are generally expensive and integrating the ground materials helps reduce production costs. The proposed formulations made from recycled glass fiber composites can compete with existing formulations in terms of their fire behavior. The ongoing research seeks to develop other characteristics, including mechanical ones. The results are encouraging and increase the value of the recycled materials. However, this does not offer a solution for absorbing all the potential sources of glass fiber composite materials. As it stands, energy recovery remains the only economically viable solution.

What about other composites, such as those with carbon fibers used for high-performance applications?

MFL: Carbon-fiber composites offer much more valuable potential for use after recycling. Production volumes are currently lower, but worldwide production is significantly growing. Recycling solutions for these materials exist, but they are currently limited to manufacturing waste for the most part. In certain cases, the pyrolysis of these composites makes it possible to once again obtain long carbon fibers and architectured reinforcements that can be used instead of new fibers. The disadvantage is that the polymer matrix is burned in the process and cannot be used. Other solutions are currently being studied, including solvolysis methods.

What is solvolysis?

MFL: It involves selectively dissolving the components of a composite to recover them. In the case of thermoplastic polymer matrices this process, while not easy, is technically feasible. In the case of thermosetting polymer matrices, selectively dissolving the polymer matrix without damaging the fibers is more complicated and requires specific equipment and protocols. This aspect is also being addressed in the Recy-Composite project. The initial results reveal the feasibility of this process. The recovered carbon reinforcement is of good quality and could be reintroduced to create a new composite with satisfactory properties. There are still many obstacles to overcome, including identifying solvents that could achieve the objective without creating any major health or safety problems.

Are there recycling issues for other types of composites?

MFL: Without being exhaustive, there is a new type of composite material that will someday need to be recycled: composites that use natural fibers. They offer very interesting properties, including from an environmental perspective. The problem is that the end-of-life processing of these materials is not yet well understood. For now, only mechanical recycling has been considered and it is already posing technical problems. The plant reinforcements used in these materials are susceptible to aging and are more temperature and shear sensitive. Grinding, reprocessing and reintegrating these components into a new composite material results in significant decreases in mechanical performance. A potential solution currently being assessed as part of the Recy-Composite project involves an original compounding process that can lower the temperatures. The initial results confirm this technology’s potential, but they must be complemented to ensure a higher level of performance.

Read more on I’MTech: Flax and hemp among tomorrow’s high-performance composite materials

In general, does the low volume of composite materials pose any problems in developing a recycling system?

MFL: Yes, because the biggest problem is currently the volume of the composite materials sources available for recycling. Until we can obtain a more constant and homogeneous inflow of the composites, it will be difficult to recycle them. Yet, one of the main advantages of structural composites is that, as primary construction materials, they are designed on a case-by-case basis according to the application. This explains the great variety of materials to be processed, the small volumes and why recycling solutions must be adapted case by case.

Is there cause for optimism regarding our ability to establish recycling systems despite this case-by-case issue?

MFL: The markets are rapidly evolving. Many applications are being developed for which the recycling costs can be compensated by gains in raw materials, without adversely affecting performance. Composites are increasingly used for structural parts, which naturally leads to an increase in volume of the potential sources of composites to recycle. The location of these future sources is fairly well known: in areas involving aircraft, wind turbines and major infrastructures. We also know the types of materials they contain. In these cases, the dismantling, collection and treatment circuits will be easy to create and adapt. The major challenge will be handling common, diffuse waste that is not well identified. Yet, even with lower volumes compared to other materials, it will still be possible to organize profitable systems.

These situations will not arise until a few years from now. Why is it important to study this topic already?

MFL: These systems will only be profitable if technical solutions to the problems have been validated beforehand. Excuses such as “it’s not profitable today”, “the systems do not exist” or “the inflow is too insignificant,” must not prevent us from seeking solutions. Otherwise, once the volumes become truly significant and the environmental constraints become extreme, we will be without technical solutions and systems. We will not have made any progress and the only proposed solution will be: “we must stop producing composite materials!” The volumes do not yet exist, but we can predict and anticipate them, design logistics to be implemented and at the same time prepare for the scientific and technical work that remains to be done.

*The Interreg V France Wallonie Flandres RECY-COMPOSITE project, supported by the European Union and the Walloon Region, is jointly led by Certech, VKC, CTP, CREPIM, ARMINES and IMT Lille Douai.

 

 


CUBAIR: a prototype for purifying indoor air

Improving indoor air quality? That’s what the CUBAIR project aims to do. By developing a new treatment system, researchers have managed to significantly reduce fine particle concentration and nitrogen oxides.

 

An important reminder: indoor air is often more polluted than outdoor air. In addition to the automobile exhaust and industrial pollution that enter our homes and offices through the windows, molds and pollutants also come from building materials or cleaning products. What can we do to make the air we breathe inside our homes and offices healthier? That is the big question for researchers working on the CUBAIR project funded by ADEME.

For four years, the group of researchers from Cerema, IMT Atlantique and LaTep (a laboratory of Université de Pau et des pays de l’Adour) has been developing a prototype for an air purification system. The air is cleaned through a three-step process. First, the air taken into the system is filtered by activated carbons with different characteristics. These materials are able to capture organic compounds present in the air — pesticides are one such example. The air then goes through a more traditional filtering stage to eliminate fine particles. The last step is a photocatalysis stage: when exposed to ultraviolet light, titanium dioxide molecules react with some of the pollutants that remain in the air.

Last year, this prototype was tested at the Human Resource Development Centre in Paris. The goal was to study how effective it was in real conditions throughout an entire year. The device’s performance was measured for different kinds of pollutants: volatile organic compounds, fine particles, mold etc. The results were especially promising for nitrogen oxides— particularly nitrogen dioxide, a major air pollutant— since the treatment system reduces their concentration by 60% in the treated air. Positive results were also observed for fine particles, with the concentration dropping by 75% for particles with diameters less than 1 micron.

The only drawbacks: volatile organic compounds are not eliminated as effectively and the system tends to heat up during use which leads to extra air conditioning costs in summer. The researchers noted, however, that this can be an advantage in cooler weather and that this inconvenience should be weighed against the significantly improved air quality in a room.

Overall, the CUBAIR project offers good prospects for breathing healthier air in our future buildings. Figures published by the World Health Organization in 2018 serve as a reminder that air pollution causes 7 million premature deaths worldwide every year. This pollution also represents an annual cost of approximately €20 billion in France. Combating this pollution is therefore a major health, environmental and economic issue.



Fine particles: how can their impact on health be better assessed?

In order to assess the danger posed by fine particles in ambient air, it is crucial to do more than simply take regulatory measurements of their mass in the air. The diversity of their chemical composition means that different toxicological impacts are possible for an equal mass. Chemists at IMT Lille Douai are working on understanding the physicochemical properties of the fine particle components responsible for their adverse biological effects on health. They are developing a new method to indicate health effects, based on measuring the oxidizing potential of these pollutants in order to better identify those which pose risks to our health.

 

The smaller they are, the greater their danger. That is the rule of thumb to sum up the toxicity of the various types of particles present in the atmosphere. This is based on the ease with which the smallest particles penetrate deep into our lungs and get trapped there. While the size of particles clearly plays a major role in how dangerous they are, the impact of their chemical composition must not be underestimated. For an equal mass of fine particles in the air, those we breathe in Paris are not the same as the ones we breathe in Dunkirk or Grenoble, due to the different nature of the sources which produce them. And even within the same city, the particles we inhale vary greatly depending on where we are located in relation to a road or a factory.

“Fine particles are very diverse: they contain hundreds, or even thousands, of chemical compounds,” say Laurent Alleman and Esperanza Perdrix, researchers in atmospheric pollution in the department of atmospheric sciences and environmental engineering at IMT Lille Douai. Carboxylic acids and polycyclic aromatic hydrocarbons are just two examples of the many molecules found in particles in higher or lower proportions. A great number of metals and metalloids can be added to this organic cocktail: copper, iron, arsenic, etc., as well as black carbon. The final composition of a fine particle therefore depends on its proximity to sources of each of these ingredients. Copper and antimony, for example, are commonly found in particles near roads, produced by cars when braking, while nickel and lanthanum are typical of fine particles produced from petrochemistry.

Read more on I’MTech: What are fine particles?

Today, only the mass concentration as a function of certain sizes of particles in the air is considered in establishing thresholds for warning the population. For Laurent Alleman and Esperanza Perdrix, it is important to go beyond mass and size to better understand and prevent the health impacts of particles based on their chemical properties.  Each molecule, each chemical species present in a particle has a different toxicity. “When they penetrate our lungs, fine particles break down and release these components,” explains Laurent Alleman. “Depending on their physicochemical properties, these exogenous agents will have a more or less serious aggressive effect on the cells that make up our respiratory system.”

Measuring particles’ oxidizing potential

This aggression mainly takes the form of oxidation chemical reactions in cells: this is oxidative stress. This effect induces deterioration of biological tissue and inflammation, which can lead to different pathological conditions, whether in the respiratory system — asthma, chronic obstructive pulmonary diseases — or throughout the body. Since the chemical components and molecules produced by these stressed cells enter the bloodstream, they also create oxidative stress elsewhere in the body. “That’s why fine particles are also responsible for cardiovascular diseases such as cardiac rhythm disorders,” says Esperanza Perdrix. When it becomes too severe and chronic, oxidative stress can have mutagenic effects by altering DNA and can promote cancer.

For researchers, the scientific challenge is therefore to better assess a fine particle’s ability to cause oxidative stress. At IMT Lille Douai, the approach is to measure this ability in test tubes by determining the resulting production of oxidizing molecules for a specific type of particle. “We don’t directly measure the oxidative stress produced at the cellular level, but rather the fine particle’s potential to cause this stress,” explains Laurent Alleman. As such, the method is less expensive and quicker than a study in a biological environment. Most importantly, “Unlike tests on biological cells, measuring particles’ oxidizing potential is quick and can be automated, while giving us a good enough indication of the oxidative stress that would be produced in the body,” says Esperanza Perdrix. A winning combination, which would make it possible to make oxidizing potential a reference base for the analysis and ongoing, large-scale prevention of the toxicity of fine particles.

To measure the toxicity of fine particles, researchers are finding alternatives to biological analysis.

 

This approach has already allowed the IMT Lille Douai team to measure the harmfulness of metals. They have found that copper and iron are the chemical elements with the highest oxidizing potential. “Iron reacts with the hydrogen peroxide in the body to produce what we call free radicals: highly reactive chemical species with short lifespans, but very strong oxidizing potential,” explains Laurent Alleman. If the iron provided by the fine particles is not counterbalanced by an antioxidant — such as vitamin C — the radicals formed can break molecular bonds and damage cells.

Researchers caution, however, that “measuring oxidizing potential is not a unified method; it’s still in the developmental stages.” It is based on the principle of bringing together the component whose oxidizing potential is to be assessed with an antioxidant, and then measuring the quantity or rate of antioxidant consumed. In order for oxidizing potential to become a reference method, it still has to gain wider acceptance within the scientific community, demonstrate its ability to accurately assess the biological oxidative stress produced in vivo, and be standardized.
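The principle can be illustrated with a short sketch: follow the antioxidant concentration over time in the presence of the particles and report its depletion rate, normalized by particle mass, as an oxidizing potential. The measurements and masses below are invented, and this is only a schematic version of such assays.

```python
# Illustration of the principle described above: mix the particles with an
# antioxidant, follow its concentration over time, and report the depletion
# rate as an "oxidizing potential". The time series below is invented.
import numpy as np

time_min = np.array([0, 10, 20, 30, 40])          # sampling times (min)
antioxidant_uM = np.array([100, 92, 85, 76, 69])  # antioxidant remaining (uM)

# Linear fit: the slope gives the consumption rate (uM per minute)
slope, intercept = np.polyfit(time_min, antioxidant_uM, 1)
consumption_rate = -slope

particle_mass_ug = 50.0  # mass of particles in the assay (hypothetical)
op_mass_normalized = consumption_rate / particle_mass_ug

print(f"Oxidizing potential: {op_mass_normalized:.4f} uM/min per ug of particles")
```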

So for now, the mass concentration of fine particles remains the preferred method. Nevertheless, a growing number of studies are being carried out with the aim of taking account of chemical composition and health aspects. This is reflected in the many disciplines involved in this research. “Toxicological issues bring together a wide variety of fields such as chemistry, physics, biology, medicine, bioinformatics and risk analysis, to name just a few,” says Esperanza Perdrix, who also cites communities other than those with scientific expertise. “This topic extends beyond our disciplinary fields and must also involve environmental groups, citizens, elected officials and others,” she adds. 

Research is ongoing at the international level as well, in particular through MISTRALS, a large-scale meta-program led by CNRS, launched in 2010 for a ten-year period. One of its programs, called ChArMEx, aims to study pollution phenomena in the Mediterranean basin. “Through this program, we’re developing international collaboration to improve methods for measuring oxidizing potential,” explains Laurent Alleman. “We plan to develop an automated tool for measuring oxidizing potential over the next few years, by working together with a number of other countries, especially those in the Mediterranean region such as Crete, Lebanon, Egypt, Turkey etc.”

 



A MOOC to learn all about air pollution

On October 8, IMT launched a MOOC dedicated to air quality, drawing on the expertise of IMT Lille Douai. It presents the main air pollutants and their origin, whether man-made or natural. The MOOC will also provide an overview of the health-related, environmental and economic impacts of air pollution.



What’s new in the atmosphere?

In conjunction with the 4th National Conference on Air Quality held in Montrouge on 9 and 10 October 2018, I’MTech sat down with François Mathé, a researcher in atmospheric sciences at IMT Lille Douai, to ask him five questions. He gave us a glimpse of the major changes ahead in terms of measuring and monitoring air pollutants. Between revising the ATMO index and technical challenges, he explains the role scientists play in what is one of today’s major public health and environmental challenges.

 

The ATMO index, which represents air quality in France with a number ranging from 1 (very good) to 10 (very bad), is going to be revised. What is the purpose of this change?

François Mathé: An index representing outdoor ambient air quality is meant to provide a daily report on the state of the atmosphere in a clear, easily accessible way for people who live in cities with over 100,000 residents. The ATMO index is based on measured concentrations of pollutants which are representative of their origins: ozone (O3), particulate matter (PM10), nitrogen dioxide (NO2), and sulfur dioxide (SO2). A sub-index is calculated daily for each of these chemical species, based on average pollution levels at specific stations — those which are representative of ambient pollution, or “background pollution”. The highest sub-index corresponds to the ATMO index. The higher the value, the lower the air quality. The problem is that this approach doesn’t take into account proximity phenomena such as vehicle or industrial emissions, or the cocktail effect — if the four pollutants all have a sub-index of 6, the ATMO index will be lower than if three of them have a sub-index of 1 and the fourth has a sub-index of 8. Yet the cocktail effect can have impacts on health, whether short or long-term. This is one of the reasons for the index revision planned in the near future, to better report on the state of the atmosphere, while updating the list of pollutants taken into consideration and making our national index consistent with those used by our European neighbors.
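As a schematic illustration of how such an index behaves, here is a small sketch in which the highest sub-index sets the overall value, which is exactly why a “cocktail” of four moderate sub-indices scores lower than a single high one. The concentration-to-sub-index scale below is a made-up placeholder, not the official French scale.

```python
# Sketch of the calculation described above: one sub-index per pollutant,
# the ATMO index being the highest of them. The concentration-to-sub-index
# conversion is a toy linear scale, not the official one.

def sub_index(concentration, scale_max):
    """Map a pollutant concentration onto a 1-10 sub-index (toy linear scale)."""
    return max(1, min(10, round(10 * concentration / scale_max)))

def atmo_index(concentrations_ug_m3):
    scales = {"O3": 240.0, "PM10": 80.0, "NO2": 400.0, "SO2": 500.0}
    subs = {p: sub_index(c, scales[p]) for p, c in concentrations_ug_m3.items()}
    return max(subs.values()), subs

# The "cocktail effect" blind spot: four moderate sub-indices give a lower
# index than one high sub-index, even if the mixture may be just as harmful.
print(atmo_index({"O3": 140, "PM10": 48, "NO2": 240, "SO2": 300}))  # all around 6
print(atmo_index({"O3": 24, "PM10": 8, "NO2": 40, "SO2": 400}))     # one at 8
```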

Why does the list of pollutants considered have to be updated?

FM: Sulfur dioxide (SO2) and carbon monoxide (CO) are chemical compounds that were in the spotlight for a long time. Although their toxicity is a real issue, these pollutants are now associated with very specific, clearly-defined situations, such as industrial sites or underground parking lots. At the national level, it is no longer appropriate to take them into account. Conversely, new species are emerging which are worth highlighting on a national scale. In June, the ANSES published a notice on non-regulated air pollutants that should be taken into consideration in monitoring air quality. The list includes pollutants such as 1,3-butadiene, ultrafine particles (UFP), soot carbon, and others. In France, we also have a very specific problem: plant protection products, i.e. pesticides. The ANSES has established a list of over 90 of these types of products which are currently being assessed through a year-long project covering the entire French territory. As a result, all of these ‘new pollutants’ require us to re-examine how air quality is presented to citizens. In addition, we could mention pollens, which are often ‘marginalized’ when it comes to monitoring air quality in France.

Against this backdrop of changing the way air quality is assessed and represented, what role do researchers play?

FM: Behind these notions of measuring, monitoring and representing air quality there are regulations, at both the national and European levels. And regulations imply technical standards and guidelines for the organizational aspects. That’s where the Central Laboratory for Air Quality Monitoring (LCSQA) comes in. It serves as the central scientific reference body, bringing together IMT Lille Douai, Ineris and LNE (the national laboratory of metrology and testing). By pooling these different skills, it acts as a foundation of expertise. It is responsible for tasks such as validating the technical documents that establish the methodologies to be applied for measuring pollutants, setting requirements for the use of devices, verifying the technical compliance of the instruments themselves, and so on. For example, we conduct tests on sensors in the laboratory and in real conditions to assess their performance and make sure that they measure correctly compared with reference instruments.

Where do the regulations and guidelines you use in your work come from?

FM: Mainly from European directives. The first regulations date back to the 1980s; the texts currently in force, which establish thresholds that must not be exceeded and the measuring techniques to be used, date from 2004, 2008 and 2015. The 2004 text specifically applies to the chemical composition of PM10, and in particular the concentration of certain heavy metals (arsenic, cadmium, nickel) and organic compounds (benzo[a]pyrene as a tracer of polycyclic aromatic hydrocarbons). All other regulated gaseous and particulate pollutants are covered by the 2008 directive, which was updated by the 2015 text. These regulations determine our actions, but as end users, we also play the opposite role: participating in the drafting and revision of standards at the European level. The LCSQA provides technical and scientific expertise on the application and evolution of these regulations. For example, we are currently working hard on the technical guidelines that will be used for measuring pesticides. We also play a role in verifying the technical compliance of common instruments as well as innovative ones used to improve real-time measurement, which is essential for obtaining information of sufficient quality to take appropriate action more quickly.

What is at stake in this challenge of reducing measurement time?

FM: Air quality is one of people’s biggest concerns. There is no point in finding out today that we breathed poor quality air last night; the damage has already been done. Quicker information enables us to take preventive action earlier, and therefore be more effective in helping populations to manage the risk of exposure. It also allows us to take action more quickly to address the causes: regulating traffic, working with industry, citizen initiatives, etc. So it’s a big challenge. To rise to this challenge, real-time measurement — provided it is of sufficient quality — is our main focus. In our current system, for a certain number of the pollutants involved, the methodology is based on using an instrument to collect a sample, sending it to a laboratory for analysis, and reporting the results. The idea is to make this measurement chain as short as possible through direct, on-site analysis, with results reported at the time samples are collected, to the extent possible. This is where our role in qualifying devices is meaningful. These new systems have to produce results that meet our needs and are reliable, ideally approaching the level of quality of the current system.

 

[divider style=”normal” top=”20″ bottom=”20″]

A MOOC to learn all about air pollution

On October 8, IMT launched a MOOC dedicated to air quality, drawing on the expertise of IMT Lille Douai. It presents the main air pollutants and their origin, whether man-made or natural. The MOOC will also provide an overview of the health-related, environmental and economic impacts of air pollution.

[divider style=”normal” top=”20″ bottom=”20″]

The contest for the worst air pollutant

Laurent Alleman, IMT Lille Douai – Institut Mines-Telecom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n its report published on June 28, 2018, the French Agency for Health Safety (ANSES) presented a list of 13 new priority air pollutants to monitor.

Several air pollutants that are harmful to human health are already regulated and closely monitored at the European level (in accordance with the directives from 2004 and 2008): NO2, NO, SO2, PM10, PM2.5, CO, benzene, ozone, benzo(a)pyrene, lead, arsenic, cadmium, nickel, gaseous mercury, benzo(a)anthracene, benzo(b)fluoranthene, benzo(j)fluoranthene, benzo(k)fluoranthene, indeno(1,2,3-cd)pyrene and dibenzo(a,h)anthracene.

While some pollutants, like ozone and PM10 and PM2.5 particles, are well known and often cited in the media, others remain far more obscure. It should also be noted that this list is still short compared with the sheer number of substances emitted into the atmosphere.

So, how were these 13 new pollutants identified by ANSES? What were the criteria? Let’s take a closer look.

The selection of candidates

Identifying new priority substances to monitor in the ambient air is a long but exciting process. It’s a little like choosing the right candidate in a beauty contest! First, independent judges and experts in the field must be chosen. Next, the rules for selecting the best candidates from among the competitors must be determined.

Over the past two years, the working group of experts developed a specific method for considering the physical and chemical diversity of the candidates present in ambient air.

To gather all the participants at this “beauty contest”, the experts first created a core list of chemical pollutants of interest that were not yet regulated. The experts did not include certain candidates, such as pesticides, pollen and mold, greenhouse gases and radioelements, because they were being assessed in other studies or were outside their scope of expertise.

This core list is based on information provided by France’s certified air quality monitoring associations (AASQA) and by French research laboratories such as the Laboratoire des Sciences du Climat et de l’Environnement (LSCE) and the Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA). It is also informed by consultation with experts from national and international organizations, such as the European Environment Agency (EEA) and agencies in Canada and the United States (US-EPA), as well as by inventories established by international bodies like the WHO.

Finally, this list was supplemented by an in-depth study of recent international and national scientific publications on what are considered “emerging” pollutants.

This final list included 557 candidates! Just imagine the stampede!

Ranking the finalists

The candidates are then divided into four categories, based on the data available on atmospheric measurements and their intrinsic danger.

Category 1 includes substances that present potential health risks. Categories 2a and 2b cover candidates for which more data must be acquired, from air measurements and from studies on health impacts. Non-priority substances, whose concentrations in ambient air and health effects do not reveal any health risk, are placed in category 3.
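Stated as a decision rule, the sorting looks roughly like the sketch below. This is only an illustration based on the description above; the article does not spell out exactly how categories 2a and 2b are distinguished, so they are grouped into a single "more data needed" bucket here.

```python
# Illustrative sketch of the four-way sorting described above (not ANSES's
# actual methodology). Categories 2a and 2b are grouped because the article
# does not detail how they differ.

def categorize(enough_data: bool, health_risk_identified: bool) -> str:
    """Assign a candidate pollutant to a category from the available evidence."""
    if not enough_data:
        return "2a/2b"  # more air measurements and/or health-impact studies needed
    if health_risk_identified:
        return "1"      # potential health risk: priority substance
    return "3"          # data reveal no health risk: non-priority

print(categorize(enough_data=True, health_risk_identified=True))    # 1
print(categorize(enough_data=False, health_risk_identified=False))  # 2a/2b
print(categorize(enough_data=True, health_risk_identified=False))   # 3
```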

Certain exceptional candidates, such as ultrafine particles (with diameters of less than 0.1 µm) and carbon soot, were reclassified due to their potential health impacts on the population.

Finally, the experts ranked the category 1 pollutants in order of priority to select the indisputable winner of this unusual beauty contest.

And the winner is…

The gas 1,3-butadiene ranked number one among the 13 new air pollutants to monitor, according to ANSES. It is followed by ultrafine particles and carbon soot, for which increased monitoring is recommended.

1,3-Butadiene is a toxic gas produced by several combustion sources, including exhaust emissions from motor vehicles, heating, and industrial activities (plastics and rubber). Several temporary measurement campaigns in France revealed that this pollutant frequently exceeded its toxicological reference value (TRV), a value that establishes a relationship between a dose and its effect.

Its top spot on the podium comes as no surprise: it had already won a trophy in the United Kingdom and Hungary, two countries that have reference values for its concentration in the air. In addition, the International Agency for Research on Cancer (IARC) classified 1,3-butadiene as a known carcinogen for humans as early as 2012.

As for the ten other pollutants on the ANSES list, increased monitoring is recommended. These ten pollutants, for which TRV exceedances have been observed in specific (especially industrial) contexts, are, in decreasing order of risk: manganese, hydrogen sulfide, acrylonitrile, 1,1,2-trichloroethane, copper, trichloroethylene, vanadium, cobalt, antimony and naphthalene.

This selection is a first step towards adding 1,3-butadiene to the list of substances currently regulated in France. If the French government forwards the proposal to the European Commission, 1,3-butadiene could be included in the ongoing revision of the 2008 directive on air quality monitoring by the end of 2019.

Since this classification method is adaptive, there is a good chance that new competitions will be organized in the coming years to identify other candidates.

Laurent Alleman, Associate Professor, IMT Lille Douai – Institut Mines-Télécom

The original version of this article was published on The Conversation.

 

Earth

Will the Earth stop rotating after August 1st?

By Natacha Gondran, researcher at Mines Saint-Étienne, and Aurélien Boutaud.
The original version of this article (in French) was published in The Conversation.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]t has become an annual summer tradition, much like France’s Music Festival or the Tour de France. Every August, right when French people are focused on enjoying their vacation, an alarming story begins to spread through the news: it’s Earth Overshoot Day!

From this fateful date through to the end of the year, humanity will be living at nature’s expense as it runs up an ecological debt. Just imagine vacationers on the beach or at their campsite discovering, through the magic of mobile networks, the news of this imminent breakdown.

Will the Earth cease to rotate after August 1st? The answer is… no. There is no need to panic (well, not quite yet). Once again this year, the Earth will continue to rotate even after Earth Overshoot Day has come and gone. In the meantime, let’s take a closer look at how this date is calculated and what it is worth from a scientific standpoint.

Is the ecological footprint a serious measure?

Earth Overshoot Day is calculated from the results of the “ecological footprint”, an indicator invented in the early 1990s by two researchers from the University of British Columbia in Vancouver. Mathis Wackernagel and William Rees sought to develop a synoptic tool that would measure the toll of human activity on the biosphere. Their idea was to estimate the land and sea area required to meet humanity’s needs.

More specifically, the ecological footprint measures two things: on the one hand, the biologically productive surface area required to produce certain renewable resources (food, textile fibers and other biomass); on the other, the surface area that should be available to sequester certain pollutants in the biosphere.

In the early 2000s the concept proved extremely successful, with a vast number of research articles published on the subject, which contributed to making the calculation of the ecological footprint more robust and detailed.

Today, based on hundreds of statistical data entries, the NGO Global Footprint Network estimates humanity’s ecological footprint at approximately 2.7 hectares per capita. However, this global average conceals huge disparities: while an American’s ecological footprint exceeds 8 hectares, that of an Afghan is less than 1 hectare.

Overconsumption of resources

It goes without saying: the Earth’s biologically productive surfaces are finite. This is what makes the comparison between humanity’s ecological footprint and the planet’s biocapacity so relevant. This biocapacity represents approximately 12 billion hectares (of forests, cultivated fields, pasture land and fishing areas), or an average of 1.7 hectares per capita in 2012.

The comparison between ecological footprint and biocapacity therefore leads to this undeniable fact: each year, humanity consumes more services from the biosphere than it can regenerate. In fact, it would take about one and a half planets to sustainably provide for humanity’s needs. In other words, by the beginning of August, humanity has already consumed the equivalent of a full year of the planet’s biocapacity.

These calculations are what led to the famous Earth Overshoot Day.
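As a back-of-the-envelope illustration of that calculation, here is a short Python sketch using the rounded per-capita figures quoted above. The Global Footprint Network works from far more detailed national accounts, which is why the official 2018 date, August 1, falls a little earlier than this simplified estimate.

```python
from datetime import date, timedelta

# Rounded per-capita figures quoted in the article (global averages).
biocapacity_per_capita = 1.7  # hectares of productive surface available per person
footprint_per_capita = 2.7    # hectares effectively consumed per person

# How many planets would be needed to sustain current consumption.
planets_needed = footprint_per_capita / biocapacity_per_capita

# Overshoot Day: the point in the year when one full year of biocapacity is used up.
fraction_of_year = biocapacity_per_capita / footprint_per_capita
overshoot_day = date(2018, 1, 1) + timedelta(days=round(365 * fraction_of_year) - 1)

print(f"About {planets_needed:.1f} planets needed")                  # ~1.6, roughly one and a half
print(f"Overshoot day with these rounded figures: {overshoot_day}")  # mid-August 2018
```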

Legitimate criticism

Of course, the ecological footprint is not immune to criticism. One criticism is that it focuses its analysis on the living portion of natural capital only and fails to include numerous issues, such as the pressure on mineral resources and chemical and nuclear pollution.

The accounting system for the ecological footprint is also very anthropocentric: biocapacity is estimated based on the principle that natural surfaces are at humanity’s complete disposal, ignoring the threats that human exploitation of ecosystems can pose for biodiversity.

Yet most criticism is aimed at the way the ecological footprint of fossil fuels is calculated. Those who designed the ecological footprint based the concept on the observation that fossil fuels are a sort of “canned” photosynthetic energy, since they result from the transformation of organic matter that decomposed millions of years ago. Burning this matter therefore amounts to transferring carbon of organic origin into the atmosphere. In theory, this carbon could be sequestered in the biosphere, if only the biological carbon sinks were large enough.

What the ecological footprint measures is therefore a “phantom surface”: the area of biosphere that would be required to sequester the carbon accumulating in the atmosphere and driving the climate change we are experiencing. This methodological device makes it possible to convert tons of CO₂ into “sequestration surfaces”, which can then be added to the “production surfaces”.

While this is a clever principle, it poses two problems: first, almost the entire deficit observed by the ecological footprint is linked to the combustion of fossil fuels; and second, the choice of the coefficient converting tons of CO₂ into sequestration surfaces is questionable, since different hypotheses produce significantly different results.
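To see why the second problem matters, here is an illustrative Python sketch. The sequestration rates and the global emissions figure below are placeholder orders of magnitude, not the Global Footprint Network’s actual coefficients; they are only meant to show how strongly the computed “phantom surface” (and therefore the deficit, compared with the roughly 12 billion hectares of biocapacity mentioned above) depends on the chosen coefficient.

```python
# Sensitivity of the "phantom surface" to the assumed CO2 sequestration rate.
# Both the emissions figure and the rates are illustrative placeholders.

annual_fossil_co2_t = 36e9  # rough order of magnitude of global fossil CO2 emissions, tons/year

for rate_t_per_ha in (1.0, 2.5, 5.0):  # assumed tons of CO2 sequestered per hectare per year
    phantom_surface_bha = annual_fossil_co2_t / rate_t_per_ha / 1e9
    print(f"{rate_t_per_ha} tCO2/ha/yr -> {phantom_surface_bha:.1f} billion ha of sequestration surface")
```

Depending on the assumed rate, the surface required to absorb the same emissions varies by a factor of several, which shifts the calculated deficit, and hence the date of Earth Overshoot Day, considerably.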

Is the ecological deficit underestimated?

Most of this criticism was anticipated by the designers of the ecological footprint.

Based on the principle that “everything simple is false, everything complex is unusable” (Paul Valéry), they opted for methodological choices that produce aggregated results that can be understood by the average citizen. It should be noted, however, that these choices were mostly made to ensure that the ecological deficit was not overestimated. A more rigorous or more exhaustive calculation would therefore increase the observed deficit, and thus lead to an even earlier “celebration” of Earth Overshoot Day.

Finally, it is worth noting that this observation of an ecological overshoot is now widely confirmed by another scientific community which has, for the past ten years, worked in more detail on the “planetary boundaries” concept.

This work revealed nine areas of concern which represent ecological thresholds beyond which the conditions of life on Earth could no longer be guaranteed, since we would be leaving the stable state that has characterized the planet’s ecosystem for 10,000 years.

For three of these issues, the limits appear to have already been exceeded: the species extinction rate, the biogeochemical cycle of nitrogen and that of phosphorus. We are also dangerously close to the thresholds for climate change and land use. In addition, we cannot rule out the possibility of new causes for concern arising in the future.

Earth Overshoot Day is therefore worthy of our attention, since it reminds us of this inescapable reality: we are exceeding several of our planet’s ecological limits. Humanity must take this reality more seriously. Otherwise, the Earth might someday continue to rotate… but without us.

 

[divider style=”normal” top=”20″ bottom=”20″]

Aurélien Boutaud and Natacha Gondran co-authored « L’empreinte écologique » (éditions La Découverte, 2018).