
Sand, an increasingly scarce resource that needs to be replaced

Humans are big consumers of sand, to the extent that this now valuable resource is becoming increasingly scarce. Being in such high demand, it is extracted in conditions that aren’t always respectful of the environment. With the increasing scarcity of sand and the sometimes devastating consequences of mining at beaches, it is becoming crucial to find alternatives. Isabelle Cojan and Nor-Edine Abriak, researchers in geoscience and geomaterials at Mines ParisTech and IMT Lille Douai respectively, explain the stakes involved with regard to this resource.

 

“After air and water, sand is the third most essential resource for human beings,” explains Isabelle Cojan, researcher in geoscience at Mines ParisTech. Sand is, of course, used indirectly by most of us, since the major part of this resource is consumed by the construction industry. Between 15 and 20 billion tons of sand are used every year in the construction of buildings and roads all over the world and in land reclamation. In comparison, the amount used in other applications, such as micro-computing or glass, detergent and cosmetics manufacturing, is around 0.2 billion tons. On an individual scale, global sand consumption stands at 18 kg per person per day.

At first glance, our planet could easily provide the enormous amount of sand required by humanity. A quarter of the Earth’s continental surface is covered by huge deserts, nearly 20% of which is occupied by dune fields. The problem is that this aeolian sand is unusable for construction or reclamation: “The grains of desert sand are too smooth, too fine, and cannot bind with the cements to make concrete,” explains Isabelle Cojan. For other reasons, including economic ones, marine sands can only be exploited at shallow depths, as is currently the case in the North Sea. The volume of the Earth’s available sand is significantly smaller once these areas are removed from the equation.

Deserts are poor suppliers of construction sand. The United Arab Emirates is therefore paradoxically forced to import sand to support the development of its capital, Dubai, which is located just a stone’s throw from the immense desert of the Arabian Peninsula.

 


So where do we find this precious sediment? Silica-rich sands, such as those found under the Fontainebleau forest, are reserved for the production of glass and silicon. On the other hand, the exploitation of fossil deposits is limited by the anthropization of certain regions, which makes it difficult to open quarries. For the construction industry, this leaves the sands of rivers and coastal deposits fed partly by river flow. The grains of alluvium found on beaches are angular enough to cling to cements. Although the quantities of sand on beaches may seem huge to the human eye, they are in fact limited: most of the sediments in the alluvial plains of our rivers and coastlines are inherited from the large-scale erosion that occurred during the Quaternary glaciations. Today, with building developments along riverbanks and less erosion, Isabelle Cojan points out that “we consume twice as much sand as the amount the rivers bring to the coastal regions.”

Dams not only retain water, they also prevent sediment from descending downstream. The researcher at Mines ParisTech points to the example of the watercourse of the Durance, a tributary of the Rhône that she studied: “Before development, it deposited 3 million tons of sediment in the Mediterranean every year. Today, this quantity is between 0.1 and 0.5 million tons.” Most of the sediment provided by the erosion of the watershed is trapped by infrastructure, and settles to the bottom of artificial water reservoirs.

The sand rush and its environmental impact

As a result of the lower release of sediment into the seas and oceans, some beaches are naturally shrinking, especially where the extraction industry takes sand from the coasts. Some countries are thus in a critical situation where entire beaches are disappearing. This is the case in Togo, for example, and several other African countries where sand is extracted without too many legislative constraints. “The Comoros derives a lot of its income from tourism, and is extracting more and more sand from beaches to support its economic development because there is no significant reserve of sand elsewhere on the islands,” explains Isabelle Cojan. The extraction of large volumes of sand leads to coastal erosion and land being lost to the sea. The situation is similar in other parts of the world. “Singapore has significantly increased its surface area by developing polders in the sea,” the researcher continues. “Nearby islands at the surface of the water providing an easy supply of sand disappeared before the regulations of the countries concerned prohibited such extraction.”

In Europe, practices are more closely controlled. In France, in particular, extraction from riverbeds – a practice which dates back thousands of years – was carried out until the 1970s. The poorly regulated exploitation of alluvial deposits changed river profiles, scouring bridge anchorages, which became fragile and sometimes even collapsed. Extraction from rivers has since been forbidden, and only deposits on alluvial plains that do not directly impact riverbeds may be used for extraction. On the coast, exploitation is regulated by the French Environmental Code and Mining Code, which prohibit any extraction that could directly or indirectly compromise the integrity of a beach.

However, despite this legislation, indirect adverse consequences are difficult to prevent. The medium-term impact of sand extraction is complex for researchers to model. “In coastal areas, we have to take account of a set of complex processes linked to tides, storms, coastal drift, vegetation cover, tourism and port facilities,” lists Isabelle Cojan. In some cases, sustainable extraction entails no danger. In other situations, a slight change in the profile of the beach can have serious consequences. “This can lead to a significant retreat of the coastline during storms and flooding of the hinterland by marine waters, especially during equinox storms,” the researcher continues. On coastlines undergoing natural erosion with low-relief hinterlands, sand extraction may, over time, lead to an irreversible destabilization of the coast.

The implications of the disappearance of beaches are not only aesthetic. Beaches, and the dunes that very often lie along their edge, constitute a natural sedimentary barrier against the onslaught of the waves. They are the primary and most important source of protection against erosion. Beaches limit, for example, the retreat of chalk cliffs. On coasts with low landforms, beach-dune systems form a barrier against the entry of the sea into the land. When water breaches this natural sediment barrier, fields can become salinized, drastically changing farming conditions and ruining arable land.

What are the alternatives to sand?

Faced with the scarcity of this resource, new avenues are being explored to find an alternative to sand. Recycled concrete, glass or metallurgical waste can be used to replace sediment in the composition of concrete. However, building materials produced in this way encounter performance problems: “They age very quickly and can release pollutants over time,” explains Isabelle Cojan. Another limitation is that these alternative options are currently not sufficient in volume. France produces 370 million tons of sand annually, whereas recycling only produces 20 million tons.

Also read on I’MTech: Recycling concrete and sediment to create new materials

Major efforts to structure a dedicated recycling sector would be necessary, with all the economic and political debates at national and local levels that this implies. Since manufacturers want high-performance products, this cannot happen until research finds a way to limit the aging and pollution of materials made from recycled materials. While recycling should not be ruled out, it is clear that this solution is only feasible over a relatively long time scale.

In the shorter term, another alternative could come from the use of other sand deposits currently considered as waste. Through pioneering work in geomaterials at IMT Lille Douai, Nor-Edine Abriak has demonstrated that it is possible to exploit dredging sand. This sediment comes from the bottom of rivers and streams, and is extracted for water course development. Dredging is mainly used to allow waterway navigation and large quantities of sand are extracted every year from ports and river mouths. “When I started my research on the subject a few years ago, the port of Dunkirk was very congested with sediments,” recalls the researcher. He joined forces with the local authorities to set up a research chair called Ecosed in order to find a way to use this sand.

For the construction industry, the major drawback of dredging sediments is their high content in clay particles. “Clay is a nightmare for people working with concrete,” warns Nor-Edine Abriak. “The particles can swell, increasing the setting time of the cement in the concrete and potentially diminishing the performance of the final material.” It is customary to use a sieve to separate the clay, which requires equipment and dedicated logistics, and therefore additional costs. For this reason, these sediments are rejected by industrial players. “The only way to be competitive with these sediments is to be able to use them as they are, without a separation process,” admits the leader of the Ecosed Chair. The research at IMT Lille Douai has led to a more convenient and rapid treatment process using lime that eliminates the need for sieving. This process also breaks down the organic matter in the sediments and improves the setting of the cement, making it possible to rapidly use the sand extracted from the bottom of the port of Dunkirk.

The Ecosed Chair has also provided a way to overcome another problem, that of the salinity of these shallow sands in contact with seawater. Salt corrodes materials, therefore shortening the useful life of concrete. To remove it, the researchers used a simple water wash. “We showed that the dredging sand could simply be stored in large lagoons and the salt would be drained by the rain,” explains Nor-Edine Abriak. The solution is a simple one, provided there is enough space nearby to build the lagoons, which is the case in the area around Dunkirk.

With these results, the team of scientists demonstrated that dredging sediments extracted from the bottom of ports could be used as an alternative to beach sediments, instead of being considered as waste. “We were the first in the world to prove this was possible,” Nor-Edine Abriak says proudly. This solution does not provide a full replacement, as dredged sediments have different mechanical properties that must be taken into account in order not to affect the durability of the materials. The first scaling tests showed that dredging sands could be used in proportions of up to 70% in the composition of materials for roads and 12% for buildings, with no loss of quality for the end material.

Because almost all ports have to carry out dredging, this research offers a major opportunity to reduce sand extraction from beaches and alluvium. In early March, the Ecosed team went to Morocco to launch a second chair on waste recovery and dredging sands in particular: “the same situation as in Dunkirk can be seen in Tangier and Agadir,” explains the researcher, highlighting the global nature of the problem. In June 2019, the Ecosed Chair in France became Ecosed Digital 4.0, going from a budget of €2 million to €24 million with the aim of structuring a specific sector for the recovery of dredged sediments in France. While this work alone will not fully solve the problem of sand scarcity, it will nevertheless create an impetus to reduce sand extraction in areas where such mining poses a threat. It must also be ensured that this type of initiative is scaled up, both nationally and internationally.


Light, a possible solution for a sustainable AI

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)

[divider style=”normal” top=”20″ bottom=”20″]

We are currently witnessing a rapidly growing adoption of artificial intelligence (AI) in our everyday lives, which has the potential to translate into a variety of societal changes, including improvements to the economy, better living conditions, easier access to education, well-being, and entertainment. This much-anticipated future, however, is tainted by issues related to privacy, explainability and accountability, to name a few, which threaten the smooth adoption of AI and are at the center of various debates in the media.

A perhaps more worrying aspect is related to the fact that current AI technologies are completely unsustainable, and unless we act quickly, this will become the major obstacle to the wide adoption of artificial intelligence in society.

AI and Bayesian machine learning

But before diving into the issues of sustainability of AI, what is AI? AI aims at building artificial agents capable of sensing and reasoning about their environment, and ultimately learning by interacting with it. Machine Learning (ML) is an essential component of AI, which makes it possible to establish correlations and causal relationships among variables of interest from data and prior knowledge of the processes characterizing the agent’s environment.

For example, in life sciences, ML can help determine the relationship between grey matter volume and the progression of Alzheimer’s disease, whereas in environmental sciences it can be useful to estimate the effect of CO2 emissions on climate. One key aspect of some ML techniques, in particular Bayesian ML, is that they can do this while accounting for the uncertainty due to our lack of knowledge of the system, or to the fact that only a finite amount of data is available.

Such uncertainty is of fundamental importance in decision making when the cost associated with different outcomes is unbalanced. A couple of examples of domains where AI can be of tremendous help include a variety of medical scenarios (e.g., diagnosis, prognosis, personalised treatment), environmental sciences (e.g., climate, earthquake/tsunami), and policy making (e.g., traffic, tackling social inequality).

Unsustainable AI

Recent spectacular advances in ML have contributed to an unprecedented boost of interest in AI, which has triggered huge amounts of private funding into the domain (Google, Facebook, Amazon, Microsoft, OpenAI). All this is pushing the research in the field, but it is somehow disregarding its impact on the environment. The energy consumption of current computing devices is growing at an uncontrolled pace. It is estimated that within the next ten years the power consumption of computing devices will reach 60% of the total amount of energy that will be produced, and this will become completely unsustainable by 2040.

Recent studies show that the ICT industry today generates approximately 2% of global CO₂ emissions, comparable to the worldwide aviation industry, but the sharp growth curve forecast for ICT-based emissions is truly alarming and far outpaces aviation. Because ML and AI are fast-growing ICT disciplines, this is a worrying prospect. Recent studies also show that training a well-known ML model, called an auto-encoder, can leave a carbon footprint as large as that of five cars over their lifetimes.

If, in order to create better living conditions and improve our estimation of risk, we are impacting the environment to such a wide extent, we are bound to fail. What can we do to radically change this?

Let there be light

Transistor-based solutions to this problem are starting to appear. Google developed the Tensor Processing Unit (TPU) and made it available in 2018. TPUs offer much lower power consumption than GPUs and CPUs per unit of computation. But can we break away from transistor-based technology to compute with less power, and perhaps faster? The answer is yes! In the last couple of years, there have been attempts to exploit light for fast and low-power computation. Such solutions are somewhat rigid in their hardware design and are suited to specific ML models, e.g., neural networks.

Interestingly, France is at the forefront in this, with hardware development from private funding and national funding for research to make this revolution a concrete possibility. The French company LightOn has recently developed a novel optics-based device, which they named Optical Processing Unit (OPU).

“Optical computing leading the AI scale-up”, Igor Carron, CEO, LightOn (CognitionX video, 2018).

 

In practice, OPUs perform a specific operation: a linear transformation of input vectors followed by a nonlinear transformation. Interestingly, this is done in hardware by exploiting the scattering properties of light, so that in practice these computations happen at the speed of light and with low power consumption. Moreover, it is possible to handle very large matrices (on the order of millions of rows and columns), which would be challenging with CPUs and GPUs. Due to the scattering of light, this linear transformation is equivalent to a random projection, i.e., the transformation of the input data by a matrix of random numbers whose distribution can be characterized. Are random projections of any use? Surprisingly, yes! A proof-of-concept that this can be useful to scale up computations for some ML models (kernel machines, an alternative to neural networks) has been reported. Other ML models can also leverage random projections for prediction or change-point detection in time series.
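
As a rough illustration of the principle only (in a real OPU the projection is performed by physical light scattering, not stored in memory), here is a minimal NumPy sketch of a random projection followed by a nonlinearity; the complex Gaussian matrix and the squared-modulus nonlinearity are modeling assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_projections = 1000, 2000

# Fixed random matrix: in the optical device this is realized physically by
# light scattering, so it never has to be stored in memory. The complex
# Gaussian entries are an assumption of this sketch.
R = (rng.standard_normal((n_projections, n_features))
     + 1j * rng.standard_normal((n_projections, n_features))) / np.sqrt(2)

def simulated_opu(x):
    """Random linear projection followed by a nonlinear transformation."""
    return np.abs(R @ x) ** 2  # a detector measures light intensity

x = rng.standard_normal(n_features)  # an input vector
features = simulated_opu(x)          # random features for, e.g., kernel machines
print(features.shape)                # (2000,)
```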

We believe this is a remarkable direction for making modern ML scalable and sustainable. The biggest challenge for the future, however, is how to rethink the design and implementation of Bayesian ML models so as to exploit the computations that OPUs offer. We are only now starting to develop the methodology needed to take full advantage of this hardware for Bayesian ML. I’ve recently been awarded a French fellowship to make this happen.

It’s fascinating how light and randomness are not only pervasive in nature, they’re also mathematically useful for performing computations that can solve real problems.

[divider style=”normal” top=”20″ bottom=”20″]

Created in 2007 to help accelerate and share scientific knowledge on key societal issues, the Axa Research Fund has been supporting nearly 600 projects around the world, conducted by researchers from 54 countries. To learn more, visit the site of the Axa Research Fund.

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Serious games: when games invade the classroom

Over the last few years, a new teaching method linked to the invasion of digital technology in our daily lives has begun shaking up traditional learning methods. The primary purpose of these serious games is not entertainment. This developing sector does not seek to replace existing educational tools, but rather to supplement them, or at least earn its place in the teaching arsenal. Imed Boughzala, a researcher in management at Institut Mines-Télécom Business School, offers a closer look at this phenomenon.

 

Video games are all the rage. According to SELL, a French organization promoting the interests of video game developers, this market was estimated at nearly €5 billion in 2018 and is steadily growing, with more and more people playing and consuming video games. In fact, the video game industry is now doing better than the book market. This clearly creates an opportunity for teachers to take advantage of this gaming culture and break away from traditional learning methods.

Discover the history of Ancient Egypt with Assassin’s Creed, use the popularity of a game like Fortnite to raise awareness about climate change or develop strategy skills with Civilization or Warcraft. While some teaching methods in France are beginning to adopt these games, research in this area remains limited. It is therefore difficult to assess the effectiveness of these new spaces for informal learning. Imed Boughzala first embarked on this adventure nearly 10 years ago:

“In 2008, while traveling in the United States as a guest professor with the Management Department at the University of Arkansas, I had the opportunity to create a distance learning course on information systems on a platform called Second Life, a platform that was ahead of its time and still exists. A few months later, the university campus had to close due to an avian flu outbreak. We therefore began to focus on implementing a completely virtualized training program. At the time, I designed a serious game for students stuck at home.”

Back in France after this experience, he continued his research on collaborative games. He led his research team, SMART² (Smart Business Information Systems), on a mission to pursue the digital transformation of organizations. “We began imagining how educational tools could be used to motivate students more, and sought a method for getting their attention. We started with the observation that a wide gap currently exists between the digital culture of young people and the university culture,” Imed Boughzala explains. In addition, when students play a role, many motivational factors come into play: moving from one level to the next, receiving awards, making a mistake and starting over immediately, testing and learning.

Playing for the sake of learning

But what do we really mean by a serious game? These new digital practices cover several key concepts. A serious game is a video game created for educational or practical purposes. Serious gaming is a broader concept that refers to the way certain games can be used as serious tools. Finally, gamification refers to adding a fun aspect to a serious subject.

For Imed Boughzala, it all started with a very practical situation. “At Institut Mines-Télécom Business School, 200 management students were enrolled in our program. Capturing their attention was very complicated when it came to very technical topics. The atmosphere in class and the exams were not always great. So why not have them play a game? By sheer coincidence, one day we came across a game by IBM. It was INNOV8, which aims to help future entrepreneurs develop certain computer and business skills,” the researcher explains.

Virtual worlds and serious games therefore helped the students tackle decision-making processes that are very real. “It was an immediate success, which led us to create a new, more customized scenario, and teach them how to create data patterns. Instead of doing exercises, this allowed them to play as much as necessary to understand how the tool behind the game is implemented. We therefore tried to take into account the technical aspects: how the game is played, how it is used and the specific context, that of Millennials,” the researcher explains.

Innovating through serious games

Does digital technology truly transform our relationship with knowledge? Is a good serious game worth more than a long speech? For Imed Boughzala, there’s no doubt about it. A fun game can simulate a real professional environment. Students become active participants in their learning as they are confronted with a problem, a dilemma they must resolve. “This is an important thing for a generation that quickly jumps from one thing to the next. We can try to fight against this reality, but it is quite clear: We can no longer teach the same way. We must add variety to our teaching outlines and add games to give everyone a breath of fresh air. That’s the true benefit.”

While it has now become necessary to use entertainment to reach training objectives, Imed Boughzala sought to link the development of these teachings to his research. He did this by focusing on the effectiveness and assessment of serious games in training programs. This is a complex subject because “we must distinguish between the performance perceived due to the format, content and presentation as a fun game, and the measurable assessment, the real performance. In other words, the knowledge related to professional activities that has actually been gained.”

The researcher is already convinced by the results of the gamification of certain educational processes, especially for learning complex procedures, in many different areas: management techniques, finance, city administration, sustainable development, healthcare and medicine. Immersive and interactive serious games also test students’ collective intelligence. One example is the Foldit project, an experimental video game on protein folding created in 2008. Whereas scientists had spent 10 years searching for the three-dimensional structure of a protein of an AIDS-causing virus in monkeys, the “players” were able to find a solution in three weeks, leading to the development of new antiretroviral drugs.

These practical cases can now be added to scientific databases made available to the scientific community. The institutional community is also beginning to recognize these realities, with the French Foundation for Management Education (FNEGE) creating a certification board to assess these new digital tools. This board will assess the educational added value of these tools, in other words, their ability to meet defined learning goals. Solving puzzles, creating, experiencing, participating: serious games offer new ways of learning and highlight the importance of variety. Since video games can motivate users to become intensely involved for unprecedented periods of time, their educational counterparts are entirely appropriate for training purposes.

Article written for I’MTech by Anne-Sophie Boutaud


Fighting fire: from ancient Egypt to Notre-Dame de Paris

Article written in partnership with The Conversation France.
By Rodolphe Sonnier, IMT Mines Alès.
This article was co-authored by Clément Lacoste (IMT Mines Alès), Laurent Ferry (IMT Mines Alès) and Henri Vahabi (Université de Lorraine).

[divider style=”normal” top=”20″ bottom=”20″]

The discovery of fire is often cited as the most important discovery in the history of mankind, given its major impact on the development of the Homo genus. By reducing the amount of energy required to digest food, cooking led to an increase in brain size. Fire seems to have been mastered approximately 400,000 years ago, although evidence of its use much earlier has been found. However, with urbanization, fire has also become a serious problem when it spreads uncontrollably. Examples include the great fire of Rome in the year 64 AD or the recent fire at Notre-Dame de Paris Cathedral.

What is fire?

A fire requires a combination of three elements: a fuel source, an oxidizer and a heat source. This combination of elements is called the fire triangle. These elements interact through a complex process involving physical phenomena, such as heat transfer, and chemical phenomena, such as pyrolysis of the fuel source and combustion of the pyrolysis products.

Technically, a distinction is made between reaction to fire and fire resistance. Reaction to fire involves the combustible materials, which are likely to release heat when they decompose as a result of the temperature and in the presence of an oxidizer (most often the oxygen present in the air). Fire resistance considers an element’s ability to maintain its load-bearing capacity, thermal insulation and smoke and gas tightness properties during a fire. Since wood is a combustible material used as a structural element in buildings, it is considered in light of both of these aspects, which rely on specific standards and a variety of tests.

When it comes to fighting fire, there are two strategies which are not mutually exclusive. The first calls for using what are referred to as active systems in the event of a fire: extinguishers, smoke detectors or automatic sprinkler systems. The second consists in using materials that will contribute as little as possible to the propagation of the fire.

Fireproofing

Since many materials, including most plastics and wood, are naturally highly flammable, additives called flame retardants must be incorporated within or on the surface of the flammable material. These flame retardants make it possible to modify the material’s behavior by disrupting the fire triangle.

Their effects are mainly to delay the appearance of flames, slow down flame propagation speed, reduce the heat released and the power of the fire, and limit the opacity and toxicity of the smoke.  All these effects are assessed through standardized reaction to fire tests. They result in classifications that determine the potential use of a material for a given application according to regulations. There is no universal flame retardant. A fireproofing system must be tailored to the material it is intended to protect, in particular by taking into consideration its decomposition process. Furthermore, the choice of a flame retardant is also informed by the process used to manufacture the material and must not have a significant effect on its intended functional properties.

Archeologists place the beginnings of fireproofing in antiquity. Around 400 BC, the Egyptians used minerals to make certain fabrics like cotton or linen fire-resistant. Later, during the siege of Piraeus (23 BC), alum solutions were used to make the wooden ramparts fire-resistant. Yet it was not until 18 June 1735 that the Englishman Obadiah Wyld filed the first patent, patent number 551, for a cotton treatment. In the 19th century, the king of France, Louis XVIII, requested that a solution be found to prevent fires in Parisian theaters, which were lit with candles. Joseph Louis Gay-Lussac filed a patent for the use of a mixture of ammonium phosphate, ammonium chloride and borax to fireproof theater curtains.

Flame retardants

There are several families of flame retardants, which are based on different chemical elements and work in various ways. Historically, halogenated molecules containing chlorine or bromine have been widely used since they are effective even in small quantities. These molecules act by disrupting the combustion reactions that take place within flames, which makes it easier to extinguish them and limits the amount of energy released. This is referred to as flame inhibition. However, the toxic nature of certain halogen compounds has led to a ban on their use. Since it is impossible during recycling to easily distinguish between authorized and prohibited brominated molecules, it is no longer possible to recycle plastics treated with these flame retardants. Moreover, these molecules lead to the formation of opaque, corrosive smoke in the event of a fire. For all these reasons, this family of fire retardants has increasingly come under scrutiny.

It has largely been replaced by phosphorous flame retardants. Given the wide variety of such retardants, they are able to act in a number of ways. But the main mode of action remains facilitating the formation of a residual layer on the surface of the combustible material, protecting the healthy part of the material. The strategy consists in disrupting pyrolysis reactions (decomposition of the material due to heat) and facilitating the formation of a thermally stable, carbon-rich residue called “char.” Particularly effective systems are called intumescent, because the char forms an expanded, insulating, highly protective layer. This type of intumescent system is used in protective coatings for metallic components or wood.

We can also mention metallic hydroxides, which are inexpensive but proportionally less effective, meaning they must be incorporated at higher levels (up to 65% by mass in outer sheathing for wires) to produce a significant effect. Under the effect of temperature, these particles release water in the form of vapor through endothermic decomposition, thereby helping to cool the material and dilute the fuel in the flame.

There are also other chemistries, based for example on nitrogen (melamine), boron (zinc borate) or tin (hydroxystannate). Nanotechnologies have also been used in the field of fireproofing for the past fifteen years. Nanoparticles such as lamellar clays or carbon nanotubes improve the insulating properties of the char formed, even at low levels. But on their own, they are insufficient to provide overall protection of the material.

And wood?

In general, materials of organic origin (derived from biological organisms) such as oil, wood and coal all have a composition rich in carbon and hydrogen atoms, which are liable to be oxidized. They are therefore combustible. Wood is a material with a complex structure and an elemental chemical composition made up of carbon (50%), oxygen (44%) and a small amount of hydrogen (6%).

Wood is a low-density material and possesses a natural ability to char, meaning a protective layer of char is formed between healthy wood and flames. When wood is burned, it first loses water, and becomes completely dry at 120 °C. Then, its structure gradually decomposes as the temperature rises. Its components remain relatively stable up to 250 °C, the temperature at which wood starts giving off smoke. At 320 °C, there is enough gas to ignite wood. Pyrolysis takes place mainly up to 500 °C,  after which point only charcoal (char) remains, which can slowly decompose through oxidation. While the char layer slows down the pyrolysis of the underlying healthy wood, its mechanical strength is negligible. As pyrolysis continues, the useful section of a wooden structural element is therefore reduced along with its load-bearing capacity.

The flame retardants used to fireproof wood belong to the families listed above (phosphorous, boron, nitrogen, metallic hydroxides). However, unlike with plastics, it is not possible to incorporate these additives when the material is manufactured. Fireproofing therefore takes one of two forms: applying a surface coating (paint, varnish) or impregnating the core of the wood, meaning the hollow part of wood cells, called the lumen, through an autoclave process. The process involves filling the lumens by first degassing them under vacuum and then forcing the flame retardant into the wood under high pressure. This more complex solution prevents the material’s fire-resistant properties from being diminished by surface defects. When a coating deteriorates, it can no longer play its role as a flame retardant, leaving the wood without protection in the event of a fire.

[divider style=”dotted” top=”20″ bottom=”20″]


Rodolphe Sonnier, Mines Alès – Institut Mines-Télécom

The original version of this article (in French) was published on The Conversation and republished under a Creative Commons license. Read the original article.


Far from fantasy: the AI technologies which really affect us

Editorial

“Artificial Intelligence”. It’s hard to define a technology which encompasses such a large variety of tools and techniques (centralized or decentralized approaches, supervised or unsupervised learning, ontologies, etc.), with ramifications in each of these categories, ranging from neural networks to autonomous agents. The scope of AI is both broad and rich. It would therefore be a shame to allow powerful economic players to sum it up as a simple household gadget that may well be artificial, but is of questionable intelligence.

To delve into the complexity of AI is to understand the mechanisms that underlie certain products and services that we use. It also endows us with the intellectual weapons to confront the alarmist visions of a future of humanity against machines, and to remain prudent in the face of over-enthusiastic technological solutionism.

It is with this in mind that we’re publishing this dossier to go with the IMT Scientific Symposium on artificial intelligence, held on April 4. It does not seek to draw an exhaustive portrait of AI, which is still impossible. Its aim is rather to make the reader aware of the applications of AI, and the ways in which it affects us directly, as citizens and consumers.

It therefore presents four examples of research work into smart homes, the customer journey in supermarkets, flood predictions, and synchronization with machines. These concrete insights show what artificial intelligence, with all its opportunities and limits, really represents.




How can AI help better forecast high and low water levels?

Predicting the level of water in rivers or streams can prove to be invaluable in areas with a high risk of flooding or droughts. While traditional models are based primarily on hypotheses about processes, another approach is emerging: artificial neural networks. Anne Johannet, an environmental engineering researcher at IMT Mines Alès, uses this approach.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

In hydrology, there are two especially important notions: high and low water levels. The first describes a period in which the flow of a watercourse is especially high, while the second refers to a significantly low flow. These variations in water level can have serious consequences. A high water level can, for example, lead to flooding (although this is not systematic), while a low water level can lead to restrictions on water abstraction, in particular for agriculture, and can harm aquatic ecosystems.

Based on past experience, it is possible to anticipate which watercourses tend to rise to a certain level in the event of heavy precipitation. This approach can yield satisfactory results but clearly lacks precision. This is why Flood Forecasting Services (SPC) also rely on one of two types of models. The first is called “reservoir modeling”: it treats a drainage basin like a reservoir, which overflows when the water content exceeds its filling capacity. But forecasts made with this type of model may contain major errors, since they do not usually take into account soil heterogeneity or variability in how the drainage basin is used.
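
A toy version of this “reservoir” idea, with invented parameters (operational SPC models are considerably more elaborate), might look like this:

```python
def reservoir_model(rainfall, capacity=100.0, drain_rate=0.1, storage=0.0):
    """Toy 'bucket' model: the basin stores rain and overflows above capacity."""
    flows = []
    for rain in rainfall:
        storage += rain                          # rain fills the reservoir
        overflow = max(0.0, storage - capacity)  # excess overflows at once
        storage -= overflow
        baseflow = drain_rate * storage          # slow release to the river
        storage -= baseflow
        flows.append(overflow + baseflow)        # simulated flow this step
    return flows

# Heavy rain falling on an almost-full basin produces a sharp flow peak:
print(reservoir_model([0, 5, 80, 60, 0, 0], storage=70.0))
```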

The other approach is based on a physical model. The behavior of the studied watercourse is simulated using differential equations and field measurements. This type of model is therefore meant to take all the data into account in order to provide reliable predictions. However, it reaches its limits when faced with high variability, as is often the case: how land reacts to precipitation may depend on human activity, type of agriculture, season, existing vegetation, etc. As a result, “it is very difficult to determine the initial state of a watercourse,” says Anne Johannet, an environmental engineering researcher at IMT Mines Alès. “It is the major unknown variable in hydrology, along with the unpredictability of rainfall.” The reality may therefore conflict with forecasts, as was the case with the exceptional rise of the Seine in 2016. Moreover, certain drainage basins are poorly served by physical models due to their complexity. The Cévennes is one such example.

Neural networks learn independently

Anne Johannet’s research focuses on another approach which offers a new method for forecasting water flow: artificial intelligence. “The benefit of neural networks is that they can learn a function from examples, even if we don’t know this function”, explains the researcher.

Neural networks learn in a similar way to children. They start out with little information and study a set of initial data, calculating outputs in a random way and inevitably making mistakes. Then, numerical analysis methods make it possible to gradually improve the model in order to reduce these errors. In concrete terms, in hydrology, the objective of a neural network is to forecast the flow of a watercourse or its water level based on rainfall. A dataset describing all past observations of the basin is therefore used to train the model. During learning, the neural network calculates a flow based on precipitation, and this result is compared to real measurements. The process is then repeated many times to correct its errors.

A pitfall to be avoided with this approach is “overlearning.” If a neural network is “overtrained,” it can eventually lose its ability to extrapolate and settle for knowing its examples “by heart.” For instance, if the neural network integrates the occurrence of a major rise in water level on 15 November 2002, overlearning could lead it to deduce that such an event will occur every year on 15 November. To avoid this phenomenon, the dataset used to train the network is divided into two subsets: one for training and one for validation. As errors are corrected on the training dataset, the network’s ability to generalize is verified on the validation dataset.
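
A minimal sketch of this training setup on synthetic data, assuming a small scikit-learn network rather than the researchers’ actual model, could look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic example: flow depends mostly on the most recent days of rainfall.
rain = rng.gamma(shape=2.0, scale=5.0, size=(500, 7))    # 7 days of rainfall
flow = (0.6 * rain[:, -1] + 0.3 * rain[:, -2]
        + 0.1 * rain[:, -3] + rng.normal(0, 1, 500))     # observed flow

# Split past observations: one subset to learn from, one to check
# that the network generalizes instead of learning "by heart".
X_train, X_val, y_train, y_val = train_test_split(
    rain, flow, test_size=0.3, random_state=0)

# early_stopping holds out a further slice of the training data internally
# and halts training when that score stops improving, guarding against
# overlearning.
model = MLPRegressor(hidden_layer_sizes=(16,), early_stopping=True,
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("validation R^2:", model.score(X_val, y_val))
```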

The main benefit of the neural network approach is that it requires much less input data. A physical model requires a large amount of data about the nature of the land, vegetation, slope, etc. A neural network, on the other hand, “only needs the rainfall and flow at the location we’re interested in, which facilitates its implementation,” says Anne Johannet. This leads to lower costs and provides quicker results. However, the success of such an approach relies heavily on rainfall predictions, which are used as input variables. And precipitation remains difficult to forecast.

Clear advantages but a controversial approach

Today, Anne Johannet’s models are already used by public services including the Artois-Picardie Flood Prevention Service (in the Hauts-de-France region). Based on rainfall prediction, agents establish scenarios and study the consequences using neural networks. Depending on the type of basin — which may react more or less quickly — they are also able to make forecasts several hours or even a day in advance for high water levels, and several weeks in advance for low water levels.

This data can therefore have a direct effect on local authorities and citizens. For example, predicting a significant low-water period could lead water supply managers to switch to an alternative water source, or could lead the authorities to prohibit water abstraction. Predicting a high-water period, on the other hand, could help anticipate potential flooding, based on the land structure.

Longer-term projections can also be established using data from the IPCC (Intergovernmental Panel on Climate Change). Trial forecasts of the flow of the Albarine river in the Ain department have been carried out, up to 2070, to assess the impact of global warming. The results indicate severe low-water levels in the future, which could affect land-use planning and farming activity.

However, despite these results, the artificial intelligence approach for predicting high and low water levels has been met with mistrust by many hydrologists, especially in France. They argue that these systems are incapable of generalizing due to climate change, and largely prefer physical or reservoir models. The IMT Mines Alès researcher rejects these accusations, underscoring the rigorous validation of neural networks. She suggests that the results from the different methods should be viewed alongside one another, evoking the words of statistician George Box: “All models are wrong, but some are useful.”

Article written for I’MTech by Bastien Contreras

Gérard Dray

IMT Mines Alès | Artificial Intelligence, Machine Learning

Gérard Dray is a Full Professor at IMT Mines Alès in the Laboratory of Computer Engineering and Production Engineering (LGI2P). He researches and develops information processing methods aimed at digitalizing knowledge in order to facilitate human action and make it more reliable and more efficient. These information processing procedures are mainly based on artificial intelligence and machine learning methods. Gérard Dray is responsible for scientific activities in the field of “Health, Aging, Quality of Life” through a cross-disciplinary axis at IMT Mines Alès. With the aim of developing this axis in a context close to clinical issues, he coordinates the work of LGI2P’s dedicated ICT and Health team, hosted at the EuroMov research center of the University of Montpellier.



Smart homes: A world of conflict and collaboration

The progress made in decentralized artificial intelligence means that we can now imagine what our future homes will be like. The services offered by a smart home to its users are likely to be modeled on appliances which communicate and cooperate with each other autonomously. Today, this approach is considered the best way to control the dynamic, data-rich household environment. Olivier Boissier and Gauthier Picard, researchers in AI at Mines Saint-Étienne, are currently working on the technology. In this interview for I’MTech, they explain the interest in the decentralized approach to AI as well as how it works, through concrete examples of how it is used in the home.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

Can we think of smart homes as a simple network of connected objects?

Gauthier Picard: A smart home is made up of an assortment of fairly different objects. This is very different from industrial sensor networks, in which devices are designed to have similar memory capacities and identical structures. In a house, we cannot put the same computing capacity in a light bulb as in an oven, for example. If the occupant expects a variety of operating scenarios, with the objects coordinating among themselves, we must be able to take the objects’ differences into account. A smart home is also a very dynamic environment. You must be able to add things such as an intelligent light bulb, or a Raspberry Pi-type nanocomputer to control the blinds, whenever you want, without degrading performance for the user.

So, how do you make a house ‘smart’ despite all this complexity?  

Olivier Boissier: We use what we call a multi-agent approach. This is central to our discipline of decentralized artificial intelligence. We use the term ‘decentralized’ instead of ‘distributed’ to really highlight that making a house ‘smart’ requires more than just distributing knowledge between the different devices: decision-making also needs to be decentralized. We use the term ‘agent’ to describe an object, a service that manages several objects, or a service that itself manages several services. Our aim is to make these agents organize themselves via rules that allow them to exchange information in the best way possible. But not all household objects will become agents because, as Gauthier said, some objects don’t have sufficient computing capacity and are unable to organize themselves. Therefore, one of the biggest questions we ask ourselves is which objects should remain simple objects that perceive or execute things, such as a sensor or a small LED, and which objects should become agents.

Can you show how this approach works with a concrete example of how it’s used in a smart home?

GP: If we again use light bulbs and light as an example, we can imagine a user asking for the light level in their smart home to fall by 40% if they’re in their living room after 9pm. The user doesn’t care which object decides or acts to carry out the request, what interests them is having less light. It’s up to the global system to optimize the decisions by deciding which light bulb to turn off or whether the TV also needs to be turned off as it emits light even when it’s not being used, or whether it can leave the blinds open because it’s still daylight outside.  All of these decisions need to be made in a collective manner, potentially with constraints set by the occupant who might want to lower the electricity bill, for example. A centralized entity will not manage all of these decisions, instead, each element will react depending on what the other elements do.  If it is summer, and therefore still light outside, does the house need more lights on? If it does, then the agents will first turn on the bulbs which consume the least energy and emit the most light.  If this is not enough, other agents will turn on other light bulbs.
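
As a toy illustration of such collective behavior (with invented devices and numbers, not the researchers’ actual system), the sketch below lets each lighting agent react locally to the measured light level, with the most efficient sources reacting first, and the ensemble settles without any central controller:

```python
# Toy decentralized lighting: each agent sees only the shared light level
# and reacts locally; no central controller ranks or commands the devices.
class LightAgent:
    def __init__(self, name, lumens, watts):
        self.name, self.lumens, self.watts = name, lumens, watts
        self.on = False

    def step(self, measured, target):
        """Local rule: switch on if light is lacking, off if superfluous."""
        if not self.on and measured < target:
            self.on = True
        elif self.on and measured - self.lumens >= target:
            self.on = False  # my contribution is no longer needed

# The most efficient light sources (most lumens per watt) react first.
agents = sorted([LightAgent("LED strip", 300, 4),
                 LightAgent("ceiling bulb", 800, 9),
                 LightAgent("halogen lamp", 1200, 40)],
                key=lambda a: a.watts / a.lumens)

daylight, target = 200, 900     # lumens from outside, desired level
for _ in range(3):              # iterate until the collective state settles
    for agent in agents:
        measured = daylight + sum(a.lumens for a in agents if a.on)
        agent.step(measured, target)

for a in agents:
    print(a.name, "on" if a.on else "off")
```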

You said that the decision was not centralized.  Why don’t you just have one decision-making device which manages all the objects?  

GP: The problem with a centralized solution is that everything depends on a single device. With this approach, it is very likely that all information will be stored in the cloud. If the network is faulty, or if there is too much activity, the network’s performance will be affected. The advantage of the multi-agent approach is that it keeps data close to where decisions are made. Since everything is done locally, decisions take less time and the network has better security. The system is therefore more resilient and can better meet users’ privacy requirements.

OB: But we are not saying that centralized solutions are necessarily worse. The multi-agent approach requires efforts to coordinate the objects and services.  It should be preferred in complex environments, where it is necessary to have data close to where the decision is being made.  If a centralized management algorithm works for a precise and simple action, then that’s fine. The multi-agent approach becomes interesting when there are large quantities of data which need to be processed quickly. This is the case when a smart home includes several users with multiple, sometimes conflicting, functions.

How can functions become conflicting?

OB: In the case of lighting, a conflicting situation would be if two users in the same room have different preferences. The same agents are asked to carry out two incompatible decision-making processes. This situation can be simplified to a conflict of resources.  Conflicts like this have a high chance of occurring because we are in a dynamic environment.  The agents make action plans to respond to the user’s demands but if another user enters in the room, the plan will be disrupted.  Therefore, conflicts can’t always be predicted in advance; they often only appear when the plan is being executed. In certain cases, simple rules mean that the problem can be resolved quickly. This happens when priority functions such as emergency assistance or the security of the building will take precedence over entertainment functions.  In other cases, ways to resolve conflicts between agents must be created.

GP: Negotiation is a good example of a technique that solves this problem. Because the conflict is a fight over a resource, each agent can bid for the functions it wishes to use. If it wins the bid, it accumulates a debt which prevents it from winning the next one. Over time, the agents regulate themselves. By adopting an economic approach between agents, we can also try to find a Nash equilibrium, in which each agent maximizes its outcome given what the other agents do.
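
A minimal sketch of such a bidding-with-debt mechanism, with invented numbers and not the researchers’ actual protocol, might look like this:

```python
# Toy version of the negotiation described above: winning a contested
# resource creates a 'debt' that weakens the winner's next bids, so
# control alternates instead of always going to the most demanding user.
class BiddingAgent:
    def __init__(self, name, preference):
        self.name = name
        self.preference = preference  # how much the agent values the resource
        self.debt = 0.0

    def bid(self):
        return self.preference - self.debt

def run_auctions(agents, rounds=6):
    for t in range(rounds):
        winner = max(agents, key=lambda a: a.bid())
        winner.debt += winner.preference  # winning costs future bidding power
        for a in agents:
            a.debt = max(0.0, a.debt - 3.0)  # debts are slowly forgiven
        print(f"round {t}: {winner.name} wins control of the lighting")

run_auctions([BiddingAgent("Alice", 10.0), BiddingAgent("Bob", 8.0)])
```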

How do you make all of these interactions possible between agents?  

OB: There are several ways that agents can self-organize.  It can be done through stigmergy, whereby the agents don’t communicate with each other; they simply act in response to what is happening around them. This can also be in response to information that is placed in their environment by other agents, which allows them to respond to the user’s request. Another method is introducing a global behavior policy for all the agents, such as privacy, and leaving the agents to interpret it in a collective manner.  In this case, the user simply gives their preference on what they want to remain confidential and the agents communicate the information accordingly.  We try to combine these approaches by adding more coordination protocols, such as the conflict management rules which were mentioned above.

GP: All the agents have access to a definition of their environment. They know the rules and the roles that they can play, and they adapt to this environment.  It’s a bit like when you learn the Highway Code so that you know how to act when you approach a crossroads. You know what can happen and what other motorists are supposed to do.  If you find yourself in a situation which does not follow the usual rules, for example because there is a traffic jam, or an accident has happened right in the middle of the crossroad, you adapt the rules.  Agents should be able to do the same thing. They should react and change the system so that they can organize themselves and respond to the user’s demands.

In regard to this general multi-agent approach for smart houses, what can we already do and what still remains a research question?

OB: Currently, there are a lot of studies on subjects that provide effective solutions in theory for the problems that we have raised. We know how to build protocols which satisfy the organization functions, we know how to configure behavioral policies amongst agents. However, there is still a lot of work to be done to move past theory and into practice. When we have a concrete case of a smart house with large amounts of information arriving at any time, the system must be able to process that data. From a practical point of view, we also need to answer fundamental questions about what a smart home should be for the user. Should they have control over absolutely everything or can we leave the decisions to agents without user control?

Is it realistic to consider the control being taken away from the occupant?  

GP: We have to understand that we aren’t dealing here with neural networks which make decisions like black boxes.  In the case of the multi-agent approach, there is a history of the decisions of the agent, with the plan that it puts in place to reach that decision, and the reasons for creating the plan.  So even if the decision is left to the agent, that doesn’t mean that the user won’t know how it came about. There is still a control mechanism, and the user can change their preferences if they need to. It’s not as if the agent decides without the user having any opportunity to know what it is doing.

OB: It’s an AI approach which is different to what people imagine artificial intelligence being. It is not yet as well known as the learning approach. Decentralized AI is still difficult for the general public to understand but there are now more and more uses for the technology, which means that it’s becoming increasingly necessary. 20 years ago, systems often had a centralized solution.  Today, notably with the development of the IoT (Internet of Things), decentralization is an obligation and decentralized AI is recognized as being the most logical solution for uses such as smart homes or Smart Cities.

large retailers

AI lends a hand to help large retailers win back their customers

Large retailers are in search of tools to help them improve the buying experience in their stores and compete more effectively with online shopping. From intelligent guidance for customers in overcrowded supermarkets to optimized selection of the products on the shelves, researchers Marin Lujak and Arnaud Doniec from IMT Lille Douai and Jacky Montmain from IMT Mines Alès are using artificial intelligence to offer a customized experience to clients.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

It’s late Saturday morning and Mrs. Little enters her usual supermarket, eyes fixed on her watch. In front of her, the aisles are overrun with shopping carts overflowing with all different types of products.  She plunges into the crowd, weaving her way between the shoppers and dodging the promotional displays which block the middle of the aisles. Somehow, she manages to pick up two packs of water before fighting her way back to the other end of the store to get some dog food. As her cart becomes heavier, it becomes more difficult to maneuver. She leaves it at the end of the aisle and then wanders around the store, shopping list in hand, in search of cream, but without success.

Mrs. Little’s story is that of every shopper who endures the weekly ordeal of the supermarket. Today, consumers want to save time when they shop. The large retail industry is increasingly in competition with online businesses, and stores are focusing all their efforts on keeping their customers. What if one of the solutions was an intelligent consumer guidance system? “We could create an app which automatically calculates the best possible route around the store, based on a customer’s shopping list, to reduce the amount of time that someone has to spend shopping,” suggest Marin Lujak and Arnaud Doniec, experts in artificial intelligence at IMT Lille Douai.

Optimizing the route in a crowded supermarket

To provide guidance to customers in real time, the two researchers have devised a multi-agent system. This approach consists of developing distributed artificial intelligence made up of many small intelligent units, called agents, that are distributed throughout the store and interact with each other. A collective intelligence then emerges from the sum of all these interactions.

In this architecture, fixed agents, in the form of proximity sensors or cameras linked to the store network, evaluate the density of customers per square meter in the aisles. Other agents installed on the customers’ smartphones use the client’s shopping list, the location of the items and the current congestion levels to calculate the itinerary for the shortest overall journey around the store. If an aisle is congested, the information is sent to the app, which guides the customer towards another part of the store and brings them back to that aisle later. At the moment, this model is purely theoretical and must be developed further before it can be applied to real cases. For example, product-related constraints could be added: starting with the heaviest or bulkiest products and finishing with the frozen food.
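
As a rough illustration of how such an itinerary could be computed, the Python sketch below models the store as a small weighted graph, lets the congestion levels reported by the fixed agents inflate the cost of entering busy aisles, and orders the shopping list with a greedy tour over Dijkstra shortest paths. The layout, congestion values and heuristic are illustrative assumptions, not the researchers’ actual model.

```python
# Toy sketch of congestion-aware routing in a store: fixed agents report
# a congestion level per aisle, and the customer's agent plans a cheap
# tour over the shopping list. Everything here is illustrative.
import heapq

# Store graph: base travel time in seconds between adjacent locations.
edges = {
    ("entrance", "aisle1"): 20, ("aisle1", "aisle2"): 15,
    ("aisle2", "aisle3"): 15, ("aisle1", "aisle3"): 40,
    ("aisle3", "checkout"): 25,
}
graph = {}
for (a, b), t in edges.items():
    graph.setdefault(a, []).append((b, t))
    graph.setdefault(b, []).append((a, t))

# Congestion multipliers reported by the fixed agents (1.0 = empty aisle).
congestion = {"aisle1": 1.0, "aisle2": 3.0, "aisle3": 1.2}

def travel_time(a, b, base):
    # Entering a crowded aisle costs proportionally more time.
    return base * congestion.get(b, 1.0)

def dijkstra(start):
    """Cheapest congestion-adjusted time from start to every location."""
    dist, queue = {start: 0.0}, [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for nxt, base in graph[node]:
            nd = d + travel_time(node, nxt, base)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return dist

# Items on the shopping list, mapped to their aisle.
items = {"water": "aisle1", "dog food": "aisle3", "cream": "aisle2"}

# Greedy tour: always head for the cheapest-to-reach remaining item.
position, remaining, route = "entrance", dict(items), ["entrance"]
while remaining:
    dist = dijkstra(position)
    item = min(remaining, key=lambda i: dist[remaining[i]])
    position = remaining.pop(item)
    route.append(position)
route.append("checkout")
print(" -> ".join(route))
```

Because the congestion multipliers change in real time, recomputing the tour whenever the fixed agents report new densities is what would let the app steer the customer away from a crowded aisle and bring them back later.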

Large retailers are showing a growing interest in their clients’ buying experience. However, even though this solution could improve the buying experience of a customer in a rush, it would also mean that the client spends less time in the store. So why would a store manager want to invest in this tool? For Marin Lujak and Arnaud Doniec, it’s a question of balance. “Each company can take ownership of the tool and add features such as alerts for promotional offers. We can also imagine companies making the most of the app by guiding the client towards certain aisles according to their centers of interest.”

Proposing the right product to the right client

Spending less time in a store is good, but it’s even better if the customer finds all the products that they need. Another way to keep consumers is to get to know their needs and adapt to them. Since 2010, Jacky Montmain, a researcher at IMT Mines Alès, has been collaborating with the company TRF Retail to develop supervision and diagnostic tools for product performance. “The tool is interesting for large retailers as it allows them to track the performance of a product, or a family of products, across an entire network of stores. It allows them to make comparisons and understand where a product sells or not, and why,” explains Montmain. The store manager is then free to adjust the range of products offered on the shelves according to this data.

When shops look at their clients, they look first and foremost at the revenue they bring in. But how can you identify and distinguish what people are buying in an intelligent manner when a supermarket has between 100,000 and 150,000 products on sale? Jacky Montmain and PhD student Jocelyn Poncelet answer this question by establishing a product classification system: a tree structure made up of five levels, running from the category down through the sub-category, department and family to the individual product. For example, a fruit yogurt belongs to the yogurt family, in the fresh products department and the dairy products sub-category, which itself falls within the food category. “By providing this intelligent breakdown, we can determine the consumption habits of customers and then compare them with each other. If we settle for comparing customers’ till receipts, then a fruit yogurt is as different from a natural yogurt as it is from a laundry detergent,” explains Jocelyn Poncelet.
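
To see what the taxonomy buys, here is a hedged Python sketch of a tree-based similarity: two products are compared by how much of their five-level path they share, so two yogurts come out close while a yogurt and a detergent share nothing. The paths follow the article’s example; the scoring function is a simple illustration, not the method deployed by TRF Retail.

```python
# Products described by their taxonomy path:
# category / sub-category / department / family / product.
products = {
    "fruit yogurt":   ["food", "dairy", "fresh", "yogurt", "fruit yogurt"],
    "natural yogurt": ["food", "dairy", "fresh", "yogurt", "natural yogurt"],
    "detergent":      ["non-food", "household", "cleaning", "laundry", "detergent"],
}

def similarity(p1, p2):
    """Fraction of the taxonomy path the two products share (0..1)."""
    path1, path2 = products[p1], products[p2]
    shared = 0
    for a, b in zip(path1, path2):
        if a != b:
            break
        shared += 1
    return shared / max(len(path1), len(path2))

print(similarity("fruit yogurt", "natural yogurt"))  # 0.8 -> near substitutes
print(similarity("fruit yogurt", "detergent"))       # 0.0 -> unrelated
```

Till receipts can then be compared through these pairwise product similarities rather than exact matches, which is what makes two yogurt buyers look alike even when they never buy the identical item.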

Customer segmentation has proven its worth in a field experiment conducted in two stores (a small shop and a DIY specialist). In time, the product classification, and therefore the algorithm for identifying buying habits, should be integrated into the product performance evaluation tool mentioned above. The aim is better stock management for large retailers and optimized sourcing from suppliers. Finally, this classification should help strike a balance between the range of products each store offers and the real needs of the customers who shop there.

Article written for I’MTech by Anaïs Culot.

 

robot

Human-robot collaboration: Industrial utopia or tomorrow’s reality?

In the factories of the future, robots will not replace humans but assist them. Researchers Sotiris Manitsaris from Mines ParisTech and Patrick Hénaff from Mines Nancy are currently working on a control system design based on artificial intelligence, which can be used by all types of robot. The aim of this type of AI is to identify human actions and adapt to the pace of machine operators in an industrial context. Above all, knowledge about humans and their movement is the key to successful collaboration with machines.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us.”

A robotic arm scrubs the bottom of a tank in perfect synchronization with the human hand next to it. They move at the same pace, a rhythm dictated by the way the operator moves. Sometimes fast, sometimes slightly slower, this harmony of movement is achieved through the artificial intelligence installed in the anthropomorphic robot. In the neighboring production area, a self-driving vehicle dances around the factory. It dodges every obstacle in its way until it delivers the parts it is transporting to a manager on the production line. With impeccable timing, the human operator retrieves the parts and then, once they have finished assembling them, leaves them on the transport tray of the small vehicle, which sets off immediately. During its journey, the machine passes several production areas where humans and robots carry out their jobs “hand-in-hand”.

Even though anthropomorphic robots like robotic arms or small self-driving vehicles are already being used in some factories, they are not yet capable of collaborating with humans in this way. Currently, robot manufacturers pre-integrate sensors and algorithms into the device, but future interactions with humans are not considered during development. “At the moment, there are a lot of situations where human operators and robots work side-by-side in factories but interact very little. This is because robots don’t understand humans when they’re in contact with them,” explains Sotiris Manitsaris, a researcher specializing in collaborative robotics at Mines ParisTech.

Human-robot collaboration, or cobotization, is an emerging field in robotics which redefines the function of robots as working “with” and not “instead of” humans. By offsetting the weaknesses of each partner with the strengths of the other, this approach allows factory productivity to increase whilst still retaining jobs. Human workers bring flexibility, dexterity and decision-making, whilst robots bring efficiency, speed and precision. But to collaborate properly, robots have to be flexible, interactive and, above all, intelligent. “Robotics is the tangible aspect of artificial intelligence. It allows AI to act on the outside world with a constant perception-action loop. Without this, the robot would not be able to operate,” says Patrick Hénaff, specialist in bio-inspired artificial intelligence at Mines Nancy. From the automotive industry to fashion and luxury goods, all sectors are interested in integrating robotic collaboration.

Towards a Successful Partnership Focused on Human Action

Beyond direct interaction between humans and machines, the entire production cycle could become more flexible, depending more on the operator’s pace and way of working. “The robot has to respond to the needs of humans but also anticipate their behavior. This allows it to adapt dynamically,” explains Sotiris Manitsaris. For example, on an assembly line in the automotive industry, each task is carried out in a specific time. If the robot anticipates the operator’s movements, it can also adapt to their speed. This issue has been the focus of work with PSA Peugeot Citroën as part of the chair in Robotics and Virtual Reality at Mines ParisTech. So far, researchers have put in place a first promising human-robot collaboration: at a workbench, a robot brought parts at a rate matched to the operator’s execution speed, and the operator assembled and screwed them together before handing them back to the robot.

Read on I’MTech: The future of production systems, between customization and sustainable development

Another aim of cobotics is to relieve human operators of difficult tasks. As part of a Horizon 2020 project launched at the end of 2018, Sotiris Manitsaris has tackled the development of ergonomic gesture recognition technologies and the communication of this information to robots. To do this, the gestures are first recorded with the help of communicating objects (a smart watch, a smartphone, etc.) worn by the operator, and then learned by artificial intelligence. These new models of collaboration, centered on humans and their actions, are designed so that they can be implemented on any robotic model. Once a movement is recognized, the question becomes what information to communicate to the robot so that it can adapt its behavior without compromising its own performance or that of its human collaborator.
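
As a purely illustrative example of the recognition step, the Python sketch below reduces short windows of made-up wrist-accelerometer readings to two simple features and assigns each new window to the nearest known gesture. Production systems rely on far richer models; the data, gesture names and nearest-centroid rule here are all assumptions.

```python
# Toy gesture recognition: featurize short sensor windows, then match
# each new window to the nearest gesture centroid. Signal values invented.
import math

def features(window):
    """Mean and spread of one accelerometer axis over a short window."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return (mean, math.sqrt(var))

# Labeled training windows for two hypothetical gestures.
training = {
    "screwing": [[0.1, 0.9, 0.2, 0.8, 0.1], [0.2, 1.0, 0.1, 0.9, 0.2]],
    "reaching": [[0.1, 0.2, 0.3, 0.4, 0.5], [0.0, 0.1, 0.3, 0.5, 0.6]],
}
centroids = {}
for gesture, windows in training.items():
    feats = [features(w) for w in windows]
    centroids[gesture] = tuple(
        sum(f[i] for f in feats) / len(feats) for i in (0, 1)
    )

def recognize(window):
    """Assign a new window to the closest gesture centroid."""
    f = features(window)
    return min(centroids, key=lambda g: math.dist(f, centroids[g]))

print(recognize([0.1, 0.8, 0.2, 0.9, 0.1]))  # -> "screwing"
```

The recognized label is exactly the kind of compact signal that could then be passed to the robot so it adapts its behavior to what the operator is doing.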

Rhythmic Collaboration

Understanding movements and implementing them in robots is also central to the work conducted by Patrick Hénaff. His latest work uses an approach inspired by neurobiology, based on knowledge of animal motor systems. “We can consider artificial intelligence as being made up of a high-level structure, the brain, and of lower-level intelligence which can be dedicated to movement control without needing to receive higher-level information permanently,” Hénaff explains. More particularly, this research deals with rhythmic gestures, in other words automatic movements which are not ordered by our brain but generated by neural circuits in our spinal cord, known as central pattern generators. Walking, or wiping a surface with a sponge, are examples.

Once the action is initiated by the brain, a rhythmic movement occurs naturally, at a pace dictated by our morphology. However, it has been demonstrated that for some of these gestures, the human body is able to synchronize naturally with external signals, whether visual or auditory, which are equally rhythmic. This happens, for example, when two people walk together. “In our algorithms, we try to determine which external signals we need to integrate into our equations, so that a machine synchronizes with either humans or its environment when it carries out a rhythmic gesture,” describes Patrick Hénaff.
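
A minimal sketch of that synchronization idea, assuming a single Kuramoto-style phase oscillator as a stand-in for a central pattern generator: the robot’s phase is continuously nudged by the rhythm it observes, and after a transient it locks onto the human’s faster pace with a constant phase offset. The frequencies and coupling gain are illustrative.

```python
# A phase oscillator entraining to an external rhythmic signal.
import math

dt = 0.01                         # integration step (s)
omega = 2.0 * math.pi * 1.0       # robot's natural rhythm: 1.0 Hz
omega_ext = 2.0 * math.pi * 1.3   # observed human rhythm: 1.3 Hz
K = 4.0                           # coupling strength toward the signal

phase, phase_ext = 0.0, 0.0
for step in range(2000):          # simulate 20 s
    # The robot senses the human's rhythm and nudges its own phase.
    phase += dt * (omega + K * math.sin(phase_ext - phase))
    phase_ext += dt * omega_ext
    if step % 500 == 0:
        drift = math.sin(phase_ext - phase)
        print(f"t={step * dt:5.1f}s  phase error (sin) = {drift:+.3f}")
# After a transient the error settles to a constant: the robot now moves
# at the human's frequency, with a fixed phase lag.
```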

From the Laboratory to the Factory: Only One Step

In the laboratory, researchers have demonstrated that robots can carry out rhythmic tasks without physical contact. With the help of a camera, a robot observes the hand gesture of a person waving hello, then reproduces it at the same pace and synchronizes itself with the gesture. Experiments have also been carried out on an interaction with contact: a handshake. The robot learns how to hold a human hand and synchronizes its movement with the person opposite it.

In an industrial setting, an operator carries out numerous rhythmic gestures, such as sawing a pipe, scrubbing the bottom of a tank or polishing a surface. To carry out such tasks in cooperation with an operator, the robot has to be able to reproduce the operator’s movements. For example, if a robot saws a pipe with a human, the machine must adapt its rhythm so that it does not impose a pace that could cause musculoskeletal disorders. “We have just launched a partnership with a factory in order to carry out a proof of concept. This will demonstrate that new-generation robots can carry out, in a professional environment, a rhythmic task which doesn’t need a precise trajectory but in which the final result is correct,” describes Patrick Hénaff. Researchers now want to tackle dangerous environments and the most arduous tasks for operators, not with the aim of replacing them, but of helping them work “hand-in-hand”.

Article written for I’MTech by Anaïs Culot.