
Virtual reality improving the comfort of people with visual impairments

People suffering from glaucoma or retinitis pigmentosa develop increased sensitivity to light and gradually lose their peripheral vision. These two symptoms cause discomfort in everyday life and limit the social activity of the people affected. The AUREVI research project involving IMT Mines Alès aims to improve the quality of life of visually-impaired people with the help of a virtual reality headset.

 

Retinitis pigmentosa and glaucoma are degenerative diseases of the eye. While they have different causes, they result in similar symptoms: increased sensitivity to changes in light and gradual loss of peripheral vision. The AUREVI research project was launched in 2013 in order to help overcome these deficiencies. Over 6 years, the project has brought together researchers from IMT Mines Alès and Institut ARAMAV in Nîmes, which specializes in rehabilitation and physiotherapy for people with visual impairments. Together, the two centers are developing a virtual reality-based solution to improve the daily lives of patients with retinitis pigmentosa or glaucoma.

“For these patients, any light source can cause discomfort,” explains Isabelle Marc, a researcher in image processing at IMT Mines Alès working on the AUREVI project. A computer screen, for example, can dazzle them. When walking around outdoors, the changes in light between shady and bright areas, or even breaks in the clouds, can be violently dazzling. “For visually impaired people, it takes much longer for the eye to adjust to different levels of light than it does for healthy people,” the researcher adds. “While it usually takes a few seconds before we can open our eyes after being dazzled or to be able to see better in a shady area, these patients need several tens of seconds, sometimes several minutes.”

Controlling light

With the help of a virtual reality headset, the AUREVI team offers greater control over light levels for visually impaired people with retinitis or glaucoma. Cameras capture the scene that the eyes would normally see and display it on the headset’s screens. When there is a sudden change in the light, image processing algorithms alter the brightness of the image in order to keep it constant in the patient’s eyes. For the researchers, the main difficulty with this tool is the delay. “We would like it to be effective in real time. We are aiming for the shortest delay between what appears on the screen of the headset and what the user really sees,” says Isabelle Marc. The team is therefore using logarithmic cameras, which record HDR (High Dynamic Range) images directly, thus reducing the processing time.
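As an illustration only, here is a minimal sketch of the kind of brightness-stabilization step described above. The `TARGET_LUMINANCE` constant and the simple global tone-mapping curve are assumptions for the example, not the project’s actual algorithm:

```python
import numpy as np

TARGET_LUMINANCE = 0.18  # hypothetical comfort level, set per patient during calibration

def stabilize_brightness(hdr_frame: np.ndarray) -> np.ndarray:
    """Rescale an HDR frame so that its average luminance stays constant.

    hdr_frame: H x W array of linear luminance values from an HDR
    (logarithmic) camera. Returns an 8-bit image for the headset display.
    """
    mean_lum = hdr_frame.mean() + 1e-8                  # avoid division by zero
    scaled = hdr_frame * (TARGET_LUMINANCE / mean_lum)  # hold perceived brightness steady
    tone_mapped = scaled / (1.0 + scaled)               # simple global tone-mapping curve
    return (np.clip(tone_mapped, 0.0, 1.0) * 255).astype(np.uint8)
```

In a real headset, a step like this would run on every frame, with the target level taken from the patient’s calibration test described below.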

The headset is designed to replace the dark glasses usually worn by people with this type of pathology. “It’s a pair of adaptive dark sunglasses. The shade varies pixel by pixel” Isabelle Marc explains. An advantage of this tool is that it can be calibrated to suit each patient. Depending on the stage of the retinitis or glaucoma, the level of sensitivity will be different. This can be accounted for in the way the images are processed. To do so, scientists have developed a specific test for evaluating the degree of light a person can bear. “This test could be used to configure the tool and adapt it optimally for each user” says the researcher.

The first clinical trials of the headset began on fifteen people in 2016. The initial goal was to measure the light levels considered comfortable by each person, and to gather feedback on the comfort of the tool, before evaluating the service provided to people with visual impairments. For this, the researchers vary the brightness of a screen for patients wearing a headset, who then give their feedback. Isabelle Marc reports that “the initial feedback from patients shows that they prefer the headset over other tools for controlling light levels”. However, the testers also commented on the bulk of the tool. “For now, we are working with the large headsets available on the market, which are not designed to be worn when you are walking around,” the researcher concedes. “We are currently looking for industrial partners who could help us make the shift to a pre-series prototype more suitable for walking around with.”

Showing what patients can’t see

Being able to control light levels is a major improvement in terms of visual comfort for patients, but the researchers want to take things even further. The AUREVI project aims to compensate for another symptom caused by glaucoma and retinitis: loss of stereo vision. Patients gradually lose degrees of their visual field, down to 2 or 3 degrees at around 60 years old, or even full blindness. Before this last stage comes an important step in the progression of the handicap, as Isabelle Marc describes: “Once the vision goes below 20 degrees of the visual field, the images in the eye no longer cross over, and the brain cannot reconstruct the 3D information.”


Using image processing techniques, the AUREVI project hopes to give people with visual impairments indications about nearby obstacles.

 

Without stereoscopic vision, the patient can no longer perceive depth of field. One of the future steps of the project will be to incorporate a feature into the headset to compensate for this deficiency in three-dimensional vision. The researchers are working on methods for communicating information on depth. They are currently looking at the idea of displaying color codes. A close object would be colored red, for example, and a far object blue. As well as improving comfort, this feature would also provide greater safety. Patients suffering from an advanced stage of glaucoma or retinitis do not see objects above their head which could hurt them, nor do they see those at their feet which are a tripping hazard.
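To make the idea concrete, here is a hedged sketch of such a color code, mapping distances from a depth sensor to a red-to-blue overlay. The `near`/`far` clipping distances and the linear mapping are illustrative assumptions, not the project’s specification:

```python
import numpy as np

def depth_to_color(depth_map: np.ndarray, near: float = 0.5, far: float = 5.0) -> np.ndarray:
    """Map distances (in meters) to an RGB overlay: near -> red, far -> blue.

    depth_map: H x W array from a depth sensor; near/far are assumed
    clipping distances for the color scale.
    """
    t = np.clip((depth_map - near) / (far - near), 0.0, 1.0)  # 0 = nearest, 1 = farthest
    overlay = np.zeros(depth_map.shape + (3,), dtype=np.uint8)
    overlay[..., 0] = ((1.0 - t) * 255).astype(np.uint8)      # red channel flags close obstacles
    overlay[..., 2] = (t * 255).astype(np.uint8)              # blue channel marks distant ones
    return overlay
```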

Losing information about their surroundings gives people with visual impairments the feeling of being in danger, which increases as the symptoms get worse. Combined with increasing discomfort with changes in light levels, this fear can often lead to social exclusion. “Patients tend to go out less, especially outdoors,” notes Isabelle Marc. “On a professional level, their refusal to participate in activities outside of work with their colleagues is often misunderstood. They may still have good sight for reading, for example, and so people with normal sight may have a hard time understanding their handicap. Therefore, their social circle gradually shrinks.” The headset developed by the AUREVI project is an opportunity to improve social integration for people with visual impairments. For this reason, it receives financial support from several companies as part of their disabilities and diversity missions, in particular Sopra Steria, Orano, Crédit Agricole and Thalès. The researchers rely on this disability-focused support to develop their project.


Qualcomm, EURECOM and IMT joining forces to prepare the 5G of the future

5G is moving into the second phase of its development, which will bring a whole host of new technological challenges and innovations. Research and industry stakeholders are working hard to address the challenges posed by the next generation of mobile communication. In this context, Qualcomm, EURECOM and IMT recently signed a partnership agreement also including France Brevets. What is the goal of the partnership? To better prepare 5G standards and release technologies from laboratories as quickly as possible. Raymond Knopp, a researcher in communication systems at EURECOM, presents the content and challenges of this collaboration.

 

What do you gain from the partnership with Qualcomm and France Brevets?

Raymond Knopp: As researchers, we work together on 5G technologies. In particular, we are interested in those which are closely examined by 3GPP, the international standards organization for telecommunication technologies. In order to apply our research outside our laboratories, many of our projects are carried out in collaboration with industrial partners. This makes our work more relevant to the real-world problems facing the technology. Qualcomm is one of these industrial partners and is one of the most important companies in the generation of intellectual property in 4G and 5G systems. In my view, it is also one of the most innovative in the field. The partnership with Qualcomm gives us a more direct impact on technology development. With additional support from France Brevets, we can play a more significant role in defining the standards for 5G. We have a lot to learn about generating intellectual property, and these partners provide us with this knowledge.

What technologies are involved in the partnership?

RK: 5G is currently moving into its second phase. The first phase was aimed at introducing new network architecture aspects and new frequencies, increasing the available frequency bands by about 5 or 6 times. This phase is now operational, so the innovation effort is shifting elsewhere. The technologies we are working on now are mainly for the second phase, which is oriented more towards private networks, applications involving machines and vehicles, new network control systems, and so on. Priority will be given to network slicing and software-defined networking (SDN) technologies, for example. This is also the phase in which low-latency, highly reliable communication will be developed. This is the type of technology we are working on under this partnership.

Are you already thinking of the implementation of the technologies developed in this second phase?

RK: For now, our work on implementation is very much aimed at the first-phase technologies. We are involved in the H2020 projects 5Genesis and 5G-Eve, conducting tests on 5G, both for mobile terminals and on the network side. These trials involve our platform OpenAirInterface. For now, the implementation of second-phase technologies is not a priority. Nevertheless, intellectual property and any standards generated in the partnership with Qualcomm could potentially undergo implementation tests on our platform. However, it will be some time before we reach that stage.

What does a partnership with an industrial group like this represent for an academic researcher like yourself?

RK: It is an opportunity to close the loop between research, prototyping, standards and industrialization, and to see our work applied directly to the 5G technologies we will be using tomorrow. In the academic world in general, we tend to be uni-directional. We write publications, and some of them contain ideas that could be included in standards, but this isn’t done and they are left accessible to everyone. Of course, companies go on to use them without our involvement, which is a pity. By setting up partnerships like this one with Qualcomm, we learn to appreciate the value of our technologies and to develop them together. I hope it will encourage more researchers to do the same. The field of academic research in France needs to be aware of the importance of closely following the standards and industrialization process!

 


Another type of platform is possible: a cooperative approach

Article written in partnership with The Conversation France
By Mélissa Boudes (Institut Mines-Télécom Business School), Guillaume Compain (Université Paris Dauphine – PSL), Müge Ozman (Institut Mines-Télécom Business School)


So-called collaborative platforms have been very popular since their appearance in the late 2000s. They are now heavily criticized, driving some of their users to take collective action. There is growing concern surrounding the use of personal data, but also the ethics of algorithms. Beyond their technological functioning, the broader socio-economic model of these platforms is hotly debated. They are designed to generate value for their users by organizing peer-to-peer transactions, yet some of the more dominant platforms charge high fees for their role as an intermediary. The platforms are also accused of dodging labor laws through their high use of independent workers, of practicing tax optimization, and of contributing to the growing commodification of our everyday lives.

From collaboration to cooperation

Though it is easy to criticize, creating alternatives is far more complicated. However, some initiatives are emerging. The international movement towards more cooperative platforms, launched in 2014 by Trebor Scholz at the New School in New York, promotes the creation of more ethical, fairer platforms. The idea is simple: why would platform users delegate intermediation to third-party companies which gain from the economic value of their exchanges when they could manage the platforms themselves?

The solution would be to adopt a cooperative model: in other words, to create platforms that are owned by their users and apply a democratic operating model, in which each co-owner has a voice, independent of their contribution of capital. In addition, a proportion of the profit must be reinvested in the project, and shares cannot be sold for capital gain, thus avoiding financial speculation.

Many experiments are underway around the world. For instance, Fairmondo, a German marketplace for fair trade products, allows users a share in the cooperative. Though not exhaustive, the list drawn up by the Platform Cooperativism Consortium gives an overview of the scope of the movement.

While the creators of cooperative platforms aim to build alternatives to a platform economy that is concentrated, or even oligopolistic in some sectors, they come up against many challenges, particularly in terms of governance, economic models and technological infrastructure.

Many challenges

Based on our work on action research in the French network of cooperative platforms, Plateformes en communs, and an analysis of various foreign cases, we have identified a number of characteristics and limitations of alternative platforms.

Fairmondo, a German marketplace for fair trade products. Screenshot.

 

While they share a common opposition to major commercial platforms, there is no typical model for cooperative platforms, rather a multitude of experiments which are still in their early stages, with very different structures and modes of operation. Some were a natural progression from the movement against uberization, like Coopcycle, while others were created by digital entrepreneurs searching for meaning, or by modernized social and solidarity economy organizations (ESS).

There are many challenges for these cooperative platforms, which have high social and economic ambitions and no pre-defined future. Here we will focus on three major challenges: finding long-lasting economic and financial models, uniting communities, and mobilizing supporters and partners.

Making economic models durable

In a highly competitive context, there is no margin for error for alternative platforms. To attract users, they have to offer high-quality services, including an exhaustive offering, efficient contact, simple use, and attractive aesthetics. However, it is difficult for cooperative platforms to attract investors, as being cooperatives or associations, they are generally not particularly lucrative. In addition, some opt to open up their assets, allowing open access to their computer code, for instance.

But while the creators of alternative digital platforms are entrepreneurs, their economic models remain more of an iteration than a business plan. Many cooperative platforms, still in the developmental stages, rely primarily on voluntary work (made possible by external income: second jobs, personal savings, unemployment benefits, social welfare payments) which may run out if the platform does not manage to create salaries and/or attract new contributors.

Creating a community

Creating a committed community to support the platform is essential, both for its daily operations and its development, especially given that the platform economy relies on network effects: the more people or organizations a platform brings together, the more new ones it will attract, as it offers greater opportunities to its users. It is therefore difficult for alternative platforms to penetrate sectors where there are already dominant actors.

Cooperative platforms try to differentiate themselves by creating communities which have input into the way the platform is run. Some, like Open Food France, specializing in local food distribution networks, have gone as far as broadening their community of cooperators to include public and private partners, and end consumers. This gives them a way to express their societal aspirations through their economic choices.

The founders of Oiseaux de passage, a cooperative platform offering local tourism services, also opted for a broader view of membership. They chose the legal status of Société coopérative d’intérêt collectif (a collective-interest cooperative), enabling several categories of stakeholders (tourism professionals, inhabitants, tourists) to hold shares in a collective company.

These cooperative platforms thus adopt an ecosystem-based approach, including all stakeholders that are naturally drawn to them. However, for the moment, user commitment remains low and project leaders are often overworked.

Stopping the movement being hijacked

Cooperative platforms are still young and struggle to gain the support they so desperately need. Financially speaking, their still-fragile models do little to attract public organizations and ESS actors, which prefer to work with more stable, profitable commercial platforms. The other obstacle is political in nature. In the fight against uberization, cooperative platforms present themselves as alternatives, whereas for the time being, public authorities seem to favor social dialog with the dominant platforms.

Cooperative platforms are almost left to their own devices, compensating for the lack of support by trying to join forces through peer networks, such as the Platform Cooperativism Consortium on an international scale, or Plateformes en Communs in France. By uniting, cooperative platforms have managed to attract media attention, but also attention from one of their most symbolic “enemies”. In May 2018, the Platform Cooperativism Consortium announced that it had received a $1 million grant from… the Google Foundation. A grant aimed essentially at supporting the creation of cooperative platforms in developing countries.

Naturally, the announcement created quite a stir in the movement, some people condemning a symbolically unacceptable contradiction, others expressing concern that the model might be appropriated by Google. In any case, this event highlights the lack of support for the movement, pushed into signing agreements which go against its very nature.

It therefore seems essential, both for the survival of cooperative platforms and for the general existence of alternatives to the platforms currently crushing the market, that public institutions and ESS structures actively support developing projects, for example through financing measures (especially venture capital), specialized support structures, commercial partnerships, equity participation, or even the joint construction of platforms based on local needs. Without political input and innovation in practices, domination by global platforms, with no sharing of value, seems inevitable.


Mélissa Boudes, Associate Professor of Management, Institut Mines-Télécom Business School; Guillaume Compain, Doctoral Student in Sociology, Université Paris Dauphine – PSL; and Müge Ozman, Professor of Management, Institut Mines-Télécom Business School.

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


The unintelligence of artificial intelligence

Despite the significant advances made by artificial intelligence, it still struggles to copy human intelligence. Artificial intelligence remains focused on performing tasks, without understanding the meaning of its actions, and its limitations are therefore evident in the event of a change of context or when the time comes to scale up. Jean-Louis Dessalles outlines these problems in his latest book entitled Des intelligences très artificielles (Very Artificial Intelligence). The Télécom Paris researcher also suggests avenues for investigation into creating truly intelligent artificial intelligence. He presents some of his ideas in the following interview with I’MTech.

 

Can artificial intelligence (AI) understand what it is doing?

Jean-Louis Dessalles: It has happened, yes. It was the case with the SHRDLU program, for example, an invention by Terry Winograd during his thesis at MIT, in 1970. The program simulated a robot that could stack blocks and speak about what it was doing at the same time. It was incredible, because it was able to justify its actions. After making a stack, the researchers could ask it why it had moved the green block, which they had not asked it to move. SHRDLU would reply that it was to make space in order to move the blocks around more easily. This was almost 50 years ago and has remained one of the rare isolated cases of programs capable of understanding their own actions. These days, the majority of AI programs cannot explain what they are doing.

Why is this an isolated case?

JLD: SHRDLU was very good at explaining how it stacked blocks in a virtual world of cubes and pyramids. When the researchers wanted to scale the program up to a more complex world, it was considerably less effective. This type of AI became something which was able to carry out a given task but was unable to understand it. Recently, IBM released Project Debater, an AI program that can debate in speech competitions. It is very impressive, but if we analyze what the program is doing, we realize it understands very little. The program searches the Internet, extracts phrases which are logically linked, and puts them together in an argument. When the audience listens, it has the illusion of a logical construction, but it is simply a compilation of phrases from a superficial analysis. The AI program doesn’t understand the meaning of what it says.

IBM’s Project Debater speaking on the statement “We should subsidize preschools”

 

Does it matter whether AI understands, as long as it is effective?

JLD: Systems that don’t understand end up making mistakes that humans wouldn’t make. Automated translation systems are extremely useful, for example. However, they can make mistakes on simple words because they do not understand implicit meaning, even when a child could grasp the meaning from the context. The AI behind these programs is very effective as long as it remains within a given context, like SHRDLU. As soon as you put it into an everyday life situation, where it needs to take context into account, it turns out to be limited because it does not understand the meaning of what we are asking it.

Are you saying that artificial intelligence is not intelligent?

JLD: There are two fundamental, opposing visions of AI these days. On the one hand, there is a primarily American view which focuses on performance; on the other, Turing’s view that if an AI program cannot explain what it is doing or interact with me, I will not call it “intelligent”. From a utilitarian point of view, the first vision is successful in many ways, but it comes up against major limitations, especially in problem-solving. Take the example of a connected building or house. AI can make optimal decisions, but if the decisions are incomprehensible to humans, they will consider the AI stupid. We want machines to be able to think sequentially, like us: I want to do this, so I have to change that; and if that creates another problem, I will then change something else. The machine’s multi-criteria optimization sets all the parameters at the same time, which is incomprehensible to us. It will certainly be effective, but ultimately the human will be the one judging whether the decision made is appropriate or not, according to their values and preferences, including their will to understand the decision.

Why can’t a machine understand the meaning of the actions we ask of it?

JLD: Most of today’s AI programs are based on digital techniques, which do not incorporate the issue of representation. If I have a problem, I set the parameters and variables, and the neural network gives me a result based on a calculation I cannot understand. There is no way of incorporating concepts or meaning. There is also work being done on ontologies. Meaning is represented in the form of preconceived structures where everything is explicit: a particular idea or concept will be paired with a linguistic entity. For example, to give a machine the meaning of the word “marriage”, we will associate it with a conceptual description based on a link between a person A and a person B, and the machine will discover for itself that there is a geographical proximity between these two people, that they live in the same place, and so on. Personally, I don’t believe that ontologies will bring us closer to an AI which understands what it is doing, and thus one that is truly intelligent under Turing’s definition.

What do you think is the limitation of ontologies?

JLD: They too have difficulty being scaled up. For the example of marriage, the challenge lies in giving the machine the full range of meaning that humans attribute to this concept. Depending on an individual’s values and beliefs, their idea of marriage will differ. Making AI understand this requires constructing representations that are complex, sometimes too complex. Humans understand a concept and its subtleties very quickly, with very little initial description. Nobody spends hours on end teaching a child what a cat is. The child does it alone by observing just a few cats and finding the common point between them. For this, we use special cognitive mechanisms including looking for simplicity, which enables us to reconstruct the missing part of a half-hidden object, or to understand the meaning of a word which can have several different meanings.

What does AI lack in order to be truly intelligent and acquire this implicit knowledge?

JLD: Self-observation requires contrast, which is something AI lacks. The meaning of words changes with time and depending on the situation. If I say to you: “put this in the closet”, you will know which piece of furniture to turn to, even though the closet in your office and the one in your bedroom do not look alike, neither in their shape nor in what they contain. This is what allows us to understand very vague concepts like the word “big”. I can talk about “big bacteria” or a “big galaxy” and you will understand me, because you know that the word “big” does not have an absolute meaning. It is based on a contrast between the designated object and the typical corresponding object, depending on the context. Machines do not yet know how to do this. They would recognize the word “big” as a characteristic of the galaxy, but using “big” to describe bacteria would make no sense to them, for example. They need to be able to make contrasts.
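A toy illustration of this contrast mechanism, with made-up reference sizes, might look like this (everything here is an assumption for the sake of the example):

```python
# Meaning-by-contrast: "big" is judged relative to the typical size of the
# object's own class, not on an absolute scale.
# Reference sizes below are illustrative assumptions, in meters.
TYPICAL_SIZE = {
    "bacterium": 2e-6,
    "cat": 0.5,
    "galaxy": 1e21,
}

def is_big(kind: str, size: float, factor: float = 2.0) -> bool:
    """'Big' means clearly exceeding the norm for the object's own class."""
    return size > factor * TYPICAL_SIZE[kind]

print(is_big("bacterium", 1e-5))  # True: enormous *for a bacterium*
print(is_big("galaxy", 1e20))     # False: small *for a galaxy*
```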

Is this feasible?

JLD: Quite likely, yes. But we would have to augment digital techniques to do so. AI designers are light years away from being able to address this type of question. What they want to figure out is how to improve the performance of their multi-layer neural networks. They do not see the point of striving towards human intelligence. IBM’s Project Debater is a perfect illustration of this: it is above all about classification, with no ability to make contrasts. On the face of it, it is very impressive, but it is not as powerful as human intelligence, with its cognitive mechanisms for extrapolating and making arguments. The IBM program contrasts phrases according to the words they contain, while we contrast them based on the ideas they express. In order to be truly intelligent, AI will need to go beyond simple classification and attempt to reproduce, rather than mimic, our cognitive mechanisms.

 


Sand, an increasingly scarce resource that needs to be replaced

Humans are big consumers of sand, to the extent that this now valuable resource is becoming increasingly scarce. Being in such high demand, it is extracted in conditions that are not always respectful of the environment. With the increasing scarcity of sand and the sometimes devastating consequences of mining at beaches, it is becoming crucial to find alternatives. Isabelle Cojan and Nor-Edine Abriak, researchers in geoscience and geomaterials at Mines ParisTech and IMT Lille Douai respectively, explain the stakes involved with regard to this resource.

 

“After air and water, sand is the third most essential resource for human beings,” explains Isabelle Cojan, researcher in geoscience at Mines ParisTech. Sand is, of course, used indirectly by most of us, since the major part of this resource is consumed by the construction industry. Between 15 and 20 billion tons of sand are used every year in the construction of buildings and roads all over the world and in land reclamation. In comparison, the amount used in other applications, such as micro-computing or glass, detergent and cosmetics manufacturing, is around 0.2 billion tons. On an individual scale, global sand consumption stands at 18 kg per person per day.

At first glance, our planet could easily provide the enormous amount of sand required by humanity. Huge deserts cover a quarter of the surface of the Earth’s continents, and dune fields occupy nearly 20% of these deserts. The problem is that this aeolian sand is unusable for construction or reclamation: “The grains of desert sand are too smooth, too fine, and cannot bind with the cements to make concrete,” explains Isabelle Cojan. For other reasons, including economic ones, marine sands can only be exploited at shallow depths, as is currently the case in the North Sea. The volume of the Earth’s available sand is significantly smaller once these areas are removed from the equation.

Deserts are poor suppliers of construction sand. The United Arab Emirates is therefore paradoxically forced to import sand to support the development of Dubai, which is located just a stone’s throw from the immense desert of the Arabian Peninsula.

So where do we find this precious sediment? Silica-rich sands, such as those found under the Fontainebleau forest, are reserved for the production of glass and silicon. On the other hand, the exploitation of fossil deposits is limited by the anthropization of certain regions, which makes it difficult to open quarries. For the construction industry, the sands of rivers and coastal deposits fed partly by river flow must be used. The grains of alluvium from beaches are angular enough to cling to cements. Although the quantities of sand on beaches may seem huge to the human eye, they are not really that big. In fact, most of the sediments in the alluvial plains of our rivers and coastlines are inherited from the large-scale erosion that occurred during the Quaternary Ice Age. Today, with building developments along riverbanks and less erosion, Isabelle Cojan points out that “we consume twice as much sand as the amount the rivers bring to the coastal regions.”

Dams not only retain water, they also prevent sediment from descending downstream. The researcher at Mines ParisTech points to the example of the watercourse of the Durance, a tributary of the Rhône that she studied: “Before development, it deposited 3 million tons of sediment in the Mediterranean every year. Today, this quantity is between 0.1 and 0.5 million tons.” Most of the sediment provided by the erosion of the watershed is trapped by infrastructure, and settles to the bottom of artificial water reservoirs.

The sand rush and its environmental impact

As a result of the lower release of sediment into the seas and oceans, some beaches are naturally shrinking, especially where the extraction industry takes sand from the coasts. Some countries are thus in a critical situation where entire beaches are disappearing. This is the case in Togo, for example, and several other African countries where sand is extracted without too many legislative constraints. “The Comoros derives a lot of its income from tourism, and is extracting more and more sand from beaches to support its economic development because there is no significant reserve of sand elsewhere on the islands,” explains Isabelle Cojan. The extraction of large volumes of sand leads to coastal erosion and the loss of land to the sea. The situation is similar in other parts of the world. “Singapore has significantly increased its surface area by developing polders in the sea,” the researcher continues. “Nearby islands at the surface of the water providing an easy supply of sand disappeared before the regulations of the countries concerned prohibited such extraction.”

In Europe, practices are more closely controlled. In France, in particular, extraction from riverbeds – a practice which dates back thousands of years – was carried out until the 1970s. The poorly regulated exploitation of alluvial deposits led to changes in river profiles, leading to the scouring of bridge anchorages, which then became fragile and sometimes even collapsed. Extraction from rivers has since been forbidden, and only deposits on alluvial plains that do not directly impact riverbeds may be used for extraction. On the coast, exploitation is regulated by the French Environmental Code and Mining Code, which prohibit any extraction that could directly or indirectly compromise the integrity of a beach.

However, despite this legislation, indirect adverse consequences are difficult to prevent. The impact of sand extraction in the medium term is complex for researchers to model. “In coastal areas, we have to take account of a set of complex processes linked to tides, storms, coastal drift, vegetation cover, tourism and port facilities,” lists Isabelle Cojan. In some cases, sustainable extraction entails no danger. In other situations, a slight change in the profile of the beach could have serious consequences. “This can lead to a significant retreat of the coastline during storms and flooding of the hinterland by marine waters, especially during equinox storms,” the researcher continues. On coastlines undergoing natural erosion with low-relief hinterlands, sand extraction may, over time, lead to an irreversible destabilization of the coast.

The implications of the disappearance of beaches are not only aesthetic. Beaches, and the dunes that very often lie along their edge, constitute a natural sedimentary barrier against the onslaught of the waves and are the primary protection against erosion. Beaches limit, for example, the retreat of chalk cliffs. On coasts with low landforms, beach-dune systems form a barrier against the entry of the sea into the land. When water breaches this natural sediment barrier, fields can be salinized, drastically changing farming conditions and ruining arable land.

What are the alternatives to sand?

Faced with the scarcity of this resource, new avenues are being explored to find an alternative to sand. Recycled concrete, glass or metallurgical waste can be used to replace sediment in the composition of concrete. However, building materials produced in this way encounter performance problems: “They age very quickly and can release pollutants over time,” explains Isabelle Cojan. Another limitation is that these alternative options are currently not sufficient in volume. France produces 370 million tons of sand annually, whereas recycling only produces 20 million tons.


Major efforts to structure a dedicated recycling sector would be necessary, with all the economic and political debates at national and local levels that this implies. Since manufacturers want high-performance products, this could not be done until research has found a way to limit the aging and pollution of materials made from recycled materials. While recycling should not be ruled out, it is clear that the solution is only feasible over a relatively long time scale.

In the shorter term, another alternative could come from the use of other sand deposits currently considered as waste. Through pioneering work in geomaterials at IMT Lille Douai, Nor-Edine Abriak has demonstrated that it is possible to exploit dredging sand. This sediment comes from the bottom of rivers and streams, and is extracted for water course development. Dredging is mainly used to allow waterway navigation and large quantities of sand are extracted every year from ports and river mouths. “When I started my research on the subject a few years ago, the port of Dunkirk was very congested with sediments,” recalls the researcher. He joined forces with the local authorities to set up a research chair called Ecosed in order to find a way to use this sand.

For the construction industry, the major drawback of dredging sediments is their high clay content. “Clay is a nightmare for people working with concrete,” warns Nor-Edine Abriak. “The particles can swell, increasing the setting time of the cement in the concrete and potentially diminishing the performance of the final material.” It is customary to use a sieve to separate the clay, which requires equipment and dedicated logistics, and therefore additional costs. For this reason, these sediments are rejected by industrial players. “The only way to be competitive with these sediments is to be able to use them as they are, without a separation process,” admits the leader of the Ecosed Chair. The research at IMT Lille Douai has led to a more convenient and rapid treatment process using lime that eliminates the need for sieving. This process also breaks down the organic matter in the sediments and improves the setting of the cement, making it possible to rapidly use the sand extracted from the bottom of the port of Dunkirk.

The Ecosed Chair has also provided a way to overcome another problem, that of the salinity of these shallow sands in contact with seawater. Salt corrodes materials, therefore shortening the useful life of concrete. To remove it, the researchers used a simple water wash. “We showed that the dredging sand could simply be stored in large lagoons and the salt would be drained by the rain,” explains Nor-Edine Abriak. The solution is a simple one, provided there is enough space nearby to build the lagoons, which is the case in the area around Dunkirk.

With these results, the team of scientists demonstrated that dredging sediments extracted from the bottom of ports could be used as an alternative to beach sediments, instead of being considered as waste. “We were the first in the world to prove this was possible,” Nor-Edine Abriak says proudly. This solution does not provide a full replacement, as dredged sediments have different mechanical properties that must be taken into account in order not to affect the durability of the materials. The first scaling tests showed that dredging sands could be used in proportions of up to 70% in the composition of materials for roads and 12% for buildings, with no loss of quality for the end material.

Because almost all ports have to carry out dredging, this research offers a major opportunity to reduce sand extraction from beaches and alluvium. In early March, the Ecosed team went to Morocco to launch a second chair on waste recovery, and dredging sands in particular: “the same situation as in Dunkirk can be seen in Tangier and Agadir,” explains the researcher, highlighting the global nature of the problem. In June 2019, the Ecosed Chair in France became Ecosed Digital 4.0, going from a budget of €2 million to €24 million with the aim of structuring a specific sector for the recovery of dredged sediments in France. While this work alone will not fully solve the problem of sand scarcity, it will nevertheless provide an impetus to reduce sand extraction in areas where such mining poses a threat. This type of initiative must also be scaled up, both nationally and internationally.


Light, a possible solution for a sustainable AI

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)


We are currently witnessing a rapidly growing adoption of artificial intelligence (AI) in our everyday lives, which has the potential to translate into a variety of societal changes, including improvements to the economy, better living conditions, easier access to education, well-being, and entertainment. Such a much-anticipated future, however, is tainted by issues related to privacy, explainability and accountability, to name a few, that constitute a threat to the smooth adoption of AI, and which are at the center of various debates in the media.

A perhaps more worrying aspect is related to the fact that current AI technologies are completely unsustainable, and unless we act quickly, this will become the major obstacle to the wide adoption of artificial intelligence in society.

AI and Bayesian machine learning

But before diving into the issues of sustainability of AI, what is AI? AI aims at building artificial agents capable of sensing and reasoning about their environment, and ultimately learning by interacting with it. Machine Learning (ML) is an essential component of AI, which makes it possible to establish correlations and causal relationships among variables of interest from data and prior knowledge of the processes characterizing the agent’s environment.

For example, in life sciences, ML can be helpful to determine the relationship between grey matter volume and the progression of Alzheimer’s disease, whereas in environmental sciences it can be useful to estimate the effect of CO2 emissions on climate. One key aspect of some ML techniques, in particular Bayesian ML, is the possibility to do this while accounting for the uncertainty due to the lack of knowledge of the system, or the fact that a finite amount of data is available.

Such uncertainty is of fundamental importance in decision making when the cost associated with different outcomes is unbalanced. A couple of examples of domains where AI can be of tremendous help include a variety of medical scenarios (e.g., diagnosis, prognosis, personalised treatment), environmental sciences (e.g., climate, earthquake/tsunami), and policy making (e.g., traffic, tackling social inequality).

Unsustainable AI

Recent spectacular advances in ML have contributed to an unprecedented boost of interest in AI, which has triggered huge amounts of private funding into the domain (Google, Facebook, Amazon, Microsoft, OpenAI). All this is pushing the research in the field, but it is somehow disregarding its impact on the environment. The energy consumption of current computing devices is growing at an uncontrolled pace. It is estimated that within the next ten years the power consumption of computing devices will reach 60% of the total amount of energy that will be produced, and this will become completely unsustainable by 2040.

Recent studies show that the ICT industry today is generating approximately 2% of global CO₂ emissions, comparable to the worldwide aviation industry, but the sharp growth curve forecast for ICT-based emissions is truly alarming and far outpaces aviation. Because ML and AI are fast-growing ICT disciplines, this is a worrying perspective. Recent studies show that training a well-known class of ML model, called an auto-encoder, can emit as much carbon as five cars over their lifetimes.

If, in order to create better living conditions and improve our estimation of risk, we are impacting the environment to such a wide extent, we are bound to fail. What can we do to radically change this?

Let there be light

Transistor-based solutions to this problem are starting to appear. Google developed the Tensor Processing Unit (TPU) and made it available in 2018. TPUs offer much lower power consumption than GPUs and CPUs per unit of computation. But can we break away from transistor-based technology to compute with lower power, and perhaps faster? The answer is yes! In the last couple of years, there have been attempts to exploit light for fast and low-power computations. Such solutions are somewhat rigid in the design of the hardware and are suitable for specific ML models, e.g., neural networks.

Interestingly, France is at the forefront in this, with hardware development from private funding and national funding for research to make this revolution a concrete possibility. The French company LightOn has recently developed a novel optics-based device, which they named Optical Processing Unit (OPU).

“Optical computing leading the AI scale-up”, Igor Carron, CEO, LightOn (CognitionX video, 2018).

 

In practice, OPUs perform a specific operation: a linear transformation of input vectors followed by a nonlinear transformation. Interestingly, this is done in hardware by exploiting the scattering properties of light, so that in practice these computations happen at the speed of light and with low power consumption. Moreover, it is possible to handle very large matrices (in the order of millions of rows and columns), which would be challenging with CPUs and GPUs. Due to the scattering of light, this linear transformation is the equivalent of a random projection, i.e., the transformation of the input data by a matrix of random numbers whose distribution can be characterized. Are random projections any use? Surprisingly, yes! A proof-of-concept that this can be useful to scale computations for some ML models (kernel machines, which are an alternative to neural networks) has been reported here. Other ML models can also leverage random projections for prediction or change-point detection in time series.
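For intuition, here is a small sketch simulating an OPU-style computation in software. The dimensions and the intensity nonlinearity are illustrative assumptions; a real OPU performs the projection optically, through a fixed scattering medium:

```python
import numpy as np

# The scattering medium acts like a fixed random matrix: draw it once.
rng = np.random.default_rng(0)
D_IN, D_OUT = 784, 2_000  # illustrative input/projection dimensions
W = rng.normal(size=(D_OUT, D_IN)) + 1j * rng.normal(size=(D_OUT, D_IN))

def simulated_opu(x: np.ndarray) -> np.ndarray:
    """Random linear projection followed by a nonlinearity.

    On the real device the projection happens optically; the camera
    records light intensity, i.e. the squared modulus |Wx|^2.
    """
    return np.abs(W @ x) ** 2

# The resulting random features can feed a plain linear model, giving a
# scalable approximation to a kernel machine.
features = simulated_opu(np.random.rand(D_IN))
```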

We believe this is a remarkable direction for making modern ML scalable and sustainable. The biggest challenge for the future, however, is how to rethink the design and the implementation of Bayesian ML models so as to be able to exploit the computations that OPUs offer. Only now are we starting to develop the methodology needed to fully take advantage of this hardware for Bayesian ML. I’ve recently been awarded a French fellowship to make this happen.

It’s fascinating how light and randomness are not only pervasive in nature, they’re also mathematically useful for performing computations that can solve real problems.


Created in 2007 to help accelerate and share scientific knowledge on key societal issues, the Axa Research Fund has been supporting nearly 600 projects around the world conducted by researchers from 54 countries. To learn more, visit the site of the Axa Research Fund.

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Serious games: when games invade the classroom

Over the last few years, a new teaching method linked to the spread of digital technology in our daily lives has begun shaking up traditional learning methods. The primary purpose of these serious games is not entertainment. This developing sector does not seek to replace existing educational tools, but rather to supplement them, or at least earn its place in the arsenal. Imed Boughzala, a researcher in management at Institut Mines-Télécom Business School, offers a closer look at this phenomenon.

 

Video games are all the rage. According to SELL, a French organization promoting the interests of video game developers, in 2018, this market was estimated at nearly €5 billion and is steadily growing, with more and more people playing and consuming video games. In fact, the video game industry is now doing better than the book market. This clearly creates an opportunity for teachers to take advantage of this gaming culture and break away from traditional learning methods.

Discover the history of Ancient Egypt with Assassin’s Creed, use the popularity of a game like Fortnite to raise awareness about climate change or develop strategy skills with Civilization or Warcraft. While some teaching methods in France are beginning to adopt these games, research in this area remains limited. It is therefore difficult to assess the effectiveness of these new spaces for informal learning. Imed Boughzala first embarked on this adventure nearly 10 years ago:

“In 2008, while traveling in the United States as a guest professor with the Management Department at the University of Arkansas, I had the opportunity to create a distance learning course on information systems on a platform called Second Life, a platform that was ahead of its time and still exists. A few months later, the university campus had to close due to an avian flu outbreak. We therefore began to focus on implementing a completely virtualized training program. At the time, I designed a serious game for students stuck at home.”

Back in France after this experience, he continued his research on collaborative games. He led his research team, SMART² (Smart Business Information Systems), on a mission to pursue the digital transformation of organizations. “We began imagining how educational tools could be used to motivate students more and seeking a method for getting their attention. We began with the observation that a wide gap currently exists between the digital culture of young people and the university culture,” Imed Boughzala explains. In addition, when students play a role, there are many motivational factors involved: moving from one level to the next, receiving awards, making a mistake and starting over immediately, testing and learning.

Playing for the sake of learning

But what do we really mean by a serious game? These new digital practices cover several key concepts. A serious game is a video game created for educational or practical purposes. Serious gaming is a broader concept that refers to the way certain games can be used as serious tools. Finally, gamification refers to adding a fun aspect to a serious subject.

For Imed Boughzala, it all started with a very practical situation. “At Institut Mines-Télécom Business School, 200 management students were enrolled in our program. Capturing their attention was very complicated when it came to very technical topics. The atmosphere in class and the exams were not always great. So why not have them play a game? By sheer coincidence, one day we came across a game by IBM. It was INNOV8, which aims to help future entrepreneurs develop certain computer and business skills,” the researcher explains.

Virtual worlds and serious games therefore helped the students tackle decision-making processes that are very real. “It was an immediate success, which led us to create a new, more customized scenario, and teach them how to create data patterns. Instead of doing exercises, this allowed them to play as much as necessary to understand how the tool behind the game is implemented. We therefore tried to take into account the technical aspects: how the game is played, how it is used and the specific context, that of Millennials,” the researcher explains.

Innovating through serious games

Does digital technology truly transform our relationship with knowledge? Is a good serious game worth more than a long speech? For Imed Boughzala, there’s no doubt about it. A fun game can simulate a real professional environment. Students become active participants in their learning as they are confronted with a problem, a dilemma they must resolve. “This is an important thing for a generation that quickly jumps from one thing to the next. We can try to fight against this reality, but it is quite clear: We can no longer teach the same way. We must add variety to our teaching outlines and add games to give everyone a breath of fresh air. That’s the true benefit.”

While it has now become necessary to use entertainment to reach training objectives, Imed Boughzala sought to link the development of these teachings to his research. He did this by focusing on the effectiveness and assessment of serious games in training programs. This is a complex subject because “we must distinguish between the performance perceived due to the format, content and presentation as a fun game, and the measurable assessment, the real performance. In other words, the knowledge related to professional activities that has actually been gained.”

The researcher is already convinced by the results of the gamification of certain educational processes, especially in learning complex procedures, and in many different areas: management techniques, finance, city administration, sustainable development, healthcare and medicine. Immersive and interactive serious games also test students’ collective intelligence. One example is the Foldit project, an experimental video game on protein folding created in 2008. Whereas scientists had spent 10 years searching for the three-dimensional structure of a protein of an AIDS virus in monkeys, the “players” were able to find a solution in three weeks, leading to the development of new antiretroviral drugs.

These practical cases can now be added to scientific databases made available to the scientific community. The institutional community is also beginning to recognize these realities, with the French Foundation for Management Education (FNEGE) creating a certification board to assess these new digital tools. This board will assess the educational added value of these tools, in other words, their ability to meet the defined learning goals. Solving puzzles, creating, experiencing, participating: serious games offer new ways of learning and highlight the importance of variety. Since video games can motivate users to become intensely involved for unprecedented periods of time, their educational counterparts are entirely appropriate for training purposes.

Article written for I’MTech by Anne-Sophie Boutaud


Fighting fire: from ancient Egypt to Notre-Dame de Paris

Article written in partnership with The Conversation France.
By Rodolphe Sonnier, IMT Mines Alès.
This article was co-authored by Clément Lacoste (IMT Mines Alès), Laurent Ferry (IMT Mines Alès) and Henri Vahabi (Université de Lorraine).


The discovery of fire is often cited as the most important discovery in the history of mankind, given its major impact on the development of the Homo genus. By reducing the amount of energy required to digest food, cooking led to an increase in brain size. Fire seems to have been mastered approximately 400,000 years ago, although evidence of its use much earlier has been found. However, with urbanization, fire has also become a serious problem when it spreads uncontrollably. Examples include the great fire of Rome in the year 64 AD or the recent fire at the Notre-Dame de Paris Cathedral.

What is fire?

A fire requires a combination of three elements: a fuel source, an oxidizer and a heat source. This combination of elements is called the fire triangle. These elements interact through a complex process involving physical phenomena, such as heat transfer, and chemical phenomena, such as pyrolysis of the fuel source and combustion of the pyrolysis products.

Technically, a distinction is made between reaction to fire and fire resistance. Reaction to fire concerns combustible materials, which are likely to release heat when they decompose as a result of the temperature and in the presence of an oxidizer (most often the oxygen present in the air). Fire resistance considers an element’s ability to maintain its load-bearing capacity, thermal insulation and smoke and gas tightness properties during a fire. Since wood is a combustible material used as a structural element in buildings, it is considered in light of both of these aspects, which rely on specific standards and a variety of tests.

When it comes to fighting fire, there are two strategies which are not mutually exclusive. The first calls for using what are referred to as active systems in the event of a fire: extinguishers, smoke detectors or automatic sprinkler systems. The second consists in using materials that will contribute as little as possible to the propagation of the fire.

Fireproofing

Since many materials, including most plastics and wood, are naturally highly flammable, additives called flame retardants must be incorporated within or on the surface of the flammable material. These flame retardants make it possible to modify the material’s behavior by disrupting the fire triangle.

Their main effects are to delay the appearance of flames, slow the speed of flame propagation, reduce the heat released and the power of the fire, and limit the opacity and toxicity of the smoke. All these effects are assessed through standardized reaction-to-fire tests, which result in classifications that determine, according to regulations, whether a material may be used for a given application. There is no universal flame retardant: a fireproofing system must be tailored to the material it is intended to protect, in particular by taking its decomposition process into consideration. The choice of a flame retardant is also informed by the process used to manufacture the material, and must not significantly affect its intended functional properties.

Archeologists place the beginnings of fireproofing in antiquity. Around 400 BC, the Egyptians used minerals to make fabrics such as cotton and linen fire-resistant. Later, during the siege of Piraeus (23 BC), alum solutions were used to make wooden ramparts fire-resistant. Yet it was not until 18 June 1735 that the Englishman Obadiah Wyld filed the first patent, patent number 551, for a cotton treatment. In the 19th century, the king of France, Louis XVIII, requested that a solution be found to prevent fires in Parisian theaters, which were lit with candles. Joseph Louis Gay-Lussac subsequently filed a patent for the use of a mixture of ammonium phosphate, ammonium chloride and borax to fireproof theater curtains.

Flame retardants

There are several families of flame retardants, which are based on different chemical elements and work in various ways. Historically, halogenated molecules containing chlorine or bromine have been widely used since they are effective even in small quantities. These molecules act by disrupting the combustion reactions that take place within flames, which makes it easier to extinguish them and limits the amount of energy released. This is referred to as flame inhibition. However, the toxic nature of certain halogen compounds has led to a ban on their use. Since it is impossible during recycling to easily distinguish between authorized and prohibited brominated molecules, it is no longer possible to recycle plastics treated with these flame retardants. Moreover, these molecules lead to the formation of opaque, corrosive smoke in the event of a fire. For all these reasons, this family of fire retardants has increasingly come under scrutiny.

This family has largely been replaced by phosphorus-based flame retardants. Given their wide variety, they can act in a number of ways, but their main mode of action remains facilitating the formation of a residual layer on the surface of the combustible material, protecting the healthy part beneath. The strategy consists in disrupting the pyrolysis reactions (decomposition of the material due to heat) and promoting the formation of a thermally stable, carbon-rich residue called “char.” Particularly effective systems are called intumescent, because the char forms an expanded, insulating, highly protective layer. This type of intumescent system is used in protective coatings for metallic components and wood.

We can also mention metallic hydroxides, which are inexpensive but proportionally less effective, meaning they must be incorporated at higher levels (up to 65% by mass in outer sheaths for electrical wires) to produce a significant effect. Under the effect of temperature, these particles release water vapor through endothermic decomposition, thereby helping to cool the material and dilute the fuel in the flame.

There are also other chemistries, based for example on nitrogen (melamine), boron (zinc borate) or tin (hydroxystannate). Nanotechnologies have also been used in the field of fireproofing for the past fifteen years. Nanoparticles such as lamellar clays or carbon nanotubes improve the insulating properties of the char formed, even at low loadings, but on their own they are insufficient to provide overall protection of the material.

And wood?

In general, materials of organic origin (derived from biological organisms) such as oil, wood and coal have a composition rich in carbon and hydrogen atoms, which are likely to be oxidized. They are therefore combustible. Wood is a material with a complex structure and an elemental chemical composition made up of carbon (50%), oxygen (44%) and a small amount of hydrogen (6%).

Wood is a low-density material with a natural ability to char, meaning a protective layer of char forms between the healthy wood and the flames. When wood burns, it first loses its water, becoming completely dry at 120 °C. Its structure then gradually decomposes as the temperature rises. Its components remain relatively stable up to 250 °C, the temperature at which wood starts giving off smoke. At 320 °C, enough gas is released for the wood to ignite. Pyrolysis takes place mainly up to 500 °C, after which point only charcoal (char) remains, which slowly decomposes through oxidation. While the char layer slows down the pyrolysis of the underlying healthy wood, its mechanical strength is negligible: as pyrolysis continues, the useful cross-section of a wooden structural element is reduced, and with it its load-bearing capacity.
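As a rough illustration, these decomposition stages can be written out as temperature thresholds. The sketch below simply encodes the rounded temperatures quoted above; the function name and the wording of the states are ours, for illustration, and do not constitute a material model.

```python
# Illustrative only: the decomposition stages of wood described above,
# expressed as temperature thresholds (in °C). The boundaries are the
# rounded figures quoted in the article, not a physical model.

def wood_state(temperature_c: float) -> str:
    """Return the approximate state of wood at a given temperature."""
    if temperature_c < 120:
        return "drying: water is evaporating"
    if temperature_c < 250:
        return "dry: components remain relatively stable"
    if temperature_c < 320:
        return "decomposing: wood starts giving off smoke"
    if temperature_c < 500:
        return "pyrolysis: enough gas is released to ignite; char forms"
    return "char only: slow decomposition through oxidation"

for t in (100, 200, 300, 400, 600):
    print(t, "°C ->", wood_state(t))
```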

The flame retardants used to fireproof wood belong to the families listed above (phosphorus, boron, nitrogen, metallic hydroxides). However, unlike with plastics, these additives cannot be incorporated while the material is being produced. Fireproofing therefore takes one of two forms: applying a surface coating (paint, varnish) or impregnating the core of the wood, meaning the hollow part of wood cells (called the lumen), through an autoclave process. The process involves emptying the lumens of air by degassing under vacuum, then forcing the flame retardant into the wood by subjecting it to high pressure. This more complex solution makes it possible to prevent the material’s fire performance from being diminished by surface defects: when a coating deteriorates, it can no longer play its role as a flame retardant, leaving the wood without protection in the event of a fire.

[divider style=”dotted” top=”20″ bottom=”20″]


Rodolphe Sonnier, Mines Alès – Institut Mines-Télécom

The original version of this article (in French) was published on The Conversation and republished under a Creative Commons license. Read the original article.

AI Artificial Intelligence

Far from fantasy: the AI technologies which really affect us

Editorial

“Artificial Intelligence”. It’s hard to define a technology which encompasses such a large variety of tools and techniques (centralized or decentralized approaches, supervised or unsupervised learning, ontologies, etc.), with ramifications in each of these categories, ranging from neural networks to autonomous agents. The scope of AI is both broad and rich. It would therefore be a shame to allow powerful economic players to sum it up as a simple household gadget that may well be artificial, but is of questionable intelligence.

To delve into the complexity of AI is to understand the mechanisms that underlie certain products and services that we use. It also endows us with the intellectual weapons to confront the alarmist visions of a future of humanity against machines, and to remain prudent in the face of over-enthusiastic technological solutionism.

It is with this in mind that we’re publishing this dossier to go with the IMT Scientific Symposium on artificial intelligence, held on April 4. It does not seek to draw an exhaustive portrait of AI, which is still impossible. Its aim is rather to make the reader aware of the applications of AI, and the ways in which it affects us directly, as citizens and consumers.

It therefore presents four examples of research work into smart homes, the customer journey in supermarkets, flood predictions, and synchronization with machines. These concrete insights show what artificial intelligence, with all its opportunities and limits, really represents.


To go further on the theme of artificial intelligence and its impact on the citizen-consumer, I’MTech offers a selection of our archives on the subject:

 

flood water level

How can AI help better forecast high and low water levels?

Predicting the level of water in rivers or streams can prove to be invaluable in areas with a high risk of flooding or droughts. While traditional models are based primarily on hypotheses about processes, another approach is emerging: artificial neural networks. Anne Johannet, an environmental engineering researcher at IMT Mines Alès, uses this approach.

This article is part of our dossier “Far from fantasy: the AI technologies which really affect us”.

In hydrology, there are two especially important notions: high and low water levels. The first describes a period in which the flow of a watercourse is especially high, while the second refers to a significantly low flow. These variations in water level can have serious consequences. A high water level can, for example, lead to flooding (although not systematically), while a low water level can lead to restrictions on water abstraction, particularly for agriculture, and can harm aquatic ecosystems.

Based on past experience, it is possible to anticipate which watercourses tend to rise to a certain level in the event of heavy precipitation. This approach can produce satisfactory results but clearly lacks precision. This is why Flood Forecasting Services (SPC) also rely on one of two types of models. The first is called “reservoir modeling”: it treats a drainage basin like a reservoir, which overflows when the water content exceeds its filling capacity. But forecasts made with this type of model may contain major errors, since it does not usually take into account soil heterogeneity or the variability of drainage basin use.
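To make the reservoir idea concrete, here is a minimal sketch of such a bucket-style model. The function name, parameter values and synthetic storm are invented for illustration; they are not taken from the SPC’s operational models.

```python
# Minimal sketch of a "reservoir" (bucket) rainfall-runoff model.
# The basin is a single bucket: rain fills it, it drains slowly,
# and any water above capacity overflows as runoff.

def simulate_reservoir(rainfall_mm, capacity_mm=100.0, drain_rate=0.05):
    storage = 0.0
    runoff = []
    for rain in rainfall_mm:
        storage += rain                      # rain fills the reservoir
        storage -= drain_rate * storage      # slow losses (drainage, evaporation)
        overflow = max(0.0, storage - capacity_mm)
        storage = min(storage, capacity_mm)  # excess water overflows
        runoff.append(overflow)
    return runoff

# Example: a heavy storm on day 3 of an otherwise dry week
print(simulate_reservoir([0, 5, 120, 10, 0, 0, 0]))
```

A single bucket obviously cannot represent heterogeneous soils or changing land use, which is precisely the limitation described above.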

The other approach is based on a physical model. The behavior of the watercourse being studied is simulated using differential equations and field measurements. This type of model is therefore meant to take all the data into account in order to provide reliable predictions. However, it reaches its limits when faced with high variability, as is often the case: how land reacts to precipitation may depend on human activity, the type of agriculture, the seasons, the existing vegetation, and so on. As a result, “it is very difficult to determine the initial state of a watercourse,” says Anne Johannet, an environmental engineering researcher at IMT Mines Alès. “It is the major unknown variable in hydrology, along with the unpredictability of rainfall.” Reality may therefore ultimately conflict with forecasts, as was the case with the exceptional rising of the Seine in 2016. Moreover, certain drainage basins, such as the Cévennes, are rarely addressed by physical models due to their complexity.

Neural networks learn independently

Anne Johannet’s research focuses on another approach which offers a new method for forecasting water flow: artificial intelligence. “The benefit of neural networks is that they can learn a function from examples, even if we don’t know this function”, explains the researcher.

Neural networks learn in a similar way to children. They start out with little information and study a set of initial data, initially calculating outputs more or less at random and inevitably making mistakes. Numerical analysis methods then make it possible to gradually improve the model in order to reduce these errors. In concrete terms, in hydrology, the objective of a neural network is to forecast the flow of a watercourse, or its water level, based on rainfall. A dataset describing past observations of the basin is therefore used to train the model. While it is learning, the neural network calculates a flow based on precipitation, and this result is compared to real measurements. The process is then repeated many times so the network can correct its mistakes.
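As a toy illustration of this principle (and in no way the researchers’ actual model), a small neural network can be trained to map recent rainfall to flow. The synthetic data, the three-day input window and the network size below are all invented for the example.

```python
# Toy rainfall -> flow learning sketch; all data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=1000)               # synthetic daily rainfall (mm)
flow = np.convolve(rain, [0.5, 0.3, 0.2], "same")   # invented basin response

# Each example: the previous 3 days of rainfall; target: today's flow
X = np.column_stack([rain[i:len(rain) - 3 + i] for i in range(3)])
y = flow[3:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X, y)              # repeated passes gradually reduce the error
print(model.predict(X[:5]))  # forecast flow from recent rainfall
```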

A pitfall to be avoided with this approach is “overlearning.” If a neural network is “overtrained,” it can eventually lose its ability to extrapolate and settle for knowing the data “by heart.” For example, if the network integrates the occurrence of a major rise in water level on 15 November 2002, overlearning could lead it to deduce that such an event will occur every year on 15 November. To avoid this phenomenon, the dataset used to train the network is divided into two subsets: one for learning and one for validation. As errors are corrected on the learning dataset, the network’s ability to generalize is verified on the validation dataset.
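Continuing the toy sketch above, this guard against overlearning can be expressed as a held-out validation set; the split proportion and the early-stopping option are illustrative choices, not those used by the researchers.

```python
# Reuses X and y from the previous sketch.
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     early_stopping=True,       # stop when validation error rises
                     validation_fraction=0.2, random_state=0)
model.fit(X_train, y_train)
print("score on unseen data:", model.score(X_val, y_val))
```

A model that merely memorized the training record would score well on the data it has already seen but poorly on this held-out set.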

The main benefit of this neural network approach is that it requires much less input data. A physical model requires a large amount of data about the nature of the land, the vegetation, the slope and so on. A neural network, on the other hand, “only needs the rainfall and the flow at the location we’re interested in, which facilitates its implementation,” says Anne Johannet. This lowers costs and provides quicker results. However, the success of such an approach relies heavily on the rainfall predictions used as input variables, and precipitation remains difficult to forecast.

Clear advantages but a controversial approach

Today, Anne Johannet’s models are already used by public services, including the Artois-Picardie Flood Forecasting Service (in the Hauts-de-France region). Based on rainfall predictions, agents establish scenarios and study their consequences using neural networks. Depending on the type of basin, which may react more or less quickly, they are able to make forecasts several hours or even a day in advance for high water levels, and several weeks in advance for low water levels.

This data can therefore have a direct effect on local authorities and citizens. For example, predicting a significant low-water period could lead water supply managers to switch to an alternative water source, or lead the authorities to prohibit water abstraction. Predicting a high-water period, on the other hand, could help anticipate potential flooding, based on the structure of the land.

Longer-term projections can also be established using data from the IPCC (Intergovernmental Panel on Climate Change). Trial forecasts of the flow of the Albarine river in the Ain department have been carried out, up to 2070, to assess the impact of global warming. The results indicate severe low-water levels in the future, which could affect land-use planning and farming activity.

However, despite these results, the artificial intelligence approach for predicting high and low water levels has been met with mistrust by many hydrologists, especially in France. They argue that these systems are incapable of generalizing due to climate change, and largely prefer physical or reservoir models. The IMT Mines Alès researcher rejects these accusations, underscoring the rigorous validation of neural networks. She suggests that the results from the different methods should be viewed alongside one another, evoking the words of statistician George Box: “All models are wrong, but some are useful.”

Article written for I’MTech by Bastien Contreras