
ThermiUp: a new heat recovery device

ThermiUp helps meet the challenge of saving energy in buildings. This start-up, incubated at IMT Atlantique, is set to market a device that transfers heat from grey water to fresh water. Its director, Philippe Barbry, gives us an overview of the system.

What challenges does the start-up ThermiUp help meet?

Philippe Barbry: Saving energy is an important challenge from a societal point of view, but also in terms of regulations. In the building industry, there are increasingly strict thermal regulations. The previous regulations were established in 2012, while the next ones will come into effect in 2022 and will include CO2 emissions related to energy consumption. New buildings must meet current regulations. Our device reduces energy needs for heating domestic water, and therefore helps real estate developers and social housing authorities comply with regulations.

What is the principle behind ThermiUp?

PB: It’s a device that exchanges energy between grey water, meaning lightly polluted wastewater from domestic use, and fresh water. The exchanger is placed as close as possible to the domestic water outlet so that this water loses as little heat as possible. The exchanger connects the water outlet pipe with the fresh water supply pipe.

On average, water from a shower is at 37°C and cools down slightly at the outlet: it is around 32°C when it arrives in our device. Cold water is at 14°C on average. Our exchanger preheats it to 25°C. Showers represent approximately 80% of the demand for domestic hot water and the exchanger makes it possible to save a third of the energy required for the total domestic hot water production.
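These figures can be checked with a rough back-of-the-envelope calculation. The sketch below is ours, not ThermiUp’s model: it assumes a 40°C shower set point and that heating energy scales linearly with the temperature rise, and uses the temperatures and the 80% shower share quoted above.

```python
# Rough check of the savings quoted in the interview (a sketch, not ThermiUp's model).
# Assumption: the energy needed to heat water is proportional to the temperature rise.

cold_in = 14.0       # °C, average cold water temperature (from the interview)
preheated = 25.0     # °C, temperature after the exchanger (from the interview)
target = 40.0        # °C, assumed shower set point (our assumption)
shower_share = 0.80  # showers ≈ 80% of domestic hot water demand (from the interview)

# Fraction of the heating energy avoided for shower water thanks to preheating
saved_per_shower = (preheated - cold_in) / (target - cold_in)

# Contribution to total domestic hot water energy
total_saving = shower_share * saved_per_shower

print(f"Saving on shower water: {saved_per_shower:.0%}")  # ≈ 42%
print(f"Saving on total DHW:    {total_saving:.0%}")       # ≈ 34%, about a third
```

Under these assumptions the result lands at roughly one third of the total domestic hot water energy, consistent with the figure given in the interview.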

Is grey water heat recovery an important energy issue in the building sector?

PB: Historically, most efforts have focused on heating and insulation for buildings. But great strides have been made in this sector and these aspects now account for only 30% of energy consumption in new housing units. As a result, domestic hot water now accounts for 50% of these buildings’ energy consumption.  

What is the device’s life expectancy?

PB: That’s one of the advantages of our exchanger: its life expectancy is equivalent to that of a building, which is considered to be 50 years. It’s a passive system, which doesn’t require electronics, moving parts or a motor. It is based simply on the laws of gravity and energy transformation. It can’t break down, which represents a significant advantage for real estate developers. ThermiUp reduces energy demand and is also compatible with other systems such as solar.

How does your exchanger work?

PB: It is not a traditional plate heat exchanger, since that would get dirty too quickly. Our research and development drew on other types of exchangers. The device is made of copper, an easily recycled material. We spent two years at IMT Atlantique optimizing the prototype’s heat exchange performance and geometry, along with its industrial manufacturing technique. But I can’t say more about that until it reaches the market in the next few months.

Do you plan to implement this device in other types of housing than new buildings?

PB: For now, we are only targeting the new-build market, which is a big one, since approximately 250,000 housing units in multiple-dwelling buildings are built each year in France. In the future, we’ll work on prototypes for individual houses as well as for the renovation sector.

Learn more about ThermiUp

By Antonin Counillon

Graphene, or the expected revolution in electronics: coming soon

Thibaut Lalire, IMT Mines Alès – Institut Mines-Télécom

“Material of the 21st century,” a “revolutionary material”: these are some of the ways graphene has been described since it was discovered in 2004 by Konstantin Novoselov and Andre Geim. The two scientists’ research on graphene won them the Nobel Prize in Physics in 2010. But how do things stand today – seventeen years after its discovery?

Graphene is known worldwide for its remarkable properties, whether mechanical, thermal or electrical. Its perfect honeycomb structure composed of carbon atoms is the reason why graphene is a high-performance material that can be used in numerous fields. Its morphology, in the form of a sheet just one atom thick, makes it part of the family of 2D materials. Manufacturers have stepped up research on this material since its discovery, and a wide range of applications have been developed, in particular by taking advantage of graphene’s electrical performance. Many sectors are targeted, such as aeronautics, the automotive industry and telecommunications.

Is there graphene in airplanes?

Graphene is prized as a champion of electrical conductivity, as well as for its low density and flexibility. These properties allow it to join the highly exclusive club of materials used in aeronautics.

Lightning and ice buildup are problems frequently encountered by airplanes at high altitudes. The impact of a lightning strike on a non-conductive surface causes severe damage that can even include the aircraft catching fire. The addition of graphene, with its high electrical conductivity, makes it possible to dissipate this high-energy current. Airplanes are designed in such a way as to route the current as far as possible from risk areas – fuel tanks and control cables – and therefore prevent loss of control of the aircraft, or even explosion.

The history of graphene starts here. Umberto/Unsplash, CC BY

A coating composed of a resin reinforced with graphene, which is referred to as a “nanocomposite,” is used as an alternative to metal coating, since its low density makes it possible to obtain lighter materials than the original ones – limiting the aircraft’s mass, and therefore its fuel consumption. But the electrically conductive materials required to dissipate the energy of a lightning strike have the drawback of reflecting electromagnetic waves, meaning that this kind of material cannot be used for stealth military applications.

To overcome this shortcoming, different forms of graphene have been developed to conserve its electrical conductivity while improving stealth. “Graphene foam” is one of these new structures. The wave penetrates the material and is reflected in all directions inside it, so that it becomes trapped and gradually attenuated. Since the wave cannot return to the radar, the aircraft becomes stealthy. This is referred to as electromagnetic shielding.

Graphene for energy storage

Graphene has also become widely used in the field of electrical energy storage.

Graphene is an ideal candidate as an electrode material for Li-ion batteries and supercapacitors. Its high electrical conductivity and high specific surface area (the surface available on the graphene to accommodate ions and facilitate the exchange of electrons between the graphene electrode and the lithium) make it possible to obtain a large “storage capacity.” A large number of ions can easily insert themselves between the graphene sheets, which allows electrons to be exchanged with the external circuit, increasing the battery’s electricity storage capacity and therefore battery life. The ease with which ions insert themselves into the graphene electrode and the high electrical conductivity of this material (for rapid electron transfer) result in a battery with a much shorter charge/discharge cycle. Graphene’s high conductivity makes it possible to deliver a great quantity of energy in a very short time, resulting in more powerful supercapacitors. Graphene is also a good thermal conductor, which limits temperature rise in batteries by dissipating the heat.

Electric batteries are increasingly pervasive in our lives. Graphene could help improve their performance. Markus Spiske/Unsplash, CC BY

At the industrial level, Real Graphene has already developed an external battery that can completely recharge a mobile phone in 17 minutes. In an entirely different industry, Mercedes is working on a prototype car with a battery composed of graphene electrodes, claimed to offer a range of 700 kilometers on a 15-minute recharge – values that are surprising at first glance, especially for electric vehicles, which require batteries with high storage capacity.
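An order-of-magnitude calculation shows why such a claim is striking. The sketch below is ours: the consumption figure of about 15 kWh per 100 km is an assumed, typical electric-car value, not a Mercedes specification.

```python
# Order-of-magnitude estimate of the charging power implied by the claim above
# (a sketch; the energy consumption per km is an assumed, typical EV value).

range_km = 700                 # claimed range
charge_time_h = 15 / 60        # claimed recharge time, in hours
consumption_kwh_per_km = 0.15  # assumption: ~15 kWh per 100 km

battery_energy_kwh = range_km * consumption_kwh_per_km   # ≈ 105 kWh
charging_power_kw = battery_energy_kwh / charge_time_h   # ≈ 420 kW

print(f"Implied battery energy: ~{battery_energy_kwh:.0f} kWh")
print(f"Implied charging power: ~{charging_power_kw:.0f} kW")
```

Under these assumptions the charger would have to deliver on the order of 400 kW, several times what common fast chargers provide today, which is what makes the announcement surprising.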

Making its way into the field of electronics

One area where graphene has struggled to set itself apart from semiconductors is electronics. Its electronic properties – due to its “band structure” – mean that the flow of electrons cannot be switched off: graphene has no band gap and therefore behaves like a semi-metal. This means that the use of graphene for binary – digital – electronics remains challenging, especially for transistors, which are instead built from semiconductors.

In order for graphene to be used in transistors, its band structure must be modified, which usually means degrading its honeycomb structure and, with it, its electrical properties. If we want to conserve the 2D structure, the chemical nature of the atoms that make up the material must be changed instead, for example by using boron nitride or transition metal dichalcogenides, which also belong to the family of 2D materials.

Microscopy of the interface between graphene and boron nitride (h-BN). Oak Ridge National Laboratory/Flickr, CC BY

If, however, we wish to use graphene, we must target applications in which mechanical properties (flexibility) are also sought, such as for sensors, electrodes and certain transistors reserved for analog electronics, like graphene field-effect transistors. The leading mobile phone companies are also working on developing flexible mobile phone screens for better ergonomics.

The manufacturing of the coming quantum computers may well rely on materials known as “topological insulators.” These are materials that are electrical conductors on their surface, but insulators at their core. Research is now focusing on the topological phase of graphene with electric conduction only at the edges.  

The wide variety of applications for graphene demonstrates the material’s vast potential and makes it possible to explore new horizons in a wide range of fields such as optoelectronics and spintronics.

This material has already proved itself in industry, but has not revolutionized it so far. However, ongoing research allows new fields of application to be discovered every year. At the same time, synthesis methods are continually being developed to reduce the price of graphene per kilogram and obtain a higher-quality material.

Thibaut Lalire, PhD student in materials science, IMT Mines Alès – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Data governance: trust it (or not?)

The original version of this article (in French) was published in the quarterly newsletter no. 20 (March 2021) of the Values and Policies of Personal Information (VP-IP) Chair.

On 25 November 2020, the European Commission published its proposal for a European regulation on data governance, the Data Governance Act (DGA), which aims to “unlock the economic and societal potential of data and technologies like artificial intelligence”. The proposed measures seek to facilitate access to and use of an ever-increasing volume of data. The text thus seeks to contribute to the movement of data between member states of the European Union (as well as with states located outside the EU) by promoting the development of “trustworthy” systems for sharing data within and across sectors.

Part of a European strategy for data

This proposal is the first of a set of measures announced as part of the European strategy for data presented by the European Commission in February 2020. It is intended to dovetail with two other proposed regulations dated 15 December 2020: the Digital Services Act (which aims to regulate the provision of online services while maintaining the principle prohibiting a general surveillance obligation) and the Digital Markets Act (which organizes the fight against unfair practices by big platforms toward the companies that offer services through them). A legislative proposal for the European Health Data Space is expected for the end of 2021, and possibly a “data law.”

The European Commission also plans to create nine shared European data spaces in strategic economic sectors and areas of public interest, ranging from manufacturing and energy to mobility, health, financial data and Green Deal data. The first challenge to overcome in this new data ecosystem will be to transcend national self-interests and those of the market.

The Data Governance Act proposal does not therefore regulate online services, content or market access conditions: it organizes “data governance,” meaning the conditions for sharing data, with the market implicitly presumed to be the paradigm for sharing. This is shown in particular by an analysis carried out through the lens of trust (which could be confirmed in many other ways).

The central role of trust

Trust plays a central and strategic role in all of this legislation since the DGA “aims to foster the availability of data for use, by increasing trust in data intermediaries and by strengthening data-sharing mechanisms across the EU.” “Increasing trust”, “building trust”, ensuring a “higher level of trust”, “creating trust”, “taking advantage of a trustworthy environment”, “bringing trust” – these expressions appearing throughout the text point to its fundamental aim.

However, despite the fact that the proposal takes great care to define the essential terms on which it is based (“data”, “reuse”, “non-personal data”, “data holder”, “data user”, “data altruism”, etc.), the term “trust,” along with the conditions for ensuring it, is exempt from such semantic clarification – even though “trust” is mentioned some fifteen times.

As with the concept of dignity, which was part of the sweeping declarations of rights and freedoms in the aftermath of the Second World War yet was left undefined, despite being the cornerstone of all bioethical texts, the concept of trust is never made explicit. Lawmakers, and those to whom the obligations established by the legal texts are addressed, are expected to know enough about what dignity and trust are to implicitly share the same understanding. As with the notion of time for Saint Augustine, everyone is supposed to understand what it is, even though they are unable to explain it to someone else.

While some see this as allowing for a certain degree of “flexibility” to adapt the concept of trust to a wide range of situations and a changing society, like the notion of privacy, others see this vagueness – whether intentional or not – at best, as a lack of necessary precision, and at worst, as an undeclared intention.

The implicit understanding of trust

In absolute terms, it is not very difficult to understand the concept of trust underlying the DGA (like in the Digital Services Act in which the European Commission proposes, among other things, a new mysterious category of “trusted flaggers“). To make it explicit, the main objectives of the text must simply be examined more closely.

The DGA represents an essential step for open data. The aim is clearly stated: to set out the conditions for the development of the digital economy by creating a single data market. The goal therefore focuses on introducing a fifth freedom: the free movement of data, after the free movement of goods, services, capital and people.  

While the GDPR created a framework for personal data protection, the DGA proposal intends to facilitate its exchange, in compliance with all the rules set out by the GDPR (in particular data subjects’ rights and consent when appropriate).

The scope of the proposal is broad.

The term data is used to refer to both personal data and non-personal data, whether generated by public bodies, companies or citizens. As a result, interaction with the personal data legislation is particularly significant. Moreover, the DGA proposal is guided by principles for data management and re-use that were developed for research data. The “FAIR” principles stipulate that this data must be easy to find, accessible, interoperable and re-usable, while providing for exceptions that are neither listed nor specified at this time.

To ensure trust in the sharing of this data, the category of “data intermediary” is created, which is the precise focus of all the political and legal discourse on trust. In the new “data spaces” which will be created (meaning beyond those designated by the European Commission), data sharing service providers will play a strategic role, since they are the ones who will ensure interconnections between data holders/producers and data users.

The “trust” which the text seeks to increase works on three levels:

  1. Trust among data producers (companies, public bodies, data subjects), so that they share their data
  2. Trust among data users regarding the quality of this data
  3. Trust in the trustworthy intermediaries of the various data spaces

Data intermediaries

This latter group emerges as organizers of data exchange between companies (B2B) or between individuals and companies (C2B). They are the facilitators of the single data market: without them, it cannot be created or made to work from a technical standpoint. This intermediary position gives them access to the data they make available, so it must be ensured that they are impartial.

The DGA proposal differentiates between two types of intermediaries: “data sharing service providers,” meaning those who work “against remuneration in any form” with regard to both personal and non-personal data (Chapter III), and “data altruism organisations,” which act “without seeking a reward…for purposes of general interest such as scientific research or improving public services” (Chapter VI).

For the first category, the traditional principle of neutrality is applied.

To ensure this neutrality, which “is a key element to bring trust, it is therefore necessary that data sharing service providers act only as intermediaries in the transactions, and do not use the data exchanged for any other purpose”. This is why data sharing services must be set up as legal entities that are separate from other activities carried out by the service provider in order to avoid conflicts of interest. In the division of digital labor, intermediation becomes a specialization in its own right. To create a single market, we fragment the technical bodies that make it possible, and establish a legal framework for their activities.

In this light, the real meaning of “trust” is “security” – security for data storage and transmission, nothing more, nothing less. Personal data security is ensured by the GDPR; the security of the market here relates to that of the intermediaries (meaning their trustworthiness, which must be legally guaranteed) and the transactions they oversee, which embody the effective functioning of the market.

From the perspective of a philosophical theory of trust, all of the provisions outlined in the DGA are therefore meant to act on the motivation of the various stakeholders, so that they feel a high enough level of trust to share data. The hope is that a secure legal and technical environment will allow them to transition from simply trusting in an abstract way to having trust in data sharing in a concrete, unequivocal way.

It should be noted, however, that when there is a conflict of values between economic or entrepreneurial freedom and the obligations intended to create conditions of trust, the market wins. 

In the impact assessment carried out for the DGA proposal, the Commission declared that it would choose neither a high-intensity regulatory intervention option (compulsory certification for sharing services or compulsory authorization for altruism organizations), nor a low-intensity one (optional labeling for sharing services or voluntary certification for altruism organizations). It opted instead for a solution it describes as “alternative” but which is in reality very low-intensity (lower even, for example, than optional labeling in terms of guarantees of trust). In the end, a notification obligation with ex post monitoring of compliance was chosen for sharing services, along with the simple possibility of registering as an “organisation engaging in data altruism.”

It is rather surprising that the strategic option selected includes so few safeguards to ensure the security and trust championed so frequently by the European Commission in its official communications.

An intention based on European “values”

Margrethe Vestager, Executive Vice-President of the European Commission, strongly affirmed this: “We want to give business and citizens the tools to stay in control of data. And to build trust that data is handled in line with European values and fundamental rights.”

But in reality, the text’s entire reasoning shows that the values underlying the DGA are ultimately those of the market – a market that admittedly respects fundamental European values, but that is meant to entirely shape the European data governance model. This amounts to taking a position on the data processing business model used by the major tech platforms. These platforms, whether developed in the Silicon Valley ecosystem or in another part of the world with a desire to dominate, have continued to gain disproportionate power on the strength of their business model. Their modus operandi is inherently based on the continuous extraction and complete control of staggering quantities of data.

The text is thus based on a set of implicit reductions that are presented as indisputable policy choices. The guiding principle, trust, is equated with security, meaning security of transactions. Likewise, the European values as upheld in Article 2 of the Treaty on European Union, which do not mention the market, are implicitly related to those that make the market work. Lastly, governance, a term that has a strong democratic basis in principle, which gives the DGA its title, is equated only with the principles of fair market-based sharing, with the purported aim, among other things, to feed the insatiable appetite of “artificial intelligence”.

As for “data altruism,” it is addressed in terms of savings in transaction costs (in this case, costs related to obtaining consent), and the fact that altruism can be carried out “without asking for remuneration” does not change the market paradigm: a market exchange is a market exchange, even when it’s free.

By choosing a particular model of governance implicitly presented as self-evident, the Commission fails to recognize other possible models that could be adopted to oversee the movement of data. A few examples that could be explored, and which highlight the many overlooked aspects of the text, are:

  1. The creation of a European public data service
  2. Interconnecting the public services of each European state (based on the eIDAS or Schengen Information System (SIS) model; see also France’s public data service, which presently applies to data created as part of public services by public bodies)
  3. An alternative to a public service: public officials, like notaries or bailiffs, acting under powers delegated by a level of public authority
  4. A market-based alternative: pooling of private and/or public data, initiated and built by private companies.

What kind of data governance for what kind of society?

This text, however, highlights an interesting concept in the age of the “reign of data”: sharing. While data is trivially understood as being the black gold of the 21st century, the comparison overlooks an unprecedented and essential aspect: unlike water, oil or rare metals, which are finite resources, data is an infinite resource, constantly being created and ever-expanding.

How should data be pooled in order to be shared?

Should data from the public sector be made available in order to transfer its value creation to the private sector? Or should public and private data be pooled to move toward a new sharing equation? Will we see the emergence of hybrid systems of values that are evenly distributed or a pooling of values by individuals and companies? Will we see the appearance of a “private data commons”? And what control mechanisms will it include?

Will individuals or companies be motivated to share their data? This would call for quite a radical change in economic culture.

The stakes clearly transcend the simple technical and legal questions of data governance. Since the conditions are those of an infinite production of data, these questions make us rethink the traditional economic model.

It is truly a new model of society that must be discussed. Sharing and trust are good candidates for rethinking the society to come, as long as they are not reduced solely to a market rationale.

The text, in its current form, certainly offers points to consider, taking into account our changing societies and digital practices. The terms, however, while attesting to worthwhile efforts for categorization adapted to these practices, require further attention and conceptual and operational precision.   

While there is undoubtedly a risk of systematic commodification of data, including personal data, despite the manifest wish for sharing, it must also be recognized that the text includes possible advances. The terms of this collaborative writing are up to us – provided, of course, that all of the stakeholders are consulted, including citizens, subjects and producers of this data.


Claire Levallois-Barth, lecturer in Law at Télécom Paris, coordinator and co-founder of the VP-IP chair.

Mark Hunyadi, professor of moral and political philosophy at the Catholic University of Louvain (Belgium), member of the VP-IP chair.

Ivan Meseguer, European Affairs, Institut Mines-Télécom, co-founder of the VP-IP chair.


What is bio-inspiration?

The idea of using nature as inspiration to create different types of technology has always existed, but it has been formalized through a more systematic approach since the 1990s. Frédéric Boyer, a researcher at IMT Atlantique, explains how bio-inspiration can be a source of new ideas for developing technologies and concepts, especially for robotics.

How long has bio-inspiration been around?

Frédéric Boyer: There have always been exchanges between nature and fundamental and engineering sciences. For instance, Alessandro Volta used electric fish such as electric rays as inspiration to develop the first batteries. But it’s an approach that has become more systematic and intentional since the 1990s.

How does bio-inspiration contribute to the development of new technologies?

FB: In the field of robotics, the dream has always been to make an autonomous robot that can interact appropriately with an unfamiliar environment, without putting itself or those around it in danger. In robotics we don’t really talk about intelligence – what we’re interested in is autonomy, and that’s still a long way off.

There’s a real paradigm shift underway. For a long time, intelligence was equated with computing power. Using measurements made by their various sensors, robots had to reconstruct their complex environment in a symbolic way and make decisions based on this information. Through this approach, we built machines that were extremely complex but had little autonomy or ability to adapt to different environments. Through the bio-inspiration movement, in particular, intelligence has also come to be defined in terms of the autonomy it brings to a system.

What is the principle behind a bio-inspired robot?

FB: Bio-inspired robots are not based on the perception and complex representation of their environment. They rely simply on sensors and local loops that enable them to move in different environments. This type of intelligence comes from observing animals’ bodies: it’s what we call embodied intelligence. This intelligence, encoded in the body and morphology of living organisms, has been developed over the course of evolution: an animal with a very simple nervous system can interact very effectively with its environment. We practice embodied intelligence every day: with very low levels of cognition, we solve complex problems related to the autonomy of our body.

To illustrate the difference between the paradigms, we can take the example of a robot that creeps along like a snake. There are two approaches to piloting this system. The first is to place a supercomputer inside the robot that sends a signal to each of the vertebrae and motors to drive the joints. With the second approach, there is no centralized computer. The “head” sends an impulse to the first vertebra, which spreads to the next one, then the one after that, and is automatically synchronized through feedback phenomena between the vertebrae and with the environment, via sensors.

Read more on l’IMTech: Intelligence incarnated: a bio-inspired approach in the field of robotics
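To make the second, decentralized approach more concrete, here is a minimal, purely illustrative sketch: the head produces a simple rhythmic signal and each vertebra only tracks its immediate neighbour through a local feedback loop. The coupling law and the gains are arbitrary choices of ours, not taken from an actual bio-inspired controller.

```python
import math

# Illustrative sketch of decentralized control of a snake-like robot: there is no
# central computer; the head oscillates and each vertebra only follows its immediate
# neighbour through a local feedback loop. Gains and the coupling law are arbitrary
# choices for illustration, not taken from a real robot.

N_SEGMENTS = 10
COUPLING = 0.3     # strength of the local feedback between neighbouring segments
DT = 0.05          # time step (s)

angles = [0.0] * N_SEGMENTS   # joint angle of each vertebra

def step(t, angles):
    new = list(angles)
    new[0] = math.sin(2.0 * t)            # the head generates a simple rhythmic impulse
    for i in range(1, N_SEGMENTS):
        # each segment nudges itself toward its predecessor's angle: the wave
        # propagates and synchronizes without any global coordination
        new[i] = angles[i] + COUPLING * (angles[i - 1] - angles[i])
    return new

t = 0.0
for _ in range(400):
    angles = step(t, angles)
    t += DT

print(["{:+.2f}".format(a) for a in angles])  # a travelling wave delayed along the body
```

Even with such a crude rule, a travelling wave propagates down the body without any central computation, which is the essence of the embodied approach described above.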

What does a bio-inspired approach to developing new technology involve?

FB: It’s an approach made up of three stages. First, you have to closely observe how living beings function and choose a good candidate solution for a given problem. Then, you have to extract relevant phenomena in the functioning of the original natural system and understand them using the laws of physics and mathematical models. This is the most complex stage because we have to do a lot of sorting. Nature is extremely redundant, which allows it to adapt to changes in the environment, but in our case, only certain phenomena are sought. So, we have to sort through the information and order it using mathematical models to understand how animals solve problems. The last stage is using these models to develop new technologies. That’s also why we talk about bio-inspiration and not bio-mimicry: the goal is not to reproduce living systems, but to use them as inspiration based on functional models.

What are some examples of bio-inspired technologies?

FB: We’re working on the electrical sense, inspired by so-called electric fish: these animals’ electro-sensitive skin helps them perceive their environment (objects, distance, other fish, etc.) and navigate over long distances using maps that are still poorly understood. We are able to imitate this sixth sense using electrodes placed on the surface of our robots that record the “echoes” of an emitted field reflected by the environment.

Beyond that, some of the best-known examples are the crawling robots developed by the Massachusetts Institute of Technology (MIT). These robots are inspired by geckos. They can stick to surfaces through a multitude of microscopic adhesive forces, like those generated by the tiny hairs on geckos’ feet. The bio-inspired approach extends all the way to these nanometer scales!

And insects’ vision and the way they flap their wings to fly, or how snakes creep along are other examples of sources of inspiration for developing robotic technologies.  

Are there other kinds of intelligence that are inspired by animals?

FB: Collective intelligence is a good example. By studying ants or bees, it is possible to make swarms of drones that can perform complex cognitive tasks without a high level of on-board intelligence. For animals that are organized in swarms, each unit has very little intelligence, but the sum of their interactions, with one another and with the environment, results in a collective intelligence. It’s also a source of study for the development of new robotic technologies.
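As a toy illustration of how collective behaviour can emerge from very simple units, the sketch below gives each agent one trivial local rule: drift toward the average heading of a few neighbours. The rule and parameters are arbitrary, loosely inspired by flocking models such as boids, and are not a description of any real drone swarm.

```python
import random

# Minimal "swarm" sketch: each agent applies one trivial local rule (align with the
# average heading of a few neighbours). No agent has a global view, yet a common
# direction emerges. Parameters are arbitrary, for illustration only.

random.seed(1)
N_AGENTS = 30
NEIGHBOURHOOD = 5        # each agent only "sees" 5 randomly chosen agents per step
ALIGNMENT_RATE = 0.2

headings = [random.uniform(0, 360) for _ in range(N_AGENTS)]  # degrees

def spread(hs):
    return max(hs) - min(hs)

for _ in range(200):
    new = []
    for h in headings:
        neighbours = random.sample(headings, NEIGHBOURHOOD)
        avg = sum(neighbours) / NEIGHBOURHOOD
        new.append(h + ALIGNMENT_RATE * (avg - h))   # local rule: drift toward neighbours
    headings = new

print(f"Spread of headings after 200 steps: {spread(headings):.1f}°")  # collapses toward 0
```

No agent ever sees the whole group, yet the spread of headings collapses and the swarm ends up moving coherently.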

What fields does bio-inspiration apply to?

FB: In addition to robotics, bio-inspiration provides a source of innovation for a wide range of fields. Whether in applied sciences, like aeronautics and architecture, or areas of basic research like mathematics, physics and computational sciences.

What does the future of bio-inspiration hold?

FB: We’re going to have to reinvent and produce new technologies that are not harmful to the environment. There is a philosophical revolution and paradigm shift to be achieved in terms of the relationship between man and other living things. It would be a mistake to believe that we can replace living beings with robots.

Bio-inspiration teaches us a form of wisdom and humility, because we still have a lot of work ahead of us before we can build a drone that can fly like an insect in an autonomous way, in terms of energy and decision-making. Nature is a never-ending source of inspiration, and of wonder.

By Antonin Counillon


Eclore and ThermiUp, new beneficiaries of the IMT “Industry & Energy 4.0” honor loans

After the IMT Digital Fund, Institut Mines-Télécom (IMT) and the Fondation Mines-Télécom launched a second fund last October, dedicated to the sciences of energy, materials and processes: “Industry & Energy 4.0”. Its committee, made up of experts from the major partners of the Fondation Mines-Télécom (Orange, BNP Paribas, Accenture, Airbus, Dassault Systèmes and Sopra Steria), met on March 18. Eclore and ThermiUp were granted honor loans for a total amount of €80,000. Both are incubated at IMT Atlantique.


Eclore Actuators offers a bio-inspired pneumatic and hydraulic actuator solution which is highly energy efficient, 100% recyclable, and based on unique, patented industrial bending processes. Eclore actuators are less expensive, lighter, less bulky and require less maintenance than traditional actuators. There are many sectors of application, such as industrial automation, robotics, IoT and home appliances. Find out more


ThermiUp has developed a heat exchanger that recovers heat from the grey water of buildings to preheat domestic water. It allows builders to save up to a third of the energy needed to produce domestic hot water, which represents half of the energy needs in new housing. This renewable energy device reduces greenhouse gas emissions by a third. Find out more


Digital Services Act: Regulating the content of digital platforms, Act 1

The Digital Services Act, proposed by the European Commission at the end of 2020, seeks to implement a new regulatory framework for digital platforms. Grazia Cecere, an economics researcher at Institut Mines-Télécom Business School, explains various aspects of these regulations.

Why has it become necessary to regulate the content of platforms?

Grazia Cecere: Technological developments have changed the role of the internet and platforms. Previous regulations specified that publishers were responsible for the totality of their content, but that web hosts were only responsible if flagged content was not handled adequately. With the emergence of super platforms and social media, the role of web hosts has changed. Their algorithms lead to more specific distribution of content, through rankings, search engine optimization and highlighting content, which may have significant impacts and contain dangerous biases.

What kind of content must be better regulated by digital platforms?

GC: There are many issues addressed, in particular combating cyber-bullying, disinformation and fake news, as well as different types of discrimination. Today the platforms’ algorithms self-regulate based on the available data and may reproduce and amplify discrimination that exists in society. For example, if the data analyzed by an algorithm shows wage gaps between men and women, it is likely to build models based on this information. So it’s important to identify these kinds of biases and correct them. Discrimination not only poses ethical problems: it also has economic implications. For example, if an algorithm designed to propose a job profile is biased by an individual’s gender or skin color, the only criterion that matters – professional ability – becomes less clear.

Read more on l’IMTech: Social media: The everyday sexism of advertising algorithms
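A very simple way to surface this kind of bias is to compare outcome rates across groups, as in the sketch below. The data is synthetic and the “selection rate ratio” is only one crude indicator, not the audit methodology of any particular platform or regulator.

```python
# Sketch of a basic fairness check on an algorithm's decisions: compare the rate
# of positive outcomes (e.g. a job profile being proposed) across groups.
# Data and indicator are synthetic, purely for illustration.

decisions = [
    # (group, proposed_by_algorithm)
    ("women", True), ("women", False), ("women", False), ("women", True),
    ("men", True), ("men", True), ("men", True), ("men", False),
]

def selection_rate(group):
    outcomes = [proposed for g, proposed in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_w = selection_rate("women")
rate_m = selection_rate("men")
ratio = rate_w / rate_m   # "disparate impact" style ratio

print(f"Selection rate (women): {rate_w:.0%}")
print(f"Selection rate (men):   {rate_m:.0%}")
print(f"Ratio: {ratio:.2f}  (a ratio far below 1 suggests the model should be audited)")
```

In practice such checks would be run on the larger datasets platforms are asked to open up, and a markedly skewed ratio is a signal that the model, or the data it learned from, deserves closer scrutiny.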

What does the Digital Services Act propose so that platforms regulate their content?

GC: The Digital Services Act seeks to set clear rules for the responsibilities that come with digital platforms. They must monitor the information distributed on their platforms, especially fake news and potentially harmful content. The goal is also to inform users better about the content and to ensure their fundamental rights online. Platforms must also increase their transparency and make data about their activity available. This data would then be available to researchers, who could test whether it contains biases. The purpose of the Digital Services Act is to provide a harmonized legislative and regulatory system across all EU member states.

How can platforms regulate their own content?

GC: Another aspect of the Digital Services Act is providing the member states with regulatory instruments for their platforms. Different kinds of tools can be implemented. For example, a tool called “Fast Tracking” is being developed for Google to detect false information about Covid-19 automatically. This kind of tool, which determines whether information is false based on written content, can be complicated to build since it requires sophisticated natural language processing. Some issues are more complicated to regulate than others.

Are digital platforms starting to take the Digital Services Act into account?

GC: It depends on the platform. Airbnb and Uber, for example, have made a lot of data available to researchers so that they can determine what kinds of discriminatory biases it contains. And Google and Facebook are also providing access to an increasing amount of data. But Snapchat and TikTok are a whole other story!

Will the Digital Services Act also help regulate the internet market?

GC: The previous regulation, the E-Commerce Directive, dates from 2000. Over time, it has become obsolete. Internet players today are different from what they were 20 years ago and some have far more power. One of the challenges is for the internet market to remain open to everyone and for new companies to be able to be founded independently of the super platforms, to boost competition, since today any company that is founded depends on the monopoly of the big tech companies.

By Antonin Counillon


Our indoor air is polluted, but new materials could provide solutions

Frédéric Thévenet, IMT Lille Douai – Institut Mines-Télécom

We spend 80% of our lives in enclosed spaces, whether at home, at work or in transit. We are therefore very exposed to this air, which is often more polluted than outdoor air. The issue of health in indoor environments is thus associated with chronic exposure to pollutants and to volatile organic compounds (VOCs) in particular. These species can cause respiratory tract irritation or headaches, a set of symptoms that is referred to as “sick building syndrome.” One VOC has received special attention: formaldehyde. This compound is a gas at room temperature and pressure and is very frequently present in our indoor environments although it is classified as a category 1B CMR compound (carcinogenic, mutagenic, reprotoxic). It is therefore subject to indoor air quality guidelines which were updated and made more restrictive in 2018.

The sources of volatile organic compounds

VOCs may be emitted in indoor areas by direct, or primary, sources. Materials are often identified as major sources, whether associated with the building (building materials, pressed wood, wood flooring, ceiling tiles), furniture (furniture made from particle board, foams), or decoration (paint, floor and wall coverings). The adhesives, resins and binders contained in these materials are clearly identified and well-documented sources.

To address this issue, mandatory labeling has existed for these products since 2012: they are classified in terms of emissions. While these primary sources related to the building and furniture are now well-documented, those related to household activities and consumer product choices are more difficult to characterize (cleaning activities, cooking, smoking, etc.). For example, what products are used for cleaning, are air fresheners or interior fragrances used, are dwellings ventilated regularly? Research is being conducted in our laboratory to better characterize how these products contribute to indoor pollution. We have recently worked on cleaning product emissions and their elimination. And studies have also recently been carried out at our laboratory (at IMT Lille Douai) on the impact of essential oils, in partnership with the CSTB (French National Scientific and Technical Center for Building) and in coordination with ADEME (French Environmental and Energy Management Agency).

Emission, deposition and reactivity of essential oils in indoor air (Shadia Angulo-Milhem, IMT Lille Douai). Author provided

In addition to the primary sources of VOCs, there are also secondary sources resulting from the transformation of primary VOCs. These transformations are usually related to oxidative processes. Through these reactions, other kinds of VOCs are also formed, including formaldehyde, among others.

What solutions are there for VOCs in indoor air?

Twenty years ago, an approach referred to as a “destructive process” was being considered. The idea was to pass the air to be treated through a purification system to destroy the VOCs. These can either be stand-alone devices and therefore placed directly inside a room to purify the air, or integrated within a central air handling unit to treat incoming fresh air or re-circulated air.

Photocatalysis was also widely studied to treat VOCs in indoor air, as was cold plasma. Both of these processes target the oxidation of VOCs, ideally their transformation into CO2 and H2O. Photocatalysis draws on the ability of a material – usually titanium dioxide (TiO2) – to adsorb and oxidize VOCs under ultraviolet irradiation. Cold plasma is a process in which, under the effect of a high electric field, electrons ionize a fraction of the air circulating in the system and form oxidizing species.

The technical limitations of these systems lie in the fact that the air to be treated must be directed and moved through the system, and most importantly, the treatment systems must be supplied with power. Moreover, depending on the device’s design and the nature of the effluent to be treated (nature of the VOC, concentration, moisture content etc.) it has been found that some devices may lead to the formation of by-products including formaldehyde, among others. Standards are currently available to oversee the assessment of this type of system’s performance and they are upgraded with technological advances.

Over the past ten years, indoor air remediation solutions have been developed focusing on the adsorption – meaning the trapping – of VOCs. The idea is to integrate materials with adsorbent properties in indoor environments to trap the VOCs. We have seen the emergence of materials, paint, tiles and textiles that incorporate adsorbents in their compositions and claim these properties.

Among these adsorbent materials, there are two types of approaches. Some trap the VOCs and do not re-emit them – a permanent, irreversible process. The “VOC trap” can therefore completely fill up after some time and become inoperative, since it is saturated. Today, it seems wiser to develop materials with “reversible” trapping properties: when there is a peak in pollution, the material adsorbs the pollutant, and when the pollution decreases, for example when a room is ventilated, it releases it, and the pollutant is evacuated through ventilation.
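The reversible behaviour can be pictured with a toy first-order adsorption/desorption model, sketched below. The rate constants and concentrations are arbitrary illustrations of ours, not measured properties of any real material.

```python
# Toy model of a "reversible" VOC-trapping material: during a pollution peak the wall
# adsorbs the pollutant, then releases it when indoor air is cleaner (e.g. while
# ventilating). Rate constants and concentrations are arbitrary illustrations.

K_ADS = 0.10   # adsorption rate constant (1/min)
K_DES = 0.02   # desorption rate constant (1/min)
DT = 1.0       # time step (min)

def simulate(air_profile):
    """air_profile: indoor VOC concentration imposed at each step (µg/m³)."""
    loading = 0.0   # amount trapped on the material (arbitrary units)
    history = []
    for c_air in air_profile:
        adsorbed = K_ADS * c_air * DT        # uptake grows with the air concentration
        desorbed = K_DES * loading * DT      # release grows with what is already trapped
        loading += adsorbed - desorbed
        history.append(loading)
    return history

# 60 min pollution peak (100 µg/m³), then 120 min of ventilated, cleaner air (5 µg/m³)
profile = [100.0] * 60 + [5.0] * 120
loads = simulate(profile)
print(f"Trapped after the peak:    {loads[59]:.0f}")
print(f"Trapped after ventilation: {loads[-1]:.0f}")  # lower: the sink has partly emptied
```

The trapped amount builds up during the pollution peak and then partly empties once the room is ventilated, which is exactly the sink-then-release behaviour sought for these materials.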

These materials are currently being developed by various academic and industry players working in this field. It is interesting to note that these materials were considered sources of pollution 20 years ago, but can now be viewed as sinks for pollution.

How to test these materials’ ability to remove pollutants

Many technical and scientific obstacles remain, regardless of the remediation strategy chosen. The biggest one is determining whether these new materials can be tested on a 1:1 scale, as they will be used by the end consumer, meaning in “real life.”  

That means these materials must be able to be tested in a life-size room, under conditions that are representative of real indoor atmospheres, while controlling environmental parameters perfectly. This technical aspect is one of the major research challenges in indoor air quality (IAQ), since it determines the representativeness, and therefore the validity, of the results we obtain.

Experimental IRINA room (Innovative Room for Indoor Air studies, IMT Lille Douai). Author provided

We developed a large enclosed area in our laboratory for precisely this purpose a few years ago. With its 40 square meters, it is a real room that we can go into, called IRINA (Innovative Room For Indoor Air Studies). Seven years ago, it was France’s first fully controlled and instrumented experimental room on a 1:1 scale. Since its development and validation, it has housed many research projects and we upgrade it and make technical updates every year. It allows us to recreate the indoor air composition of a wood frame house, a Parisian apartment located above a ring road, an operating room and even a medium-haul aircraft cabin. The room makes it possible to effectively study indoor air quality and treatment devices in real-life conditions.

Connected to this room, we have a multitude of measuring instruments, for example to measure VOCs in general, or to monitor the concentration of one in particular, such as formaldehyde.

Frédéric Thévenet, Professor (heterogeneous/atmospheric/indoor air quality physical chemistry), IMT Lille Douai – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

IMPETUS: towards improved urban safety and security

How can traffic and public transport be managed more effectively in a city, while controlling pollution, ensuring the safety of users and at the same time, taking into account ethical issues related to the use of data and mechanisms to ensure its protection? This is the challenge facing IMPETUS, a €9.3 million project receiving funding of €7.9 million from the Horizon 2020 programme of the European Union[1]. The two-year project launched in September 2020 will develop a tool to increase cities’ resilience to security-related events in public areas. An interview with Gilles Dusserre, a researcher at IMT Mines Alès, a partner in the project.

What was the overall context in which the IMPETUS project was developed?

Gilles Dusserre: The IMPETUS project was the result of my encounter with Matthieu Branlat, the scientific coordinator of IMPETUS, who is a researcher at SINTEF (the Norwegian Foundation for Scientific and Industrial Research), which supports research and development activities. Matthieu and I have been working together for many years. As part of the eNOTICE European project, he came to take part in a use case organized by IMT Mines Alès on health emergencies and the resilience of hospital organizations. More broadly, IMPETUS is the concrete outcome of years of efforts by research teams at Télécom SudParis and IMT Mines Alès to promote joint R&D opportunities between IMT schools.

What are the security issues in smart cities?

GD: A smart city can be described as an interconnected urban network of sensors, such as cameras and environmental sensors; it generates a wealth of valuable data. In addition to enabling better management of traffic and public transport and better control of pollution, this data allows for better police surveillance and adequate crowd control. But these smart systems increase the risk of unethical use of personal data, in particular given the growing use of AI (artificial intelligence) combined with video surveillance networks. Moreover, they increase a city’s attack surface, since several interconnected IoT (Internet of Things) and cloud systems control critical infrastructure such as transport, energy, water supply and hospitals (which play a central role in current problems). These two types of risks associated with new security technologies are taken very seriously by the project: a significant part of its activities is dedicated to the impact of the use of these technologies on operational, ethical and cybersecurity aspects. We have groups within the project and external actors overseeing ethical and data privacy issues. They work with project management to ensure that the solutions we develop and deploy adhere to ethical principles and data privacy regulations. Guidelines and other decision-making tools will also be developed for cities to help them identify and take into account the ethical and legal aspects related to the use of intelligent systems in security operations.

What is the goal of IMPETUS?

GD: In order to respond to these increasing threats to smart cities, the IMPETUS project will develop an integrated toolbox that covers the entire physical and cyber security value chain. The tools will advance the state of the art in several key areas, such as detection (social media, web-based threats), simulation and analysis (AI-based tests) and intervention (human-machine interfaces and eye tracking, AI-based optimization of the physical and cyber response). Although the toolbox will be tailored to the needs of smart city operators, many of the technological components and best practices will be transferable to other types of critical infrastructure.

What expertise are researchers from IMT schools contributing to the project?  

GD: The work carried out by Hervé Debar’s team at Télécom SudParis, in connection with researchers at IMT Mines Alès, resulted in the overall architecture of the IMPETUS platform, which will integrate the various smart city modules proposed in the project. Within this framework, the specification of the various system components, and of the system as a whole, will be designed to meet the requirements of the final users (the cities of Oslo and Padua), but also to be scalable to future needs.

What technological barriers must be overcome?

GD: The architecture has to be modular, so that each individual component can be independently upgraded by the provider of the technology involved. The architecture also has to be integrated, which means that the various IMPETUS modules can exchange information, thereby providing significant added value compared to independent smart city and security solutions that work as silos.

To provide greater flexibility and efficiency in collecting, analyzing, storing and accessing data, the IMPETUS platform architecture will combine IoT and cloud computing approaches. Such an approach will reduce the risks associated with excessive centralization of large amounts of smart city data and is in line with the expected changes in communication infrastructure, which will be explored at a later stage.
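To give a feel for what “modular but integrated” can mean in practice, here is a deliberately simplified sketch of modules exchanging information through a shared message bus rather than working as silos. The module names and the bus pattern are our illustration, not the actual IMPETUS design.

```python
# Illustrative sketch of a modular-but-integrated toolbox: each module can be swapped
# independently, yet all of them exchange information through a shared bus instead of
# working as silos. Module names and the bus pattern are illustrative assumptions,
# not the real IMPETUS architecture.

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = MessageBus()

# A detection module publishes alerts; analysis and intervention modules consume them.
def social_media_detector(bus):
    bus.publish("threat.detected", {"source": "social_media", "level": "medium"})

bus.subscribe("threat.detected", lambda alert: print("analysis module scoring:", alert))
bus.subscribe("threat.detected", lambda alert: print("intervention module notified:", alert))

social_media_detector(bus)  # any module can be upgraded without touching the others
```

In such a design, a detection module can be replaced by another provider’s component without touching the analysis or intervention modules, as long as the shared messages keep the same shape.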

This task will also develop a testing plan. The plan will include the prerequisites, the execution of the tests and the expected results. The acceptance criteria will be defined based on the priority and percentage of successful test cases. In close collaboration with the University of Nîmes, IMT Mines Alès will work on an innovative approach to environmental risks, in particular those related to chemical or biological agents, and to hazard assessment processes.

The consortium includes 17 partners from 11 EU member states and associated countries. What are their respective roles?

GD The consortium was formed to bring together a group of 17 organizations that are complementary in terms of basic knowledge, technical skills, ability to create new knowledge, business experience and expertise. The consortium comprises a complementary group of academic institutions (universities) and research organizations, innovative SMEs, industry representatives, NGOs and final users.

The work is divided into a set of interdependent work packages. It involves interdisciplinary innovation activities that require a high level of collaboration. The overall strategy consists of an iterative exploration, an assessment and a validation, involving the final users at every step.

[1] This project receives funding from Horizon 2020, the European Union’s Framework Programme for Research and Innovation (H2020) under grant agreement N° 883286. Learn more about IMPETUS.


Three Mile Island, Chernobyl, Fukushima: the role of accidents in nuclear governance

Stéphanie Tillement, IMT Atlantique – Institut Mines-Télécom, and Olivier Borraz, Sciences Po

Until the 1970s, nuclear power plants were considered to be inherently safe, by design. Accidents were perceived as being highly unlikely, if not impossible, by designers and operators, in spite of recurring incidents that were not publicized.

This changed abruptly in 1979 with the Three Mile Island (TMI) accident in the United States. It was given wide media coverage, despite the fact that there were no casualties, and demonstrated that what were referred to as “major” accidents were possible, with a meltdown in this case.

The decades that followed have been marked by the occurrence of two other major accidents rated as level 7 on the INES (International Nuclear Event Scale): Chernobyl in 1986 and Fukushima in 2011.

Turning point in the 1980s

This article will not address this organization or the invention, in the wake of the Chernobyl accident, of the INES scale used to rank events that jeopardize safety on a graduated scale, ranging from a deviation from a standard to a major accident.

Our starting point will be the shift that occurred in 1979, when accidents went from being seen as inconceivable to being seen as possible, and came to be considered and described by nuclear experts as opportunities for learning and improvement.

Accidents therefore provide an opportunity to “learn lessons” in order to enhance nuclear safety and strive for continuous improvement.

But what lessons precisely? Has the most recent accident, Fukushima, led to profound changes in nuclear risk governance, as Chernobyl did?

The end of the human error rationale

Three Mile Island is often cited as the first nuclear accident: despite the technical and procedural barriers in place at the time, the accident occurred – such an accident was therefore possible.

Some, such as sociologist Charles Perrow, even described it as “normal,” meaning inevitable, due to the complexity of nuclear facilities and their highly coupled nature – meaning that the components that make up the system are closely interconnected – which are likely to lead to hard-to-control “snowball effects.”

For institutional, industrial and academic experts, the analysis of the accident changed views on man’s role in these systems and on human error: accidents went from being a moral problem, attributable to humans’ “bad behavior,” to a systemic problem, attributable to poor system design.

Breaking with the human error rationale, these lessons paved the way for the systematization of learning from experience, promoting a focus on transparency and learning.  

Chernobyl and risk governance

It was with Chernobyl that accidents became “organizational,” leading nuclear organizations and public authorities to introduce structural reforms of safety doctrines, based on recognition of the essential nature of “organizational and cultural problems […] for the safety of operations” (IAEA, 1999).

Chernobyl also marked the beginning of major changes in risk governance arrangements at the international, European and French levels. An array of organizations and legal and regulatory provisions were introduced, with the twofold aim of learning from the accident that occurred at the Ukrainian power plant and preventing such an accident from happening elsewhere.

The law of 13 June 2006 on “Nuclear Transparency and Safety” (referred to as the TSN law), which proclaimed, among other things, the status of the ASN (the French Nuclear Safety Authority) as an administrative authority independent of the government, is one emblematic example.

A possibility for every country

Twenty-five years after Chernobyl, Japan experienced an accident at its Fukushima Daiichi power plant.

Whereas the accident that occurred in 1986 could be attributed in part to the Soviet regime and its RBMK technology, the 2011 catastrophe involved American-designed technology and a country that many considered to be at the forefront of modernity.

With Fukushima, a serious accident once again became a possibility that no country could rule out. And yet, it did not give rise to the same level of mobilization as that of 1986.  

Fukushima – a breaking point?

Ten years after the Japanese catastrophe, it can be said that it did not bring about any profound shifts – whether in the way facility safety is designed, managed and monitored, or in the plans and arrangements designed to manage a similar crisis in France (or in Europe).

This has been shown by the research carried out through the Agoras project.

As far as preparedness for crisis management is concerned, Fukushima led to a re-examination of the temporal boundaries between the emergency phase and the post-accident phase, and to greater investment in the latter.

This catastrophe also led the French authorities to publish a preparedness plan in 2014 for managing a nuclear accident, making it a part of the common crisis management system.

These two aspects are reflected in the strengthening of the public safety portion of the national crisis management exercises carried out annually in France.   

But, as underscored by recent research, the observation of these national exercises did not reveal significant changes, whether in the way they are organized and carried out, the content of plans and arrangements, or, more generally, in the approach to a crisis caused by a major accident – with the exception of the creation of national groups that can intervene quickly on site (FARN).

Limited changes

It may, of course, be argued that, as with the Three Mile Island and Chernobyl accidents, structural transformations take time, and that it is still too early to conclude that no significant change has occurred.

But the research carried out through the Agoras project leads us to put forward the hypothesis that changes will remain limited, for two reasons.

The first reason is that structural changes were initiated in the 20 years following the Chernobyl accident. This period saw the rise of organizations dedicated to accident prevention and crisis management preparedness, such as the ASN in France, and European (WENRA, ENSREG) and international cooperation bodies.

These organizations initiated continuous research on nuclear accidents, gradually developing tools for understanding and responding to accidents, as well as mechanisms for coordination between public officials and industry leaders at the national and international levels.

These tools were “activated” following the Fukushima accident and made it possible to quickly provide an explanation for the accident, launch shared procedures such as supplementary safety assessments (the much-discussed “stress tests”), and collectively propose limited revisions to nuclear safety standards.

This work contributed to normalizing the accident, by bringing it into existing organizations and frameworks for thinking about nuclear safety.

This helped establish the conviction, among industry professionals and French public authorities, that the governance regime in place was capable of preventing and responding to a large-scale event, without the need to profoundly reform it.

The inertia of the French system

A second reason lies in the close relationships in France between the major players in the civil nuclear sector (operators – primarily EDF – and regulators – the ASN and its technical support organization, IRSN), in particular with regard to establishing and assessing safety measures at power plants.

These relationships form an exceptionally stable organized action system. The Fukushima accident provided a short window of opportunity to impose additional measures on operators.

Read more: L’heure des comptes a sonné pour le nucléaire français (Time for a Reckoning in the French Nuclear Industry)

But this window closed quickly, and the action system returned to a stable state. The inertia of this system can be seen in the production of new regulatory instruments, the development and upgrading of which take several years.   

It can also be seen in the organization of crisis management exercises, which continue to perpetuate distinctions between safety and security, accident and crisis, the facility interiors and the environment, and more generally, between technical and political considerations – distinctions that preserve the structure and content of relationships between regulators and operators.

Learning from accidents

Like Chernobyl, Fukushima was first portrayed as an exceptional event: by emphasizing the perfect storm of a tsunami of unprecedented magnitude striking a nuclear power plant, the absence of an independent regulatory agency in Japan, and the excessive respect for hierarchy attributed to the Japanese, the aim was to construct a unique event and thereby suggest that it could not happen in the same way elsewhere in the world.

But, at the same time, a normalization process took place, in France in particular, focusing not so much on the event itself as on the risks it posed for the organization of the nuclear industry, meaning the stakeholders and forms of knowledge endowed with legitimacy and authority.

The normalization process led to the accident being included in the existing categories, institutions and systems, in order to demonstrate their ability to prevent such an accident from happening and to limit the impact, should such an accident occur.

This was the result of efforts to delineate the boundaries, with some parties seeking to maintain them and others disputing them and trying to change them.

Ultimately, the boundaries upheld so strongly by industry stakeholders (operators and regulators) – between technical and political considerations, between experts and laymen – were maintained.

Relentlessly questioning nuclear governance

While the Fukushima accident was taken up by political and civil society leaders to challenge the governance of the nuclear industry and its “closed-off” nature, operators and regulators in France and throughout Europe quickly took steps to demonstrate their ability both to prevent such an accident, and to manage the consequences, in order to suggest that they could continue to be entrusted with regulating this sector.

As far as opening the sector to civil society players is concerned, this movement was initiated well before the Fukushima accident (notably with the TSN law in 2006); at best, the accident continued a pre-existing trend.

But other boundaries seem to have emerged or been strengthened in recent years, especially between technical factors and human and organizational factors, or safety requirements and other requirements for nuclear organizations (economic and industrial performance in particular), although it is not exactly clear whether this is related to the accidents.

These movements go hand in hand with a bureaucratization of relationships between the regulator and its technical expert, and between these two parties and operators; further research is needed to investigate their effects on the foundations of nuclear risk governance.

Talking and listening to one another

As like causes produce like effects, the problem lies in the nuclear industry’s unreceptiveness to “uncomfortable knowledge,” to borrow the idea introduced by Steve Rayner.

Social science research has long demonstrated that in order to solve complex problems, a wide range of individuals from various backgrounds and training must be brought together, for research that transcends disciplinary and institutional boundaries.

Social science researchers, engineers and public authorities must talk to – and more importantly – listen to one another. For engineers and policy-makers, that means being ready to take into account facts or knowledge that may challenge established doctrines and arrangements and their legitimacy.  

And social science researchers must be ready to go and see nuclear organizations, to get a first-hand look at their day-to-day operations, listen to industry stakeholders and observe working situations.

But our experience, in particular through Agoras, has shown us that not only is such work time-consuming and costly, it is also fraught with pitfalls. For even when one stakeholder does come to see the soundness of certain knowledge, the highly interconnected nature of relationships with other industry stakeholders, who make up the governance system, complicates the practical implementation of this knowledge, and therefore prevents major changes from being made to governance arrangements.

Ultimately, the highly interconnected nature of the nuclear industry’s governance system is arguably one of its vulnerabilities.

Stéphanie Tillement, Sociologist, IMT Atlantique – Institut Mines-Télécom and Olivier Borraz, CNRS Research Director – Centre for the Sociology of Organisations, Sciences Po

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

Container flows

DMS Logistics is optimizing inland container transportation

The inland container logistics chain suffers from low digitization, which limits organization and communication between the various parts of the chain. To overcome this problem, the start-up DMS Logistics, incubated at Mines Saint-Étienne, is developing a platform to optimize management of these flows of goods. It uses machine learning methods to automate the creation of schedules, reduce congestion at terminals and boost the competitiveness of ports and their logistics networks.

Ports’ ability to streamline the transfer of goods affects their competitiveness in a sector where competition is fierce. Yet one of the major problems for this type of infrastructure is chronic congestion. This is explained in part by terminals’ limited physical capacity to receive trucks, but also by a lack of anticipation in the exchange of containers with carriers. The result is costly slowdowns and detention charges for every delay. At the heart of the problem is a lack of communication between the various participants in the same supply chain: terminals, road carriers, container parks, ship-owners, freight forwarders, etc.

To help overcome this problem, the start-up DMS Logistics seeks to bring together all of these participants through a platform to optimize and anticipate the inland flow of containers. “This is a major market with 800 million containers exchanged every year worldwide,” explains one of the three founders, Xavier Des Minières, a specialist in inland logistics. Using operational data from each participant, the start-up optimizes the supply chain as a whole rather than a single link in it. This software solution thereby supports the goals of the French government’s strategic plan “France Logistics 2025”, launched in 2016 to make national logistics more efficient and attractive.

Digitizing companies that work with containers

“The logistics sector is still largely non-digitized. It is made up of many SMEs with very small profit margins that cannot afford digital tools to manage their operations,” explains Xavier Des Minières. DMS Logistics is solving this problem by equipping these users digitally and adapting to their resources. However, the solution becomes more useful when it brings together all the links in the supply chain. To do so, the company is targeting the terminals around which all the other inland transportation participants revolve.

DMS Logistics’ solution is a distributed cloud-based SaaS (Software as a Service) platform. It enables users to enter their operational data online: container movements, missions to be carried out or already completed, etc. For participants that have already gone digital, the service connects to their data via an API (Application Programming Interface). Since it was founded in 2020, the start-up has collected 700,000 container movements. This massive amount of data will feed its machine learning algorithms. “We’re automating three key time-consuming actions: managing operations schedules, making appointments at terminals and communication between partners,” says Xavier Des Minières.
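To make the kind of data being exchanged more concrete, here is a minimal sketch, in Python, of what a container-movement record pushed to such a platform might look like. The field names, the record structure and the serialization helper are illustrative assumptions, not DMS Logistics’ actual schema or API.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class ContainerMovement:
    """One operational event, as a carrier or terminal might declare it (hypothetical schema)."""
    container_id: str   # e.g. an ISO 6346-style container code
    actor: str          # "terminal", "carrier", "depot", ...
    event: str          # "gate_in", "gate_out", "pickup", "drop_off"
    location: str       # terminal or depot identifier
    timestamp: str      # ISO 8601 datetime

def to_payload(movement: ContainerMovement) -> str:
    """Serialize a movement so it could be sent to a REST endpoint."""
    return json.dumps(asdict(movement))

if __name__ == "__main__":
    m = ContainerMovement(
        container_id="MSKU1234567",
        actor="carrier",
        event="pickup",
        location="terminal_A",
        timestamp=datetime(2021, 6, 1, 9, 30).isoformat(),
    )
    print(to_payload(m))
    # In a real integration, this payload would be pushed to the platform's API;
    # the endpoint and authentication details are not public.
```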

Predicting flows based on data history

Why does this sector need to be automated? In the field, many participants in the chain respond to management difficulties in real time, using walkie-talkies or the phone. They are constantly dealing with seemingly unforeseen difficulties. “However, we have shown that there is a lot of redundancy in the operational behavior of the various participants. The difficulties are therefore predictable, and our algorithms make it possible to anticipate them,” explains Cyriac Azefack, a data scientist at DMS Logistics who holds a PhD in artificial intelligence.
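As a rough illustration of that idea (an assumption about the general approach, not the start-up’s actual model), the sketch below estimates expected truck arrivals per hour of the week by averaging historical counts; a production system would use richer features and proper machine learning models.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def hourly_arrival_profile(arrivals):
    """Estimate expected truck arrivals per (weekday, hour) slot
    by averaging the counts observed in the historical data."""
    # Count arrivals for each concrete (date, hour) actually observed.
    per_day_hour = defaultdict(int)
    for ts in arrivals:
        per_day_hour[(ts.date(), ts.hour)] += 1
    # Group those counts by (weekday, hour) and average them.
    # (Days with zero arrivals at a given hour are simply absent here.)
    grouped = defaultdict(list)
    for (day, hour), n in per_day_hour.items():
        grouped[(day.weekday(), hour)].append(n)
    return {slot: mean(values) for slot, values in grouped.items()}

if __name__ == "__main__":
    history = [
        datetime(2021, 5, 3, 8, 15), datetime(2021, 5, 3, 8, 40),  # a Monday, 8h
        datetime(2021, 5, 10, 8, 5),                               # next Monday, 8h
    ]
    profile = hourly_arrival_profile(history)
    print(profile[(0, 8)])  # expected Monday-8h arrivals: 1.5
```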

The prediction is even more accurate when it cross-references data from the various participants. For example, carriers can optimize drivers’ schedules based on the appointment times terminals offer for picking up goods. Conversely, carriers’ behavior (history of their operations, inventory movements, etc.) can be used to identify appropriate time slots for these appointments. Carriers can then access the terminal when it is convenient for them to do so and when it is not crowded. This seemingly simple organization was not possible before now.
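Here is a minimal sketch of how such cross-referencing could translate into an appointment recommendation, assuming a per-slot load forecast like the one above: among the slots a carrier can actually attend, pick the least congested one that stays under the terminal’s capacity. The function name, the capacity parameter and the slot encoding are hypothetical.

```python
def recommend_slot(predicted_load, carrier_available_slots, capacity):
    """Pick the feasible slot with the most spare terminal capacity.

    predicted_load: dict mapping slot -> expected number of trucks
    carrier_available_slots: slots the carrier's schedule allows
    capacity: trucks the terminal can absorb per slot
    """
    feasible = [
        slot for slot in carrier_available_slots
        if predicted_load.get(slot, 0.0) < capacity  # unknown slots treated as empty
    ]
    if not feasible:
        return None  # no uncongested slot: fall back to manual arbitration
    # Prefer the slot with the lowest expected load.
    return min(feasible, key=lambda slot: predicted_load.get(slot, 0.0))

if __name__ == "__main__":
    load = {(0, 8): 42.0, (0, 9): 18.5, (0, 14): 25.0}  # (weekday, hour) -> trucks
    available = [(0, 8), (0, 9), (0, 14)]
    print(recommend_slot(load, available, capacity=40))  # -> (0, 9)
```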

An even higher level of optimization can be reached. “Still using carriers’ behavioral data, we identify the drivers and trucks that are most suited for a mission (local, long distance, etc.),” adds Taki-Eddine Korabi, a data scientist at DMS Logistics who holds a PhD in mathematics, computer science and automation. Ultimately, the overall optimization of an ecosystem results in better local management.
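Purely as an illustration of this matching idea (the scoring rule below is an assumption, not the method used by DMS Logistics), one could rank drivers by the share of their past missions that match the requested mission type:

```python
from collections import Counter

def best_driver(mission_type, driver_history):
    """Return the driver whose past missions best match the requested type.

    driver_history: dict mapping driver -> list of past mission types
                    (e.g. "local", "long_distance")
    """
    def score(driver):
        past = driver_history[driver]
        if not past:
            return 0.0
        return Counter(past)[mission_type] / len(past)

    return max(driver_history, key=score)

if __name__ == "__main__":
    history = {
        "driver_A": ["local", "local", "long_distance"],
        "driver_B": ["long_distance", "long_distance"],
    }
    print(best_driver("long_distance", history))  # -> driver_B
```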

Towards the optimization of local logistics ecosystems

DMS Logistics’ solution is deployed in the Ivory Coast and in Marseille, where a team of 12 people is based. “After 4 months of operations at our pilot facility, we can predict the arrival of trucks at a terminal with a reliability rate of 98% over a week,” explains Xavier Des Minières. For the terminal, this means 15% savings in resources. Moreover, when a port is efficient, it boosts the attractiveness of the entire region. The economic benefits are therefore wide-ranging.

Another key finding: optimizing flows at the terminals also helps ports in their efforts toward ecological transition. More efficient organization means less unnecessary transportation and less traffic at ports, and therefore less local pollution and better air quality.

On the scientific side, research into optimizing container carriers’ operations has only been conducted since 2015, and on-the-ground information is still lacking. “We’re going to be starting a CIFRE PhD with Mines Saint-Étienne which will rely on the data collected by our platform. That will allow us to explore this topic in an optimal way and offer bright prospects for research and the logistics business,” concludes Taki-Eddine Korabi.

By Anaïs Culot

Read on I’MTech: AlertSmartCity, Cook-e, Dastra, DMS, GoodFloow, JobRepublik, PlaceMeet and Spectronite supported by the “honor loan” scheme