Start-up, prêts d'honneur

Pam Tim, Examin, Cylensee and Possible supported through honor loan program

The members of the IMT Digital Fund, IGEU, IMT and Fondation Mines-Télécom met on 6 April. On this occasion, four start-ups developed through incubators at IMT Atlantique, Télécom Paris, Télécom SudParis and Institut Mines-Télécom Business School obtained 8 honor loans for a total of €160,000.


Cylensee (IMT Atlantique incubator) develops and produces connected electrochromic contact lenses for the general public. Activated by a remote control or via a smartphone, these lenses allow users to change the color of their iris almost instantly, with a single click, whether to stand out from the crowd, try out a new look, make an impression or just for fun.
• Two €20,000 honor loans • 


The Examin platform (Télécom Paris Novation Center) is a regulatory and technical compliance management solution for companies with a focus on cybersecurity and data protection. Using a collaborative and scalable workspace, customers benefit from continuous reporting on their compliance or that of their suppliers and can easily involve employees in their actions to reduce compliance risks.
• Two €20,000 honor loans • 

Pam Tim (Télécom Paris Novation Center) specializes in the well-being of children aged 3-6 by giving them an intuitive way to learn the spatial and temporal reference points that structure the day, using a watch without numbers or hands! This life assistant for children relies on a patented display of combinations of pictograms (the subject of a PhD thesis) depicting key moments throughout the day. The connected watch also gives parents peace of mind, as a very low-power Bluetooth® geofencing solution allows them to anticipate household or nearby risks their children may encounter at any moment.
• Two €20,000 honor loans • 


Possible (IMT Starter, the Télécom SudParis and IMT-BS incubator) is a project that encourages circular, environmentally friendly, zero-waste, ethical fashion. Possible is a BtoC platform for renting clothes and accessories based on a monthly subscription. For a set cost, subscribers can rent a selection of several pieces by brands that promote ethical and responsible practices. The project addresses a simple question: how can individuals enjoy an unlimited wardrobe on a limited budget and in an environmentally friendly way?
• Two €20,000 honor loans • 


Digital simulation: applications, from medicine to energy

At Mines Saint-Étienne, Yann Gavet uses image simulation to study the characteristics of an object. This method is more economical in terms of time and cost, and eliminates the need for experimental measurements. This field, at the intersection of mathematics, computer science and algorithms, is used for a variety of applications ranging from the medical sector to the study of materials.

What do a human cornea and a fuel cell electrode have in common? Yann Gavet, a researcher in applied mathematics at Mines Saint-Étienne [1], is able to model these two objects as 2D or 3D images in order to study their characteristics. To do this, he uses a method based on random fields. “This approach consists in generating a synthetic image representing a surface or a random volume, i.e. whose properties will vary from one point to another across the plane or space,” explains the researcher. In the case of a cornea, for example, this means visualizing an assembly of cells whose density differs depending on whether we look at the center or the edge. The researcher’s objective? To create simulations with properties as close as possible to reality.
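To give a rough idea of the principle, here is a minimal sketch of such a random field in Python. The image size, density profile and threshold are illustrative assumptions of ours, not the models actually used by the researcher.

```python
import numpy as np

# Sketch: a 2D random field whose statistics vary across the image, loosely
# mimicking a cornea where cell density differs between center and edge.
# All sizes and density values below are illustrative assumptions.

def radial_density(shape, center_density=1.0, edge_density=0.4):
    """Target density that decays from the image center to the edge."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)  # 0 center, 1 corner
    return edge_density + (center_density - edge_density) * (1 - r)

def random_field(shape, density, seed=0):
    """White noise thresholded against a spatially varying density map."""
    noise = np.random.default_rng(seed).random(shape)
    return (noise < 0.05 * density).astype(float)  # sparse 'cell' seeds

shape = (256, 256)
field = random_field(shape, radial_density(shape))
print(f"simulated {int(field.sum())} cell seeds, denser at the center")
```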

Synthetic models and detecting corneal disorders

The density of the cells that make up our cornea (the transparent part at the front of the eye) and its endothelium provides information about its health. To perform these analyses, automatic cell detection and counting algorithms have been developed using deep neural networks. Training them thus requires access to large databases of corneas. The problem is that these do not exist in sufficient quantity. “However, we have shown that it is possible to perform the training process using synthetic images, i.e. simulated by models,” says Yann Gavet.

How does it work? Using deep learning, the researcher creates graphical simulations based on key criteria: size, shape, cell density or the number of neighboring cells. He can simulate cell arrangements as well as complete, realistic images of corneas, and now wants to combine the two. This step is essential for creating the image databases that will be used to train the algorithms. He focuses in particular on the realism of the simulation results in terms of cell geometry, gray levels and the “natural” variability of the observations.
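To illustrate the idea of training on simulated data, here is a hedged sketch with scikit-learn in which every synthetic image comes with a known cell count, so labels are free. The toy image generator and model settings are our own choices for the demonstration, not the team’s architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def synthetic_image(n_cells, size=32):
    """A toy 'cornea': n_cells small blobs dropped at random positions."""
    img = np.zeros((size, size))
    for y, x in zip(rng.integers(0, size, n_cells), rng.integers(0, size, n_cells)):
        img[max(0, y - 1):y + 2, max(0, x - 1):x + 2] += 1.0
    return img

# Labels come for free: we know the cell count of every simulated image.
counts = rng.integers(5, 60, size=500)
X = np.stack([synthetic_image(c).ravel() for c in counts])
X_train, X_test, y_train, y_test = train_test_split(X, counts, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("mean absolute counting error on held-out synthetic images:",
      round(float(np.abs(model.predict(X_test) - y_test).mean()), 1))
```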

Although he demonstrated that training using synthetic corneal data does not require perfectly realistic representations to perform well, improving accuracy will be useful for other applications. “As a matter of fact, we transpose this method to the simulation of material arrangements that compose fuel cell electrodes, which requires more precision,” explains the researcher.

Simulating the impact of microstructures on the performance of a fuel cell

The microstructure of fuel cell electrodes affects the performance and durability of solid oxide cells. In order to improve these parameters, researchers want to identify the ideal arrangement of the materials that make up the electrodes, i.e. how they should be distributed and organized. To do this, they play with the “basic” geometry of an electrode: its porosity and the size distribution of its material particles. These are the morphological parameters that manufacturers can act on when designing electrodes.

To identify the best-performing structures, one method would be to build and test a multitude of configurations. This is an expensive and time-consuming practice. The other approach is based on simulating and optimizing a large number of configurations. A second group of models, simulating the physics of the cell, can then identify which structures have the greatest impact on its performance.
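That generate-and-score loop can be sketched as follows; the porosity range, particle sizes and scoring function are placeholders of ours standing in for the real electrochemical models.

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_microstructure():
    """Stage 1: draw a candidate electrode geometry at random."""
    return {"porosity": rng.uniform(0.2, 0.6),
            "particle_size_um": rng.uniform(0.5, 3.0)}

def performance(ms):
    """Stage 2 stand-in for the physics model: favors mid porosity, small grains."""
    return -((ms["porosity"] - 0.4) ** 2) - 0.01 * ms["particle_size_um"]

candidates = [generate_microstructure() for _ in range(10_000)]
best = max(candidates, key=performance)
print("best simulated configuration:", best)
```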

The advantage of the simulations is that they can target specific areas within the electrodes to better understand how they work and their overall impact on the cell. For example: exchange zones such as “triple phase” points, where ionic, electronic and gaseous phases meet, or exchanges between material surfaces. “Our model allows us to evaluate the best configuration, but also to identify the associated manufacturing process that offers the best energy efficiency for the cell,” says Yann Gavet.

In the medium term, the researcher wishes to continue his work on a model whose dimensions are similar to the observations made in X-ray tomography. An algorithmic challenge that will require more computing time, but will also lead to results that are closer to the reality of the field.

[1] Yann Gavet is a researcher at the Georges Friedel Laboratory, UMR CNRS/Mines Saint-Étienne.

Anaïs Culot


SONATA: an approach to make data sound better

Telecommunications must transport data at an ever-faster pace to meet the needs of current technologies. But this data can be voluminous and difficult to transport at times. Communication channels are congested and transmission limits are reached quickly. Marios Kountouris, a telecommunications researcher at EURECOM, has recently received ERC funding to launch his SONATA project. It aims to shift the paradigm for processing information to speed up its transmission and make future networks more efficient.

“We are close to the fundamental limit for transmitting data from one point to another,” explains Marios Kountouris, a telecommunications researcher at EURECOM. Most of the current research in this discipline focuses on how to organize complex networks and on improving the algorithms that optimize these networks. Few projects, however, focus on improving the transfer of data between transmitters and receivers. This is precisely the focus of Marios Kountouris’ SONATA project, funded by a European ERC consolidator grant.

“Telecommunications are generally based on Shannon’s information theory, which was established in the 1950s,” says the researcher. In this theory, a transmitter simply sends information through a transmission channel, which modulates it and transfers it to a receiver, which then reconstructs it. The main obstacle to get around is the noise accompanying the signal when it passes through the transmission channel. This constraint can be overcome by algorithm-based signal processing and by increasing throughput. “This usually takes place in the same way, regardless of the message being transmitted. Back in the early days, and until recently, this was the right approach,” says the researcher.

Read more on I’MTech: Claude Shannon, a legacy transcending digital technology

Transmission speed for real-time communication

Today, there is an increasing amount of communication between machines that reason in milliseconds. “Certain messages must be transmitted quickly or they’re useless,” says Marios Kountouris. For example, in the development of autonomous cars, if the message collected relates to the detection of a pedestrian on the road so as to make the vehicle brake, it is only useful for a very short period of time. “This is what we call the age, or freshness of information, which is a very important parameter in some cases,” explains Marios Kountouris.
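This freshness constraint fits in a few lines of code; the timestamps and the 200-millisecond usefulness window below are invented for the example, not values from the SONATA project.

```python
from dataclasses import dataclass

@dataclass
class Message:
    payload: str
    generated_at: float  # seconds

def is_still_useful(msg: Message, now: float, max_age: float = 0.2) -> bool:
    """A pedestrian alert older than max_age seconds is no longer worth acting on."""
    return (now - msg.generated_at) <= max_age

alert = Message("pedestrian detected", generated_at=10.00)
print(is_still_useful(alert, now=10.05))  # True: 50 ms old, brake now
print(is_still_useful(alert, now=10.40))  # False: stale, braking comes too late
```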

Yet most transmission and reconstruction is slowed down by surplus information accompanying the message. In the previous example, if the system for detecting pedestrians is a camera that captures images with details about all the surrounding objects, a great deal of the information in the transmission and processing will not contribute to the system’s purpose. For the researcher, “the sampling, transmission and reconstruction of the message must no longer be carried out independently of one another. If excess, redundant or useless data accompanies this process, there can be communication bottlenecks and security problems.”

The semantics of messages

For real-time communication, the semantics of the message (its meaning and usefulness) take on particular importance. Semantics make it possible to take into account the attributes of the message and adjust the format of its transmission depending on its purpose. For example, if a temperature sensor is meant to activate the heating system automatically when the room temperature is below 18°C, the attribute of the transmitted message is simply a binary breakdown of temperature: above or below 18°C.
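In code, this semantic reduction could look like the sketch below. The names and the one-bit encoding are our illustration, not a protocol defined by SONATA.

```python
THRESHOLD_C = 18.0

def semantic_encode(temperature_c: float) -> int:
    """Reduce a raw reading to the single attribute the receiver acts on."""
    return 1 if temperature_c < THRESHOLD_C else 0

def heater_command(bit: int) -> str:
    return "HEAT_ON" if bit else "HEAT_OFF"

readings = [21.3, 19.8, 17.9, 16.5, 18.2]
bits = [semantic_encode(t) for t in readings]
print(bits)                              # [0, 0, 1, 1, 0]: one bit per reading
print([heater_command(b) for b in bits])
```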

Through the SONATA project, Marios Kountouris seeks to develop a new communication paradigm that takes the semantic value of information into account. This would make it possible to synchronize different types of information collected at the same time through various samples and make more optimal decisions. It would also significantly reduce the volume of transported data as well as the associated energy and resources required.

“The success of this project depends on establishing semantic metrics that are concrete, informative and traceable,” explains the researcher. Establishing the semantics of a message means preprocessing sampling by the transmitter depending on how it is used by the receiver. The aim is therefore to identify the most important, meaningful or useful information in order to determine the qualifying attributes of the message. “Various semantic attributes can be taken into account to obtain a conformal representation of the information, but they must be determined in advance, and we have to be careful not to implement too many attributes at once,” he says.

The goal, then, is to build communication networks with key stages for processing the semantics associated with information. First, semantic filters must be used to avoid unnecessary redundancy when collecting information. Then, semantic preprocessing must be carried out in order to associate the data with its purposes. Signal reconstruction by the receiver would also be adapted to its purposes. All this would be semantically-controlled, making it possible to orchestrate the information collected in an agile way and reuse it efficiently, which is especially important when networks become more complex.

This is a new approach from a structural perspective and would help create links between communication theory, sampling and optimal decision-making. ERC consolidator grants fund high-risk, high-reward projects that aim to revolutionize a field, which is why SONATA has received this funding. “The sonata was the most sophisticated form of classical music and was pivotal to its development. I hope that SONATA will be a major step forward in telecommunications optimization,” concludes Marios Kountouris.

By Antonin Counillon

Reconnecting data to the sensory world of human beings: a challenge for industry 4.0 already taken up by SMEs

Gérard Dubey, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, Conservatoire national des arts et métiers (CNAM)

Given the magnitude of uncertainty and risk of disruption threatening the economic and social order, the digitization of productive activities is often presented as a panacea.

Whether it’s a question of industrial production, creating new jobs or reclaiming lost productivity, the narrative supporting industry 4.0 focuses only on the seemingly infinite potential of digital solutions.

Companies that are considered to be active in the digital sector are upheld as trailblazers that will drive recovery. The Covid crisis has only accentuated this trend, which already appeared in the industry of the future programs.

Automated data capture downstream in the production process (with cameras, sensors and information extraction at each workstation) and its algorithmic processing upstream (big data, data science) hold the promise of “agile” management (precise, flexible, personalized) in real production time – something every industrial process strives for.

Nevertheless, this digital transformation seems to have forgotten two key facts: a company is first and foremost a group of human beings that cannot be reduced to numerical targets or abstract productivity criteria. And more importantly for industry, the relationship with the material is still a crucial dimension, which unites work teams and gives them meaning.

As such, there is something of a disconnect – which is only growing – between the stated ambitions of major industrial players and the realities on the ground.

The relationship with the material at industrial SMEs

From this perspective, although their role in (incremental) innovation is all too often overlooked and poorly understood, industrial SMEs have a lot to teach us. This is mainly due to the kind of specific relationships they continue to maintain with the material, if this is understood as a reality concerned as much with human aspects (motions, experiential knowledge, sense knowledge) as physical aspects (measurable). As they are rooted in their local communities and have withstood the test of time, they are accustomed to developing, arranging and organizing heterogeneous expertise and modes of intelligence about reality.

The surveys conducted in many industrial SMEs by a multidisciplinary research team show how important this relationship is to their directors. This can be seen in a number of aspects and affirms that their decisions are rooted in the reality of the situation.

When the CEO of Maison Caron digitized the company’s site and moved to Saclay in 2019, she did not do away with the “old” coffee roaster from the 1950s. Coffee roasting is rooted in the material and the senses: the magic of aromas happens because the nose, eyes and even ears know how to control it – traditional know-how passed down through her family that she now shares with some employees of the company.

At Guilbert Express, another family business that makes high-end welding equipment, the director has observed a progressive loss of know-how in France, following the strategy to offshore export-oriented production in recent years. By going digital, he hopes to unite scattered work teams based on a shared, intercultural experience.

At Avignon Céramic, a company in Berry that makes ceramic cores for the aeronautical industry, quality comes down to daily interactions with the material. And this material – inherently unstable, unpredictable, a source of variability and uncertainty, almost “living” and virtually independent – in turn requires know-how that is itself living, precise and agile, to make a final product that is an acceptable part for the supply chain of major manufacturers.

In industry, human expertise makes it possible to better understand the material. Shutterstock

This is particularly apparent in Opé40, one of the key steps of the quality processes implemented to identify defects in the ceramic cores. This visual and tactile inspection identifies infinitesimal details and requires extensive expertise. But this step is also decisive in establishing collective knowledge and building a work team: while some employees are responsible for detecting defects, everyone works together to use these traces to discover the meaning, similar to a police investigation.

It is through this relationship with material that the work community is brought together. From this perspective, SMEs appear to possess what may be one of the best-kept industrial secrets: how human beings and material contribute to a shared transformation process.

While traceability and numerical data analysis systems play a growing role in the organization of work by companies seeking to harness this human expertise of the material – which is sometimes passed down through generations – the challenge is to integrate these transformations without giving up this culture.   

Humans – the key to adaptation

The director of AQLE, a company located in the North of France that specializes in electronic assemblies, raises questions about the risks posed by loss of meaning among employees if part of their activity is carried out by digital technology. To what extent is it possible to eliminate movements that are considered to be tedious without ultimately affecting the activity in its entirety – meaning developing, maintaining and acquiring expertise (training, learning, ways of passing it down)?

Similarly, the generational gap observed in the use of digital technology is often highlighted (in documents encouraging this transformation) to express the idea that younger employees could become mentors for older employees and act as intermediaries for the digital transformation of a company. But once again, the problem is more interesting and complicated than that.

Training only the oldest employees is not enough to ensure a successful digital transformation. Shutterstock

First of all, there is a need to develop new relationships and a new balance between the concrete (sensory, manual, etc.) and digital worlds. From this perspective, the archaic/innovative dichotomy (often echoed in the cognitive/manual one) appears to be futile. It is the handing down of practices that matters, and not the “disruptive” approach, which more often than not results in approaches that are out of step with realities on the ground. The entire purpose of digital technology is precisely to urge us to question our forms of attachment to work.

One of the challenges of a successful “digital transition” will undoubtedly be to manage to combine or reconcile these different ways of acting on reality in a complementary manner – rather than through an either/or approach. It must be accepted in advance that the information obtained by one method or another is of a different nature. Digital processing of data cannot replace knowledge of the material, which relies on humans’ propensity to sense that which, like themselves, is living, fragile and impermanent.  

Humans’ familiarity with living material, far from being obsolete, may well be one of the keys to adapting to the upheavals taking place and those yet to come. The Covid crisis has shattered certainties and upended strategies. The time has come to remember that human expertise, and the collective memory on which it is founded, are not merely variables to be adjusted, but the very condition for agility, which is increasingly required in a globalized economy marked by uncertainty.  

Gérard Dubey, Sociologist, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, PhD student in ergonomics, Conservatoire national des arts et métiers (CNAM)

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Making algorithms understand what we are talking about

Human language contains different types of information. We understand it all unconsciously, but explaining it systematically is much more difficult. The same is true for machines. The NoRDF Project Chair “Modeling and Extracting Complex Information from Natural Language Text” seeks to solve this problem: how can we teach algorithms to model and extract complex information from language? Fabian Suchanek and Chloé Clavel, both researchers at Télécom Paris, explain the approaches of this new project.

What aspects of language are involved in making machines understand?

Fabian Suchanek: We need to make them understand more complicated natural language texts. Current systems can understand simple statements. For example, the sentence “A vaccine against Covid-19 has been developed” is simple enough to be understood by algorithms. On the other hand, they cannot understand sentences that go beyond a single statement, such as: “If the vaccine is distributed, the Covid-19 epidemic will end in 2021.” In this case, the machine does not understand that the condition required for the Covid-19 epidemic to end in 2021 is that the vaccine is distributed. We also need to make machines understand what emotions and feelings are associated with language; this is Chloé Clavel’s specialist area.
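As a toy illustration of this gap (a single hand-written pattern, not the chair’s actual system), a flat statement is easy to turn into structured data, while a conditional sentence falls through:

```python
import re

# One rule for one simple statement type; the fact tuple below is invented
# for the demo and does not reflect the NoRDF knowledge representation.
SIMPLE = re.compile(r"A (?P<what>.+) has been developed")

def extract(sentence: str):
    m = SIMPLE.search(sentence)
    if m:
        return ("development", m.group("what"))  # a flat fact: easy
    return None  # conditional structure: no rule captures the if-then link

print(extract("A vaccine against Covid-19 has been developed"))
print(extract("If the vaccine is distributed, the Covid-19 epidemic will end in 2021"))
```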

What are the preferred approaches in making algorithms understand natural language?

FS: We are developing “neurosymbolic” approaches, which seek to combine symbolic approaches with deep learning. Symbolic approaches use human-implemented logical rules that simulate human reasoning. For the type of data we process, it is fundamental to be able to interpret afterwards what has been understood by the machine. Deep learning is a type of machine learning in which the machine is able to learn by itself. This allows for greater flexibility in handling variable data and the ability to integrate more layers of reasoning.

Where does the data you analyze come from?

FS: We can collect data from humans’ interactions with company chatbots, especially those of the project’s partner companies. We can extract data from comments on web pages, forums and social networks.

Chloé Clavel: We can also extract information about feelings, emotions, social attitudes, especially in dialogues between humans or humans with machines.

Read on I’MTech: Robots teaching assistants

What are the main difficulties for the machine in learning to process language?

CC: We have to create models that are robust in changing contexts and situations. For example, there may be language variability in the expression of feelings from one individual to another, meaning that the same feelings may be expressed in very different words depending on the person. There is also a variability of contexts to be taken into account. For example, when humans interact with a virtual agent, they will not behave in the same way as with a human, so it is difficult to compare data from these different sources of interactions. Yet, if we want to move towards more fluid and natural human-agent interactions, we must draw inspiration from the interactions between humans.

How do you know whether the machine is correctly analyzing the emotions associated with a statement?

CC: The majority of the methods we use are supervised. The data entered into the models is annotated as objectively as possible by humans. The goal is to ask several annotators to annotate the emotion they perceive in a text, as the perception of an emotion can be very subjective. The model is then trained on the data for which a consensus among the annotators could be found. To test the model’s performance, we inject an annotated text into a model trained on similar texts and check whether the annotation it produces is close to those given by the humans.
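A minimal sketch of this consensus step follows; the example texts, labels and the two-thirds agreement rule are assumptions made for the illustration.

```python
from collections import Counter

annotations = {
    "this is outrageous":        ["anger", "anger", "anger"],
    "well, that's surprising":   ["surprise", "surprise", "joy"],
    "I don't know what to feel": ["sadness", "fear", "joy"],  # no consensus
}

def consensus(labels, min_agreement=2 / 3):
    """Majority label, or None when annotators disagree too much."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes / len(labels) >= min_agreement else None

training_set = {text: label for text, labels in annotations.items()
                if (label := consensus(labels)) is not None}
print(training_set)  # the ambiguous third text is dropped before training
```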

Since the annotation of emotions is particularly subjective, it is important to determine how the model actually understood the emotions and feelings present in the text. There are many biases in the representativeness of the data that can interfere with the model and mislead us on the interpretation made by the machine. For example, if we assume that younger people are angrier than older people in our data and that these two categories do not express themselves in the same way, then it is possible that the model may end up simply detecting the age of the individuals and not the anger associated with the comments.

Is it possible that the algorithms end up adapting their speech according to perceived emotions?

CC: Research is being conducted on this aspect. Chatbots’ algorithms must be relevant in solving the problems they are asked to solve, but they must also be able to provide a socially relevant response (e.g. to the user’s frustration or dissatisfaction). These developments will improve a range of applications, from customer relations to educational or support robots.

What contemporary social issues are associated with the understanding of human language by machines?

FS: This would notably allow a better understanding of the perception of news on social media by humans, the functioning of fake news, and therefore in general which social group is sensitive to which type of discourse and why. The underlying reasons why different individuals adhere to different types of discourse are still poorly understood today. In addition to the emotional aspect, there are different ways of thinking that are built in argumentative bubbles that do not communicate with each other.

In order to be able to automate the understanding of human language and exploit the numerous data associated with it, it is therefore important to take as many dimensions into account as possible, such as the purely logical aspect of what is said in sentences and the analysis of the emotions and feelings that accompany them.

By Antonin Counillon


Innovation in health: towards responsibility

Digital innovations are paving the way for more accurate predictive medicine and a more resilient healthcare system. In order to establish themselves on the market and reduce their potential negative effects, these technologies must be responsible. Christine Balagué, a researcher in digital ethics at Institut Mines-Télécom Business School, presents the risks associated with innovations in the health sector and ways to avoid them.

“Until now, society has approached technology development without looking at the environmental and social impacts of the digital innovations produced. The time has come to do something about this, especially when human lives are at stake in the health sector,” says Christine Balagué, a researcher at Institut Mines-Télécom Business School and co-holder of the Good in Tech Chair [1]. From databases and artificial intelligence for detecting and treating rare diseases to connected objects for monitoring patients, the rapid emergence of tools for prediction, diagnosis and business organization is making major changes in the healthcare sector. Similarly, the goal of a smarter hospital of the future is set to radically change the healthcare systems we know today. The focus is on building on medical knowledge, advancing medical research, and improving care.

However, for Christine Balagué, a distinction must be made between the notion of “tech for good” – which consists of developing systems for the benefit of society – and “good in tech”. She says “an innovation, however benevolent it may be, is not necessarily devoid of bias and negative effects. It’s important not to stop at the positive impacts but to also measure the potential negative effects in order to eliminate them.” The time has come for responsible innovation. In this sense, the Good in Tech chair, dedicated to responsibility and ethics in digital innovations and artificial intelligence, aims to measure the still underestimated environmental and societal impacts of technologies on various sectors, including health.

Digital innovations: what are the risks for healthcare systems?

In healthcare, it is clear: an algorithm that cannot be explained is unlikely to be commercialized, even if it performs well. Indeed, the potential risks are too critical when human lives are at stake. However, a study published in 2019 in the journal Science on the use of commercial algorithms in the U.S. healthcare system demonstrated the presence of racial bias in the results of these tools. This discrimination between patients, or between different geographical areas, therefore gives rise to an initial risk of unequal access to care. “The more automated data processing becomes, the more inequalities are created,” says Christine Balagué. However, machine learning is increasingly being used in the solutions offered to healthcare professionals.
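The kind of audit behind such findings can be sketched in a few lines; the data below is fabricated purely to show the computation and is not drawn from the Science study.

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)        # sensitive attribute
needs_care = rng.random(1000) < 0.3              # ground-truth medical need
# A biased score: group A gets a systematic bonus unrelated to need.
score = needs_care * 0.6 + (group == "A") * 0.15 + rng.random(1000) * 0.4

selected = score > 0.7                           # the algorithm's decision
for g in ("A", "B"):
    in_need = (group == g) & needs_care
    print(f"group {g}: {selected[in_need].mean():.0%} of patients in need flagged")
# A large gap between the two rates signals unequal access to care.
```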

For example, French start-ups such as Aiintense, incubated at IMT Starter, and BrainTale use it for diagnostic purposes. Aiintense is developing decision support tools for all pathologies encountered in intensive care units. BrainTale is looking at the quantification of brain lesions. These two examples raise the question of possible discrimination by algorithms. “These cases are interesting because they are based on work carried out by researchers and have been recognized internationally by the scientific peer community, but they use deep learning models whose results are not entirely explainable. This therefore hinders their application by intensive care units, which need to understand how these algorithms work before making major decisions about patients,” says the researcher.

Furthermore, genome sequencing algorithms raise questions about the relationship between doctors and their patients. Indeed, the limitations of an algorithm, such as the presence of false positives or false negatives, are rarely presented to patients. In some cases, this may lead to unsuitable treatments or operations. It is also possible that an algorithm may be biased by the opinion of its designer. Finally, unconscious biases associated with the processing of data by humans can also lead to inequalities. Artificial intelligence in particular thus raises many ethical questions about its use in the healthcare setting.

What do we mean by a “responsible innovation”? It is not just a question of complying with data processing laws and improving the health care professional’s way of working. “We must go further. This is why we want to measure two criteria in new technologies: their environmental impact and their societal impact, distinguishing between the potential positive and negative effects for each. Innovations should then be developed according to predefined criteria aimed at limiting their negative effects,” says Christine Balagué.

Changing the way innovations are designed

Responsibility is not simply a layer of processing that can be added to an existing technology. Thinking about responsible innovation implies, on the contrary, changing the very manner in which innovations are designed. So how do we ensure they are responsible? Scientists are looking for precise indicators that could result in a “to-do list” of criteria to be verified. This starts with analyzing the data used for training, but also includes studying the interface developed for users and the architecture of the neural network, which can potentially generate bias. In addition, existing environmental criteria must be refined by taking into account the design chain of a connected object and the energy consumption of the algorithms. “The criteria identified could be integrated into corporate social responsibility in order to measure changes over time,” says Christine Balagué.

Within the framework of the Good in Tech Chair, several research projects, including a thesis, are being carried out on our capacity to explain algorithms. Among them, Christine Balagué and Nesma Houmani (a researcher at Télécom SudParis) are interested in algorithms for electroencephalography (EEG) analysis. Their objective is to ensure that the tools use interfaces that can be explained to healthcare professionals, the future users of the system. “Our interviews show that explaining how an algorithm works to users is often something that designers aren’t interested in, and that making it explicit would be a source of change in the decision-making process,” says the researcher. Explainability and interpretability are therefore two key words guiding responsible innovation.

Ultimately, the researchers have identified four principles that an innovation in healthcare must follow. The first is anticipation in order to measure the potential benefits and risks upstream of the development phase. Then, a reflexive approach allows the designer to limit the negative effects and to integrate into the system itself an interface to explain how the technological innovation works to physicians. It must also be inclusive, i.e. reaching all patients throughout the territory. Finally, responsive innovation facilitates rapid adaptation to the changing context of healthcare systems. Christine Balagué concludes: “Our work shows that taking into account ethical criteria does not reduce the performance of algorithms. On the contrary, taking into account issues of responsibility helps to promote the acceptance of an innovation on the market”.

[1] The Chair is supported by the Institut Mines-Télécom Business School, the School of Management and Innovation at Sciences Po, and the Fondation du Risque, in partnership with Télécom Paris and Télécom SudParis.

Anaïs Culot



A standardized protocol to respond to the challenges of the IoT

The arrival of 5G has put the Internet of Things back in the spotlight, with the promise of an influx of connected objects in both the professional and private spheres. However, before witnessing the projected revolution, several obstacles remain. This is precisely what researchers at IMT Atlantique are working on, and they have already achieved results of global significance.

The Internet of Things (IoT) refers to the interconnection of various physical devices via the Internet for the purpose of sharing data. Sometimes referred to as the “Web 3.0”, this field is set to develop rapidly in the coming years, thanks to the arrival of new networks, such as 5G, and the proliferation of connected objects. Its applications are infinite: monitoring of health data, the connected home, autonomous cars, real-time and predictive maintenance on industrial devices, and more.

Although it is booming, the IoT still faces major challenges. “We need to respond to three main constraints: energy efficiency, interoperability and security,” explains Laurent Toutain, a researcher at IMT Atlantique. But there is one problem: these three aspects can be difficult to combine.

The three pillars of the IoT

First, energy is a key issue for the IoT. For most connected objects, the autonomy of a smartphone is not sufficient. In the future, a household may have several dozen such devices. If they each need to be recharged every two or three days, the user will have to devote several hours to this task. And what about factories that could be equipped with thousands of connected objects? In some cases, these are only of value if they have a long battery life. For example, a sensor could be used to monitor the presence of a fire extinguisher at its location and send an alert if it does not detect one. If you have to recharge its battery regularly, such an installation is no longer useful.

For a connected object, communication features account for the largest share of energy consumption. Thus, the development of IoT has been made possible by the implementation of networks, such as LoRa or Sigfox, allowing data to be sent while consuming little energy.

The second issue is interoperability, i.e. the ability of a product to work with other objects and systems, both current and future. Today, many manufacturers still rely on proprietary universes, which necessarily limits the functionalities offered by the IoT. Take the example of a user who has bought connected light bulbs from two different brands. They will not be able to control them via a single application.

Finally, the notion of security remains paramount within any connected system. This observation is all the more valid in the IoT, especially with applications involving the exchange of sensitive data, such as in the health sector. There are indeed many risks. An ill-intentioned user could intercept data during transmission, or send false information to connected objects, thus inducing wrong instructions, with potentially disastrous consequences.

Read more on I’MTech: The IoT needs dedicated security – now

On the Internet, methods are already in place to limit these threats. The most common is end-to-end data encryption. Its purpose is to make information unreadable while it is being transported, since the content can only be deciphered by the sender and receiver of the message.
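As a concrete reminder of what this looks like, here is a minimal end-to-end encryption sketch using the Python cryptography library; how the shared key is distributed between the two ends is deliberately left out of the sketch.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # assumed to be shared safely in advance
sender, receiver = Fernet(key), Fernet(key)

token = sender.encrypt(b"temperature=17.9C")
print(token)                     # opaque bytes while crossing the network
print(receiver.decrypt(token))   # b'temperature=17.9C' at the other end
```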

Three contradictory requirements?

Unfortunately, each of the three characteristics can influence the others. For example, by multiplying the number of possible interlocutors, interoperability raises more security issues. But it also affects energy consumption. “Today, the Internet is a model of interoperability,” explains Laurent Toutain. “For this, it is necessary to send a large amount of information each time, with a high degree of redundancy. That offers remarkable flexibility, but it also takes up a lot of space.” This is only a minor disadvantage for a broadband network, but not for the IoT, which is constrained in its energy consumption.

Similarly, if you want to have a secure system, there are two main possibilities. The first is to close it off from the rest of the ecosystem, in order to reduce risks, which radically limits interoperability.

The second is to implement security measures, such as end-to-end encryption, which results in more data being sent, and therefore increased energy consumption.

Reducing the amount of data sent, without compromising security

For about seven years, Laurent Toutain and his teams have been working to reconcile these different constraints in the context of the IoT. “The idea is to build on what makes the current Internet so successful and adapt it to constrained environments,” says the researcher. “We are therefore taking up the principles of the encryption methods and protocols used today, such as HTTP, but taking into account the specific requirements of the IoT.”

The research team has developed a compression mechanism named SCHC (Static Context Header Compression, pronounced “chic”). It aims to improve the efficiency of encryption solutions and provide interoperability in low-power networks.

For this purpose, SCHC works on the headers of the usual Internet protocols (IP, UDP and CoAP), which contain various details: source address, destination address, location of the data to be read, etc. The particularity of this method is that it takes advantage of a specificity of the IoT: a simple connected object, such as a sensor, has far fewer functions than a smartphone. It is then possible to anticipate the type of data sent. “We can thus free ourselves from the redundancy of classic exchanges on the web,” says Laurent Toutain. “We then lose flexibility, which could be inconvenient for standard Internet use, but not for a sensor, which is limited in its applications.”

In this way, the team at IMT Atlantique has achieved significant results. It has managed to reduce the size of the headers traditionally sent, weighing 70-80 bytes, to only 2 bytes, and to 10 bytes in their encrypted version. “A quantity that is perfectly acceptable for a connected object and compatible with network architectures that consume very little energy,” concludes the researcher.
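The principle can be sketched as follows. The rule format below is a drastic simplification of our own making, not the standardized SCHC rule model, and the field values are invented.

```python
# Both ends share a static context: the header values a given sensor is
# known to use. The transmitter then sends only a short rule identifier.
CONTEXT = {
    1: {"src": "2001:db8::42", "dst": "2001:db8::1", "port": 5683},
}

def compress(header: dict) -> bytes:
    for rule_id, known in CONTEXT.items():
        if all(header.get(k) == v for k, v in known.items()):
            return rule_id.to_bytes(2, "big")   # 2 bytes replace the header
    raise ValueError("no rule matches: send the header uncompressed")

def decompress(packet: bytes) -> dict:
    return dict(CONTEXT[int.from_bytes(packet, "big")])

wire = compress({"src": "2001:db8::42", "dst": "2001:db8::1", "port": 5683})
print(len(wire), "bytes on the wire instead of dozens")
print(decompress(wire))
```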

A protocol approved by the IETF

But what about that precious interoperability? With this objective, the authors of the study approached the IETF (Internet Engineering Task Force), the international organization for Internet standards. The collaboration has paid off: SCHC has been approved by the organization and now serves as the global standard for compression. This recognition is essential, but it is only a first step towards effective interoperability. How can manufacturers now be persuaded to really integrate the protocol into their connected objects? To this end, Laurent Toutain partnered with Alexander Pelov, also a researcher at IMT Atlantique, to found the start-up Acklio. The company works directly with manufacturers and offers them solutions to integrate SCHC into their products. It thus intends to accelerate the adoption of the protocol, an effort supported in particular by €2 million in funds raised at the end of 2019.

Read more on I’MTech: Acklio: linking connected objects to the Internet

Nevertheless, manufacturers still need to be convinced that using a standard is also in their interest. To this end, Acklio also aims to position SCHC among the protocols used within 5G. To achieve this, it will have to prove itself with the 3GPP (3rd Generation Partnership Project), which brings together the world’s leading telecommunications standards bodies. A process that is “much more constraining than that of the IETF,” warns Laurent Toutain.

Bastien Contreras


Shedding some light on black box algorithms

In recent decades, algorithms have become increasingly complex, particularly through the introduction of deep learning architectures. This has gone hand in hand with increasing difficulty in explaining their internal functioning, which has become an important issue, both legally and socially. Winston Maxwell, legal researcher, and Florence d’Alché-Buc, researcher in machine learning, who both work for Télécom Paris, describe the current challenges involved in the explainability of algorithms.

What skills are required to tackle the problem of algorithm explainability?

Winston Maxwell: In order to know how to explain algorithms, we must draw on different disciplines. Our multi-disciplinary team, AI Operational Ethics, focuses not only on mathematical, statistical and computational aspects, but also on sociological, economic and legal aspects. For example, we are working on an explainability system for image recognition algorithms used, among other things, for facial recognition in airports. Our work therefore encompasses these different disciplines.

Why are algorithms often difficult to understand?

Florence d’Alché-Buc: Initially, artificial intelligence used mainly symbolic approaches, i.e., it simulated the logic of human reasoning. Sets of logical rules, known as expert systems, allowed artificial intelligence to make a decision by exploiting observed facts. This symbolic framework made AI more easily explainable. Since the early 1990s, AI has increasingly relied on statistical learning, such as decision trees or neural networks, as these structures allow for better performance, learning flexibility and robustness.

This type of learning is based on statistical regularities and it is the machine that establishes the rules which allow their exploitation. The human provides input functions and an expected output, and the rest is determined by the machine. A neural network is a composition of functions. Even if we can understand the functions that compose it, their accumulation quickly becomes complex. So a black box is then created, in which it is difficult to know what the machine is calculating.

How can artificial intelligence be made more explainable?

FAB: Current research focuses on two main approaches. There is explainability by design, where explanatory output functions are built into any new algorithm, making it possible to progressively describe the steps carried out by the neural network. However, this is costly and impacts the performance of the algorithm, which is why it is not yet very widespread. In general, and this is the other approach, when an existing algorithm needs to be explained, an a posteriori approach is taken: after an AI has established its calculation functions, we try to dissect the different stages of its reasoning. For this there are several methods, which generally seek to break the entire complex model down into a set of local models that are less complicated to deal with individually.
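One common family of a posteriori methods fits a simple surrogate around a single decision, in the spirit of LIME. Here is a minimal sketch, in which the opaque model and the perturbation scale are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def black_box(X):
    """Stand-in for a model whose internals we cannot read."""
    return np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2

x0 = np.array([1.0, 2.0])                 # the decision to explain
rng = np.random.default_rng(0)
neighbors = x0 + 0.05 * rng.normal(size=(200, 2))   # small local perturbations

surrogate = LinearRegression().fit(neighbors, black_box(neighbors))
print("local feature weights:", surrogate.coef_)
# Near x0 the weights approach cos(1.0) ≈ 0.54 and 2 * 0.1 * 2.0 = 0.4:
# a readable, local account of what drives the opaque model's output.
```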

Why do algorithms need to be explained?

WM: There are two main reasons why the law stipulates that there is a need for the explainability of algorithms. Firstly, individuals have the right to understand and to challenge an algorithmic decision. Secondly, it must be guaranteed that a supervisory institution such as the French Data Protection Authority (CNIL), or a court, can understand the operation of the algorithm, both as a whole and in a particular case, for example to make sure that there is no racial discrimination. There is therefore an individual aspect and an institutional aspect.

Does the format of the explanations need to be adapted to each case?

WM: The formats depend on the entity to which it needs to be explained: for example, some formats will be adapted to regulators such as the CNIL, others to experts and yet others to citizens. In 2015, an experimental service to deploy algorithms that detect possible terrorist activities in case of serious threats was introduced. For this to be properly regulated, an external control of the results must be easy to carry out, and therefore the algorithm must be sufficiently transparent and explainable.

Are there any particular difficulties in providing appropriate explanations?

WM: There are several things to bear in mind. For example, information fatigue: when the same explanation is provided systematically, humans will tend to ignore it. It is therefore important to use varying formats when presenting information. Studies have also shown that humans tend to follow a decision given by an algorithm without questioning it. This can be explained in particular by the fact that humans will consider from the outset that the algorithm is statistically wrong less often than themselves. This is what we call automation bias. This is why we want to provide explanations that allow the human agent to understand and take into consideration the context and the limits of algorithms. It is a real challenge to use algorithms to make humans more informed in their decisions, and not the other way around. Algorithms should be a decision aid, not a substitute for human beings.

What are the obstacles associated with the explainability of AI?

FAB: One aspect to be considered when we want to explain an algorithm is cyber security. We must be wary of the potential exploitation of explanations by hackers. There is therefore a triple balance to be found in the development of algorithms: performance, explainability and security.

Is this also an issue of industrial property protection?

WM: Yes, there is also the aspect of protecting business secrets: some developers may be reluctant to discuss their algorithms for fear of being copied. Another counterpart to this is the manipulation of scores: if individuals understand how a ranking algorithm, such as Google’s, works, then it would be possible for them to manipulate their position in the ranking. Manipulation is an important issue not only for search engines, but also for fraud or cyber-attack detection algorithms.

How do you think AI should evolve?

FAB: There are many issues associated with AI. In the coming decades, we will have to move away from the single objective of algorithm performance to multiple additional objectives such as explainability, but also equitability and reliability. All of these objectives will redefine machine learning. Algorithms have spread rapidly and have enormous effects on the evolution of society, but they are very rarely accompanied by instructions for their use. A set of adapted explanations must go hand in hand with their implementation in order to be able to control their place in society.

By Antonin Counillon

Also read on I’MTech: Restricting algorithms to limit their powers of discrimination

 


La Ruche à Vélos is developing secure bicycle parking throughout France

Innovative and appropriate parking solutions must be created for the long-term development of cycling. The start-up La Ruche à Vélos, incubated at IMT Atlantique, offers an automated, secure and easy-to-use parking facility. This modular concept is connected to a mobile application and is intended for all users, with facilities acquired by local authorities. For this solution, La Ruche à Vélos won the 2020 Bercy-IMT Innovation Award on February 2nd.

In 2020, many French people got back on their bikes. In its annual report published last October, the Vélo & Territoires association reported an average increase in bicycle use of 9% between January and September 2020 (compared to 2019) [1]. In a year strongly marked by strikes and the health crisis, exceptional circumstances strongly supported this trend, and the appeal of cycling shows no signs of fading. While local authorities encourage cycling, it also raises new issues in terms of security and parking. How many cyclists have already found their bike without a saddle, without a wheel, or not found their bike at all? To meet these challenges, the start-up La Ruche à Vélos, incubated at IMT Atlantique, proposes an innovative secure bicycle storage solution.

Automatic and secure parking

The increase in the number of cyclists is due in part to the emergence of electric bicycles. These bikes are heavier and bulkier and require a significant financial investment from their users. They therefore pose new constraints and require more security when parked. La Ruche à Vélos has developed a product that meets these expectations: a secure bicycle parking facility connected to a mobile application. Its three founders paid particular attention to ease of use. “It takes between 20 and 30 seconds to drop off or pick up a bike,” says Antoine Cochou, co-creator of the start-up. But how does it work?

The application allows the user to geolocate a parking facility with available spaces and to reserve one in advance. After identifying themselves on site, cyclists have access to a chamber, and deposit their bike on a platform before validating. There are also compartments available allowing users to recharge their batteries. Inside the parking facility, a machine stores the bike automatically. The facility covers several floors, thus saving ground space and facilitating integration of the system into the urban landscape. It can hold about 50 bikes over 24 square meters, dividing the bicycle parking space otherwise required on sidewalks by four! In addition, the size of the parking facility is flexible. The number of spaces therefore varies according to the order.

In June 2021, a first prototype of about ten spaces will be installed in the city of Angers. The young innovators hope to collect enough feedback from users to improve their next product. Two more facilities are planned for the year. They will have 62 to 64 spaces. “Depending on the location, a balance must be struck between user waiting time and the demand for services. These two parameters are related to the number of sites and the flow of users at peak times (train station, shops, etc.),” says Antoine Cochou.

Strategic locations with adapted subscriptions

La Ruche à Vélos is aimed directly at local authorities who can integrate this solution into their mobility program. It also targets businesses and real estate developers wishing to offer an additional service to their employees or future residents. Depending on the needs, the parking facilities could therefore be installed in different strategic locations. “Local authorities are currently focusing on railway stations and city centers, but office or residential areas are also being considered,” says Antoine Cochou. Each zone has its own target and therefore its own form of subscription. In other words, one-off parking in the city, daytime offers for offices, and evening and weekend passes for residents.

Initially, subscriptions for the prototype installed in Angers will be managed by the start-up. However, future models are expected to couple parking passes with local public transit passes, so subscriptions will be handled by the cities while the start-up focuses on maintenance support. “In this sense, our next models will be equipped with cameras and it will be possible to control them remotely,” says Maël Beyssat, co-creator of La Ruche à Vélos. Communities will have a web interface to monitor the condition and operating status of the parking facility (rate of use, breakdowns, availability, etc.).

For the future, the company is considering the installation of solar panels to offer a zero-electricity consumption solution. Finally, other locations could be considered outside of popular touring sites on cycle routes.

[1] Result obtained with the help of sensors measuring the number of bikes going past.

By Anaïs Culot

Corenstock Chair: a Trial Cylinder for the Heating Industry

At the start of 2021, IMT and elm.leblanc launched the Corenstock Industrial Chair to address the challenges of the energy and digital transition in the domestic heating industry. The objective? To present, within four years, a demonstrator for the hot water tank of the future: more resistant, more efficient and more durable. Behind this prototype lies the development of new economic models for the global transformation of the industry.

The principle of the Corenstock Chair (Lifecycle Design & Systemic Approach for Energy Efficiency of Water Heating and Storage Devices), launched in early 2021, is to take a piece of everyday equipment, optimize it, and use it as a model for the transformation of an entire industry. The objective is to present, within four years, a demonstrator for an innovative hot water tank that is more energy-efficient, more sustainable and more connected to its users. The project is not limited to the design of a new domestic hot water tank, however: it covers the transition challenges of the heating industry as a whole. Alongside the development of new business models, the underlying aim is to redefine the dedicated design methodologies and to generalize sustainable production and end-of-life recovery in order to establish new economic balances.

The Chair, led by IMT, is co-funded in equal parts by the ANR (the French National Research Agency) and elm.leblanc, a company specializing in the production of water heaters and boilers. The project relies on the complementary skills of its academic and industrial partners. “We are exploring two key avenues: on the one hand, technological innovation, involving design, materials and smart-control issues,” says Mylène Lagardère, a researcher at IMT Lille Douai, who holds the Corenstock Chair, jointly coordinated with Xavier Boucher, a researcher at Mines Saint-Étienne. Responsible for the operational management of the Chair, he adds: “On the other hand, we are working on innovation capabilities, decision-making support for new design methods, and the transformation of the production chain together with the company’s organization.” The two researchers say they have “established a trusting and long-term partnership with elm.leblanc, with the goal of pursuing future projects in this area”.

What would be the tank of the future?

“The goal is to improve the energy efficiency of a product that everyone owns at home,” says Mylène Lagardère. Moreover, this equipment is crucial for various thermal systems: whether gas, oil or electricity is used as the energy source, we all need to store domestic hot water. Finding ways to improve thermal performance, or selecting materials to make the tank as efficient as possible, involves a large and diverse set of research actions. The Chair will thus benefit from the recruitment of five PhD students, four post-docs and three engineers.

Product durability is one of the main areas for improvement, and predictive maintenance is promising in this respect. The use of smart sensors is essential, both to better evaluate the tank’s performance and to anticipate necessary repairs before it breaks down. Mylène Lagardère specifies that the objective is to find “the best compromise between each component and each function of the tank, while taking into account its integration into its environment and the management of its end of use”.
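A minimal sketch of what such sensor-driven anticipation can look like follows; the simulated efficiency drift, the rolling window and the alert threshold are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(120)
# Synthetic sensor reading: tank efficiency slowly degrading, plus noise.
efficiency = 0.92 - 0.002 * days + rng.normal(0, 0.01, days.size)

window = 14                                   # two-week rolling average
rolling = np.convolve(efficiency, np.ones(window) / window, mode="valid")
alert_day = int(np.argmax(rolling < 0.80)) + window - 1

print(f"schedule maintenance around day {alert_day}, before failure occurs")
```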

Behind the project’s target product, a broader reflection on the entire product life cycle is emerging, addressing the resources needed for production, product durability, and recovery at the end of use. The project’s advances in improving the value chain are expected to be generalized to the entire industry: “The work conducted on the hot water storage tank is the entry point for more general work on the economic model itself,” says Xavier Boucher, and these questions are fully integrated into the Corenstock Chair program.

Evolution of the industry

Xavier Boucher emphasizes that “these hot water storage tanks are at the heart of a variable system, and a transformation of this sector involves industrial actors at different levels, including customers as well as maintenance providers”. As a result, relations with customers will naturally change. The two researchers note: “This is part of a fairly strong transition under way in the industry’s business. It is no longer simply a matter of selling a hot water storage tank, but of including the tank in a multi-actor performance contract.”

Companies, for their part, now need to build customer loyalty and sustainability. “These different levers are necessary to establish a win-win relationship between the customer and the manufacturer,” says Xavier Boucher. Intelligent management offers opportunities to improve energy costs, reduce maintenance costs and, ultimately, lower the final energy bill. It also reduces internal manufacturing and maintenance costs.

Mylène Lagardère reports that they aim “to enlighten decision-makers on their economic transformation, particularly through the search for more sustainable indicators”. Her colleague from Saint-Étienne adds that “virtualization proves to be a key tool in planning this transition”. The Corenstock Chair takes on the role of simulator of this transformation by observing the behavior of users and the various partners. The project combines several routes of innovation, whether towards digital technology, networking, or what is known as digital servicing: a strategy that converges towards a long-term customer relationship through digital services. “The challenge lies in the evolution of value-creation mechanisms,” says Mylène Lagardère.

The Chair is also committed to disseminating the results generated and the knowledge acquired to students and future engineers in the field, as well as to elm.leblanc’s technical and innovation staff through professional training. Xavier Boucher notes: “There are two aspects of training: short modules to increase professional skills, and a specialized master’s degree to integrate the solutions more broadly into the industrial framework.” One of the objectives of the specialized master’s degree is to pool the skills of each school and encourage interaction between the different domains of expertise required.

“Generally speaking, the Chair cannot simply be reduced to technological innovation. On the contrary, it covers a global reflection on what the industry of the future is,” says Xavier Boucher. This includes facilitating collaboration and openness among different sectors: industrial, technological and economic. Such collaboration is essential to ensure that these transformations become a lasting part of tomorrow’s industry. “The Chair marks what elm.leblanc is building with IMT: a new way of approaching these innovation processes, through strong collaboration and a relationship of trust to increase the capacity for innovation,” concludes Xavier Boucher.

Tiphaine Claveau