
AlertSmartCity, Cook-e, Dastra, DMS, GoodFloow, JobRepublik, PlaceMeet and Spectronite supported by the “honor loan” scheme

The members of the IMT Digital Fund (IGEU, IMT and Fondation Mines-Télécom) held a meeting on 23 February. On this occasion, 8 start-ups from the incubators of IMT Mines Albi, IMT Atlantique, IMT Lille Douai, Télécom Paris, Mines Saint-Étienne, Télécom SudParis and Institut Mines-Télécom Business School were awarded 18 interest-free honor loans for a total of €340,000.


AlertSmartCity (the incubator at IMT Mines Albi) wishes to create an interoperable alert management platform, to be used in the event of a major risk (natural, industrial, health or terrorist disaster). This platform will allow municipalities to send qualified and geolocated alerts to their public institutions (schools, cultural and sports facilities, hospitals, administrative offices and other places open to the public) using dedicated communication terminals that are resilient to network outages and interactive (bi-directional communication). These reception terminals will allow disaster victims to report back to the crisis unit.
Two honor loans of €20,000 each.


Cook-e (Télécom Paris Novation Center) proposes a multi-function connected robot for restaurant kitchens. The restaurant owner enters a recipe into the robot software and then loads the ingredient tanks. These tanks can be stored cool, dry or warm. The robot then prepares the recipe: it measures out, cuts, cooks, mixes and cleans itself automatically. It can prepare all dishes with mixed ingredients in small pieces: pasta with sauce, salads, bowls, rice, meat and fish in small pieces, vegetable side dishes, etc.
One honor loan of €20,000 and two honor loans of €10,000. Find out more


Dastra (IMT Starter) is the simple, guided data governance solution that enables data protection professionals to meet the requirements of the GDPR, save time, and develop a company data culture. One small step for DPOs, one giant leap for data protection!
Two honor loans of €8,000 and two honor loans of €12,000. Find out more


DMS (the incubator at Mines Saint-Étienne) is an AI platform for managing and anticipating container flows, keeping port and inland container traffic flowing. It connects all the players in the container port logistics chain (shipowners/terminals) with those located inland (carriers/depots).
Three honor loans of €20,000 each. Find out more


GoodFloow (the IMT Lille Douai incubator) automates the tracking and management of reusable packaging. Their service consists of using IoT in individual packaging along with a web/mobile app. This solution eliminates asset management and change management issues related to packaging, makes flows more reliable, and enables a sustainable transition in logistics.
One honor loan of €40,000. Find out more


JobRepublik (IMT Starter) is the meeting point between companies in need of temporary workers and anyone looking for additional income. The start-up offers the first open marketplace dedicated to “blue collar” freelancers that allows a direct relationship between 700,000 small businesses in the logistics, retail and restaurant sectors and 3 million independent workers.
Two honor loans of €20,000 each. Find out more


Placemeet (incubator at IMT Atlantique) is a simple and intuitive platform optimized for engagement and interaction. Attendees can move between rooms as if it were a physical event and enjoy an exceptional experience from anywhere in the world.
Two honor loans of €20,000 each. Find out more


Spectronite (Télécom Paris Novation Center) has developed a breakthrough technology, with the implementation of an architecture based on Software Defined Radio, which can offer speeds up to 10 Gbps over very long distances, i.e. up to 20x the speed offered by traditional products. Spectronite offers a disruptive innovation for mobile operators, enabling them to deploy 4G and soon 5G, even in territories where fiber is not available.
One honor loan of €10,000 and one honor loan of €30,000. Find out more

The honor loan program

Created in late 2011 under the aegis of the Grandes Écoles and Universities Initiative (IGEU) association, the IMT Digital Fund for honor loans is co-financed by the Fondation Mines-Télécom, BPI France and Revital’Emploi.

What do we know about the environmental impact of biocomposites?

Bio-based materials are an alternative to those derived from fossil resources. They have been increasingly promoted in recent years. However, given the recent development of this sector, their real environmental impact is still relatively unknown. Joana Beigbeder, researcher in environmental analysis at IMT Mines Alès, provides an update on the life cycle analysis (LCA) of these emerging materials.

Although bio-based materials are presented as an effective alternative to their petroleum-based predecessors, the details of their environmental impact are still not well known. Plant-based materials are certainly an improvement for some impacts, such as carbon footprint, but it can be trickier when it comes to soil use or soil pollution, for example. To get to the bottom of this, life cycle analysis is an indispensable tool.

Joana Beigbeder, researcher in environmental impacts at IMT Mines Alès, is working on the life cycle analysis of these new bio-based materials, particularly biocomposites. The objective is to compare the environmental impact of different scenarios, either between bio-based and petroleum-based materials, or according to the different stages of the life cycle of the same material. The researcher says, “the focus of life cycle analysis is to study several environmental impacts in parallel and to identify possible transfers of pollution throughout the life cycle of a product”.

Different sources of impact

With regard to environmental impact, some points seem obvious. Looking only at the carbon footprint, bio-based materials appear more beneficial than their petroleum-based cousins. But the reality is more complex, because there are many variables to consider. “Carbon impact is a well-known indicator, but we must also take into account other impacts such as human toxicity, soil or water pollution or global warming”, says Joana Beigbeder. Take the automotive sector, for example. If, for the same function, a part made of bio-based material weighs more, it will require more energy during use and will not necessarily be beneficial. “It may be a better solution from an environmental point of view, but that’s precisely what the LCA will tell us,” she says.
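To make the trade-off concrete, here is a toy multi-criteria comparison in the spirit of an LCA (a sketch only; every figure and category below is invented for illustration, not real LCA data):

```python
# Toy multi-criteria comparison in the spirit of an LCA.
# All figures are invented for illustration, not real LCA data.

# Production-phase impacts per part, by category.
production = {
    "petroleum_part": {"climate_kgCO2e": 12.0, "fossil_MJ": 180.0, "land_m2yr": 0.1},
    "biobased_part":  {"climate_kgCO2e": 7.0,  "fossil_MJ": 60.0,  "land_m2yr": 2.5},
}

# A heavier bio-based part burns more fuel over the vehicle's life (use phase).
use_phase_kgCO2e = {"petroleum_part": 30.0, "biobased_part": 36.0}

for part, impacts in production.items():
    total_climate = impacts["climate_kgCO2e"] + use_phase_kgCO2e[part]
    print(f"{part}: climate {total_climate} kgCO2e total, "
          f"fossil {impacts['fossil_MJ']} MJ, land {impacts['land_m2yr']} m2.yr")

# The bio-based part wins on fossil resources but loses on land use, and its
# extra weight narrows the climate advantage: the kind of pollution transfer
# an LCA is designed to expose.
```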

Some points seem more obvious, such as the consumption of fossil resources and the impact on global warming in general. Plant materials absorb CO2 during their growth, which gives them an undeniable advantage. That said, their growth implies an agricultural impact, especially on land use. “Depending on the type of crop, this can lead to soil pollution. Generally, the more fertilizers and additives that are required for growth, the less beneficial some of the impacts will be,” says Joana Beigbeder. This brings up a new issue that does not exist with petroleum-based materials.

“It’s always a case of compromising, and we look for ways to improve the process or the product,” says the researcher. “Plants, such as flax or hemp, require little or no input and are therefore a preferred option. But the material is only one area for improvement when it comes to eco-design”, she notes. To reduce the environmental impact of a product, lightening its weight and extending its lifespan are key points. This can include the reparability of the product, or even new concepts such as prioritizing the use of the product over the purchase of the product.

“There is also a question of ecological common sense, if we study a disposable product, reusable or compostable materials will be favored,” says Joana Beigbeder. One research topic includes the development of compostable diapers and band-aids, a promising step in reducing plastic pollution.

Predicting potential impacts

“Life cycle analysis study is really done on a case-by-case basis depending on the service provided by a product,” says Joana Beigbeder. To estimate the consumption of solvents or the chemical synthesis of materials, the researcher uses previous scientific publications and mathematical models. But in some areas, data is missing. “We sometimes have to use fairly strong approximations on certain impacts, but this still allows us to map trends and areas for improvement,” she maintains. This can be the case for the production of plant fibers, as well as for their disposal at the end of their use.

As these materials do not yet have a major presence on the market, there are currently no channels for processing the resulting waste. In fact, there is no data on the environmental impact of the incineration, landfill, or pollution from these newly emerging materials. “To estimate these impacts, we liken them to other, better-known, closely related materials on which we have data, to develop our hypotheses,” she explains. This work also generates new data for LCA, such as recycling simulations for end-of-life treatment.

“Some bio-based materials are at the laboratory stage today, and it’s still hard to imagine what will happen at the industrial stage,” says Joana Beigbeder. The researcher then works on predictions: she imagines scenarios for materials that are not yet on the market and analyzes whether this has any impact on an environmental level. Also, given the recent nature of the bio-materials sector, their production will be less optimized than that of traditional petroleum-based materials which have evolved and improved. “This works against the newer materials, as they will need to go through several developmental stages,” she points out.

This research is essential for the development of sustainable alternatives to conventional materials and is part of an overall vision to meet the challenges of plastic pollution. “A lot of research is focused on the development of new materials with a smaller environmental impact. The VOICE project, for example, launched in 2018 in partnership with Innovation Plasturgie Composites, Silvadec, IMT Mines Alès and funded by ADEME, focuses on the recycling of biocomposite decking. Or the Agrobranche project, which brings together 8 French partners[1], and focuses on, among other things, the study of bio-based reinforcements from agroforestry,” she concludes. These two projects reflect the blossoming of new emerging materials and the current development of research to find sustainable alternatives to the plastic problem.

[1] Scop Agroof (coordinator), IMT Mines Alès, CIRAD BiowooEB, LERMAB, FRD, CRA Bretagne, CIRAD AMAP, INRA SPO

Tiphaine Claveau


El Niño: communities in the face of weather’s bad boy

South America must regularly face a climate event with far-ranging effects: El Niño, which leads to localized flooding. This type of catastrophe also results in changes in the behavior of local communities – a topic which has been little studied. Yet these changes provide a good example of individuals’ resilience to crises. By studying consumption habits in the regions affected by El Niño, Vincent Gauthier, a researcher at Télécom SudParis, seeks to understand how communities react to this event.

El Niño is a recurring climate event, which takes place every two to seven years on the equatorial Pacific coast of South America. It leads to localized heavy precipitation with serious consequences. “The 2017 El Niño phenomenon was especially violent and was characterized by two periods of heavy rainfall, resulting in human casualties and extensive destruction of physical structures,” says Vincent Gauthier, a researcher at Télécom SudParis who studies complex networks and is analyzing the impact of the 2017 episode on the behavior of the local community.

Peru was strongly impacted by the most recent El Niño phenomenon, especially in the north of the country and on its Pacific coast, which includes the Lima region. The episodes of rainfall gave rise to two periods of flooding: the first in February and the second in early April. Vincent Gauthier’s research seeks to understand how economic behavior changes before, during and after these periods.

To study these changes, the researcher uses data about consumption in the region. “Our approach is to analyze banking transaction data, with different granularity levels,” he explains. Studies were carried out in partnership with the Pacific University in Lima and led to the publication of a research article in the journal Plos One.

At the countrywide level, the results are conclusive: during each period of heavy rainfall there is a significant drop in the number and volume of transactions overall, therefore indicating that individuals consume less during the weather event. Transactions return to normal in the days following the rainfall, indicating that the overall impact is fairly limited in duration.   

Resilience to El Niño

The study then focused specifically on the region of Lima, which includes the capital and surrounding rural areas. This made it possible to categorize areas according to dynamic changes in consumption. Unsurprisingly, the areas recording the most significant drops in transactions were the most affected by the rainfall. However, certain areas recorded rises in consumption before and during the episode, a behavior which may reflect a trend of purchasing as a precautionary measure.

To better understand such variations, Vincent Gauthier established a retail network model. This representation indicates not only consumers’ purchases, but also consumption paths. Such a model shows the various connections between stores, based on how busy they are, their ranking and the type of products sold. For example, a consumer who carries out a transaction at a pharmacy and then another at a supermarket strengthens the link between these two types of stores within the network. This makes it possible to study which links are the strongest in the event of a disturbance.  
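As a minimal sketch of the idea (the record format, categories and data below are assumptions for illustration, not the study’s actual dataset), consecutive purchases by the same anonymized customer can be turned into weighted links between store categories:

```python
# Build a store-category transition network from consumers' purchase paths:
# each pair of consecutive transactions by the same (anonymized) customer
# strengthens the link between the two store categories involved.
from collections import defaultdict
from itertools import pairwise  # Python 3.10+

# (customer_id, timestamp, store_category) - toy records, not the real data
transactions = [
    ("c1", 1, "pharmacy"), ("c1", 2, "supermarket"), ("c1", 4, "restaurant"),
    ("c2", 1, "fuel"), ("c2", 3, "supermarket"),
]

paths = defaultdict(list)
for customer, _ts, category in sorted(transactions):
    paths[customer].append(category)

edges = defaultdict(int)
for path in paths.values():
    for a, b in pairwise(path):
        edges[(a, b)] += 1  # link strength = how often b follows a

print(dict(edges))
# {('pharmacy', 'supermarket'): 1, ('supermarket', 'restaurant'): 1,
#  ('fuel', 'supermarket'): 1}
```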

“During periods of heavy rainfall, the size of the network was strongly impacted,” says the researcher. “The connections were therefore reduced to stores that sell food, medical supplies and fuel. These connections represent the core of the network and if this core collapses, so does the whole system,” explains Vincent Gauthier. Modeling and studying resilience therefore allow us to understand the vulnerability and risks to this core network.

Using this approach, it can be seen that the first episode of rainfall had a stronger impact than the second one on the size of the core network, as well as on the time it took to rebuild a wider network. Yet the second period of rainfall was more violent from a weather perspective. This counterintuitive observation may be explained by better community preparedness for the second period of heavy rainfall and flooding. This difference in behavior highlighted by modeling is a marker of the resilience of the Peruvian people.

Understanding people through their purchases

To put these models in place, researchers used all the metadata associated with banking transactions. “Each transaction produces data accompanied by nomenclatures, which contain information about the type of store in which it was carried out, for example supermarkets, restaurants, pharmacies or service stations,” says Vincent Gauthier. “This nomenclature also contains the date of purchase and the anonymized identity of the person who made the purchase,” he continues.

This means that each individual’s purchasing path can be traced over time to provide an overview of his or her specific economic behavior during various periods. This analysis makes it possible to determine which stores are most often visited after one another by consumers, which is influenced both by the geographical proximity of the businesses to one another and similar interests among consumers.

“By analyzing this data, stores can be ranked according to the number and volume of transactions carried out there, then divergence measurements can be taken to identify changes in these rankings,” explains the researcher. The divergence measurements focus on differences in stores’ rankings at the time of the El Niño phenomenon compared to the original distribution. Such differences can also be seen during festive events, when there is a greater number of transactions in certain kinds of stores. “We therefore categorized stores based on the variation in their ranking during the El Niño phenomenon,” says Vincent Gauthier.
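A minimal sketch of that idea might look as follows, with invented volumes standing in for the real transaction counts and a simple rank shift standing in for the study’s divergence measures:

```python
# Rank store categories by transaction volume in a baseline period and during
# the event, then measure how far each category moved in the ranking.
# Volumes are invented; rank shift stands in for the study's divergence measures.

baseline = {"supermarket": 900, "restaurant": 700, "clothing": 400, "medical": 200}
event    = {"supermarket": 800, "restaurant": 250, "clothing": 100, "medical": 450}

def ranking(volumes):
    """Map each category to its rank (0 = highest volume)."""
    ordered = sorted(volumes, key=volumes.get, reverse=True)
    return {category: rank for rank, category in enumerate(ordered)}

base_rank, event_rank = ranking(baseline), ranking(event)
for category in baseline:
    shift = event_rank[category] - base_rank[category]
    print(f"{category}: rank {base_rank[category]} -> {event_rank[category]} ({shift:+d})")

# Restaurants drop while medical supplies rise -- the signature reported
# during the El Niño episodes -- and supermarkets stay stable.
```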

This ranking-based approach allows researchers to create a profile of various stores over time, so that they can see how rankings vary during events. For example, the ranking of restaurants fell sharply during the short periods corresponding to times of heavy rainfall, while the ranking of stores selling medical supplies increased for a relatively long period of time. Supermarkets were the type of store whose rankings were generally the most stable.

Better preparing for crises

“Future climate change will lead to an increase in extreme phenomena. Community resilience to these events will become an important issue to understand,” says Vincent Gauthier. The research carried out in relation to El Niño offers insights into community preparedness. It provides valuable knowledge for regions that are not used to dealing with extreme climate events, but that may have to face them in the years to come.

“That would make it possible to identify what services to develop and logistics to put in place in order to mitigate the effects of future crises, by organizing supply and inventory as well as keeping essential services open during crises. For example, we observed serious gasoline supply problems, even though the demand for this product was high during the crisis and in its aftermath, and significant delays in consumption in geographic areas that were less exposed to the crisis,” says the researcher.

Beyond the climate issue, the wider issue of preparedness and resilience to crisis was studied. Understanding how the consumption network varies, what parts must be strengthened, or on the other hand, what parts are secondary, makes it possible to better focus efforts in an exceptional situation. The study of the current health crisis is a part of this work. “We’re studying the effects of the Covid-19 pandemic on the behavior of the Peruvian people, by analyzing consumption data as well as mobility data.” The analysis of mobility patterns could have a major impact on decisions to make in the event of a lockdown. “The methodology for the Covid-19 health crisis will be a bit different, since the impact will be measured over a longer term, unlike the crisis caused by El Niño, where the underlying processes were essentially transitional,” concludes Vincent Gauthier.

Antonin Counillon


Pam Tim, Examin, Cylensee and Possible supported through honor loan program

The members of the IMT Digital Fund, IGEU, IMT and Fondation Mines-Télécom met on 6 April. On this occasion, four start-ups developed through incubators at IMT Atlantique, Télécom Paris, Télécom SudParis and Institut Mines-Télécom Business School obtained 8 honor loans for a total of €160,000.


Cylensee (IMT Atlantique incubator) develops and produces connected electrochromic contact lenses for the general public. These contact lenses have a feature that allows users to change the color of their iris almost instantly at their convenience. Activated by a remote control or via a smartphone, these lenses allow users to change their eye color with just one click, whether to stand out from the crowd, try out a new look, make an impression or just for fun.
Two honor loans of €20,000 each.


The Examin platform (Télécom Paris Novation Center) is a regulatory and technical compliance management solution for companies with a focus on cybersecurity and data protection. Using a collaborative and scalable workspace, customers benefit from continuous reporting on their compliance or that of their suppliers and can easily involve employees in their actions to reduce compliance risks.
Two honor loans of €20,000 each. Learn more

Pam Tim (Télécom Paris Novation Center) specializes in the well-being of children aged 3-6 by providing them with an opportunity to intuitively learn the spatial and temporal reference points that structure the day, using a watch without numbers or hands! This life assistant for children relies on a patented display of combinations of pictograms (the subject of a PhD thesis) depicting key moments throughout the day. This connected watch also gives parents peace of mind, as it allows them to anticipate domestic or nearby risks their children may encounter at any moment through a very low-power Bluetooth® geofencing solution.
Two honor loans of €20,000 each. Learn more


Possible (IMT Starter, the Télécom SudParis and IMT-BS incubator) is a project that encourages circular, environmentally friendly, zero-waste, ethical fashion. Possible is a BtoC platform for renting clothes and accessories based on a monthly subscription. The subscription allows users to rent a selection of several pieces by brands that promote ethical and responsible practices, for a set cost. This project addresses the question: how can individuals enjoy an unlimited wardrobe on a limited budget and in an environmentally friendly way?
Two honor loans of €20,000 each. Learn more


Digital simulation: applications, from medicine to energy

At Mines Saint-Étienne, Yann Gavet uses image simulation to study the characteristics of an object. This method is more economical in terms of time and cost, and eliminates the need for experimental measurements. This field, at the intersection of mathematics, computer science and algorithms, is used for a variety of applications ranging from the medical sector to the study of materials.

What do a human cornea and a fuel cell electrode have in common? Yann Gavet, a researcher in applied mathematics at Mines Saint-Étienne [1], is able to model these two objects as 2D or 3D images in order to study their characteristics. To do this, he uses a method based on random fields. “This approach consists in generating a synthetic image representing a surface or a random volume, i.e. whose properties will vary from one point to another across the plane or space,” explains the researcher. In the case of a cornea, for example, this means visualizing an assembly of cells whose density differs according to whether we look at the center or the edge. The researcher’s objective? To create simulations with properties as close as possible to reality.
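As a minimal illustration of the principle (not the researcher’s actual model), a non-stationary random field can be produced by spatially smoothing white noise and letting its amplitude vary from the center of the plane to the edge:

```python
# Minimal sketch of a non-stationary random field: spatially smoothed white
# noise whose amplitude grows from the center of the plane to the edge, the
# way cell density varies across a cornea. Illustrative only -- not the
# researcher's actual model.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(seed=0)
n = 256
noise = gaussian_filter(rng.standard_normal((n, n)), sigma=3)  # spatial correlation

# Radial modulation: the field's properties vary from one point to another.
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2) / (n / 2)   # 0 at center, ~1 at the edge
field = noise * (0.5 + r)                      # stronger fluctuations near the edge

print(field.shape, round(float(field.std()), 3))
```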

Synthetic models and detecting corneal disorders

The density of the cells that make up our cornea – the transparent part at the front of the eye – and its endothelium provides information about its health. To perform these analyses, automatic cell detection and counting algorithms have been developed using deep neural networks. Training them thus requires access to large databases of corneas. The problem is that these do not exist in sufficient quantity. “However, we have shown that it is possible to perform the training process using synthetic images, i.e. simulated by models,” says Yann Gavet.

How does it work? Using deep learning, the researcher creates graphical simulations based on key criteria: size, shape, cell density or the number of neighboring cells. He is able to simulate cell arrangements, as well as complete and realistic images of corneas. However, he wants to combine the two. Indeed, this step is essential for the creation of image databases that will allow us to train the algorithms. He focuses in particular on the realism of the simulation results in terms of cell geometry, gray levels and the “natural” variability of the observations.

Although he demonstrated that training using synthetic corneal data does not require perfectly realistic representations to perform well, improving accuracy will be useful for other applications. “As a matter of fact, we transpose this method to the simulation of material arrangements that compose fuel cell electrodes, which requires more precision,” explains the researcher.

Simulating the impact of microstructures on the performance of a fuel cell

The microstructure of fuel cell electrodes impacts the performance and durability of solid oxide cells. In order to improve these parameters, researchers want to identify the ideal arrangement of the materials that make up the electrodes, i.e. how they should be distributed and organized. To do this, they play with the “basic” geometry of an electrode: its porosity and its material particle size distribution. These are the morphological parameters that manufacturers can act on when designing the electrodes.

To identify the best performing structures, one method would be to build and test a multitude of configurations. This is an expensive and time-consuming practice. The other approach is based on the simulation and optimization of a large number of configurations. Subsequently, a second group of models simulating the physics of a cell can in turn identify which structures best impact the cell’s performance.

The advantage of the simulations is that they target specific areas within the electrodes to better understand their operation and their overall impact on the cell. For example: exchange zones such as “triple phase” points where ionic, electronic and gaseous phases meet, or exchanges between material surfaces. “Our model allows us to evaluate the best configuration, but also to identify the associated manufacturing process that offers the best energy efficiency for the cell,” says Yann Gavet.

In the medium term, the researcher wishes to continue his work on a model whose dimensions are similar to the observations made in X-ray tomography. An algorithmic challenge that will require more computing time, but will also lead to results that are closer to the reality of the field.

[1] Yann Gavet is a researcher at the Georges Friedel laboratory, UMR CNRS/Mines Saint-Étienne

Anaïs Culot


SONATA: an approach to make data sound better

Telecommunications must transport data at an ever-faster pace to meet the needs of current technologies. But this data can be voluminous and difficult to transport at times. Communication channels are congested and transmission limits are reached quickly. Marios Kountouris, a telecommunications researcher at EURECOM, has recently received ERC funding to launch his SONATA project. It aims to shift the paradigm for processing information to speed up its transmission and make future networks more efficient.

“We are close to the fundamental limit for transmitting data from one point to another,” explains Marios Kountouris, a telecommunications researcher at EURECOM. Most of the current research in this discipline focuses on how to organize complex networks and on improving the algorithms that optimize these networks. Few projects, however, focus on improving the transfer of data between transmitters and receivers. This is precisely the focus of Marios Kountouris’ SONATA project, funded by a European ERC consolidator grant.

“Telecommunications are generally based on Shannon’s information theory, which was established in the 1950s,” says the researcher. In this theory, a transmitter simply sends information through a transmission channel, which carries it to a receiver that then reconstructs it. The main obstacle to get around is the noise accompanying the signal when it passes through the transmission channel. This constraint can be overcome by algorithm-based signal processing and by increasing throughput. “This usually takes place in the same way, regardless of the message being transmitted. Back in the early days, and until recently, this was the right approach,” says the researcher.

Read more on I’MTech: Claude Shannon, a legacy transcending digital technology

Transmission speed for real-time communication

Today, there is an increasing amount of communication between machines that reason in milliseconds. “Certain messages must be transmitted quickly or they’re useless,” says Marios Kountouris. For example, in the development of autonomous cars, if the message collected relates to the detection of a pedestrian on the road so as to make the vehicle brake, it is only useful for a very short period of time. “This is what we call the age, or freshness of information, which is a very important parameter in some cases,” explains Marios Kountouris.

Yet most transmission and reconstruction is slowed down by surplus information accompanying the message. In the previous example, if the system for detecting pedestrians is a camera that captures images with details about all the surrounding objects, a great deal of the information in the transmission and processing will not contribute to the system’s purpose. For the researcher, “the sampling, transmission and reconstruction of the message must no longer be carried out independently of one another. If excess, redundant or useless data accompanies this process, there can be communication bottlenecks and security problems.”

The semantics of messages

For real-time communication, the semantics of the message — its meaning and usefulness — take on particular importance. Semantics make it possible to take into account the attributes of the message and adjust the format of its transmission depending on its purpose. For example, if a temperature sensor is meant to activate the heating system automatically when the room temperature is below 18°C, the attribute of the transmitted message is simply a binary breakdown of temperature: above or below 18°C.
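The thermostat example can be written out as a toy sketch of the idea (not SONATA itself): the receiver only needs one semantic attribute, so a single bit can replace the full sensor reading on the channel.

```python
# The thermostat example as code: the receiver only needs one semantic
# attribute (is the room below 18°C?), so a single bit can replace the full
# sensor reading on the channel.

THRESHOLD_C = 18.0

def semantic_encode(temperature_c: float) -> int:
    """Keep only the attribute that matters to the receiver: 1 = heat."""
    return 1 if temperature_c < THRESHOLD_C else 0

def heating_controller(bit: int) -> str:
    return "heating ON" if bit else "heating OFF"

for reading in (16.2, 19.7):
    bit = semantic_encode(reading)  # 1 bit sent instead of a 32-bit float
    print(f"{reading}°C -> {heating_controller(bit)}")
```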

Through the SONATA project, Marios Kountouris seeks to develop a new communication paradigm that takes the semantic value of information into account. This would make it possible to synchronize different types of information collected at the same time through various samples and make more optimal decisions. It would also significantly reduce the volume of transported data as well as the associated energy and resources required.

“The success of this project depends on establishing semantic metrics that are concrete, informative and traceable,” explains the researcher. Establishing the semantics of a message means preprocessing sampling by the transmitter depending on how it is used by the receiver. The aim is therefore to identify the most important, meaningful or useful information in order to determine the qualifying attributes of the message. “Various semantic attributes can be taken into account to obtain a conformal representation of the information, but they must be determined in advance, and we have to be careful not to implement too many attributes at once,” he says.

The goal, then, is to build communication networks with key stages for processing the semantics associated with information. First, semantic filters must be used to avoid unnecessary redundancy when collecting information. Then, semantic preprocessing must be carried out in order to associate the data with its purposes. Signal reconstruction by the receiver would also be adapted to its purposes. All this would be semantically-controlled, making it possible to orchestrate the information collected in an agile way and reuse it efficiently, which is especially important when networks become more complex.

This is a new approach from a structural perspective and would help create links between communication theory, sampling and optimal decision-making. ERC consolidator grants fund high-risk, high-reward projects that aim to revolutionize a field, which is why SONATA has received this funding. “The sonata was the most sophisticated form of classical music and was pivotal to its development. I hope that SONATA will be a major step forward in telecommunications optimization,” concludes Marios Kountouris.

By Antonin Counillon

Reconnecting data to the sensory world of human beings: a challenge for industry 4.0 already taken up by SMEs

Gérard Dubey, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, Conservatoire national des arts et métiers (CNAM)

Given the magnitude of uncertainty and risk of disruption threatening the economic and social order, the digitization of productive activities is often presented as a panacea.

Whether it’s a question of industrial production, creating new jobs or reclaiming lost productivity, the narrative supporting industry 4.0 focuses only on the seemingly infinite potential of digital solutions.

Companies that are considered to be active in the digital sector are upheld as trailblazers that will drive recovery. The Covid crisis has only accentuated this trend, which had already appeared in “industry of the future” programs.

Automated data capture downstream in the production process (with cameras, sensors, and information extraction at each workstation) and its algorithmic processing upstream (big data, data science) hold the promise of “agile” management (precise, flexible, personalized) in real production time – something every industrial process strives for.

Nevertheless, this digital transformation seems to have forgotten two key facts: a company is first and foremost a group of human beings that cannot be reduced to numerical targets or abstract productivity criteria. And more importantly for industry, the relationship with the material is still a crucial dimension, which unites work teams and gives them meaning.

As such, there is something of a disconnect – which is only growing – between the stated ambitions of major industrial players and the realities on the ground.

The relationship with the material at industrial SMEs

From this perspective, although their role in (incremental) innovation is all too often overlooked and poorly understood, industrial SMEs have a lot to teach us. This is mainly due to the specific relationships they continue to maintain with the material, understood as a reality concerned as much with human aspects (gestures, experiential knowledge, sensory knowledge) as with physical, measurable ones. As they are rooted in their local communities and have withstood the test of time, they are accustomed to developing, arranging and organizing heterogeneous expertise and modes of intelligence about reality.

The surveys conducted in many industrial SMEs by a multidisciplinary research team show how important this relationship is to their directors. This can be seen in a number of aspects and affirms that their decisions are rooted in the reality of the situation.

When the CEO of Maison Caron digitized its site and moved to Saclay in 2019, she did not do away with the “old” coffee roaster from the 1950s. Coffee roasting may be rooted in reality and the senses, but the magic of aromas happens because the nose, eyes and even ears know how to control it – traditional know-how passed down through her family that she now shares with some employees of the company.

At Guilbert Express, another family business that makes high-end welding equipment, the director has observed a progressive loss of know-how in France, following the strategy to offshore export-oriented production in recent years. By going digital, he hopes to unite scattered work teams based on a shared, intercultural experience.

At Avignon Céramic, a company in Berry that makes ceramic cores for the aeronautical industry, quality comes down to daily interactions with the material. And this material – inherently unstable, unpredictable, a source of variability and uncertainty, almost “living” and virtually independent – in turn requires know-how that is itself living, precise and agile, to make a final product that is an acceptable part for the supply chain of major manufacturers.

In industry, human expertise makes it possible to better understand the material.

This is particularly apparent in Opé40, one of the key steps of the quality processes implemented to identify defects in the ceramic cores. This visual and tactile inspection identifies infinitesimal details and requires extensive expertise. But this step is also decisive in establishing collective knowledge and building a work team: while some employees are responsible for detecting defects, everyone works together to use these traces to discover the meaning, similar to a police investigation.

It is through this relationship with material that the work community is brought together. From this perspective, SMEs appear to possess what may be one of the best-kept industrial secrets: how human beings and material contribute to a shared transformation process.

While traceability and numerical data analysis systems play a growing role in the organization of work by companies seeking to harness this human expertise of the material – which is sometimes passed down through generations – the challenge is to integrate these transformations without giving up this culture.   

Humans – the key to adaptation

The director of AQLE, a company located in the North of France that specializes in electronic assemblies, raises questions about the risks posed by loss of meaning among employees if part of their activity is carried out by digital technology. To what extent is it possible to eliminate movements that are considered to be tedious without ultimately affecting the activity in its entirety – meaning developing, maintaining and acquiring expertise (training, learning, ways of passing it down)?

Similarly, the generational gap observed in the use of digital technology is often highlighted (in documents encouraging this transformation) to express the idea that younger employees could become mentors for older employees and act as intermediaries for the digital transformation of a company. But once again, the problem is more interesting and complicated than that.

Training only the oldest employees is not enough to ensure a successful digital transformation.

First of all, there is a need to develop new relationships and balance between the concrete (sensory, manual etc.) and digital world. From this perspective, the archaic/innovative dichotomy (often echoed in the cognitive/manual one) appears to be futile. It is the handing over of practices that matters, and not the “disruptive” approach, which more often than not results in approaches that are out of step with realities on the ground. The entire purpose of digital technology is precisely to urge us to question our forms of attachment to work.

One of the challenges of a successful “digital transition” will undoubtedly be to manage to combine or reconcile these different ways of acting on reality in a complementary manner – rather than through an either/or approach. It must be accepted in advance that the information obtained by one method or another is of a different nature. Digital processing of data cannot replace knowledge of the material, which relies on humans’ propensity to sense that which, like themselves, is living, fragile and impermanent.  

Humans’ familiarity with living material, far from being obsolete, may well be one of the keys to adapting to the upheavals taking place and those yet to come. The Covid crisis has shattered certainties and upended strategies. The time has come to remember that human expertise, and the collective memory on which it is founded, are not merely variables to be adjusted, but the very condition for agility, which is increasingly required in a globalized economy marked by uncertainty.  

Gérard Dubey, Sociologist, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, PhD student in ergonomics, Conservatoire national des arts et métiers (CNAM)

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).


Making algorithms understand what we are talking about

Human language contains different types of information. We understand it all unconsciously, but explaining it systematically is much more difficult. The same is true for machines. The NoRDF Project Chair “Modeling and Extracting Complex Information from Natural Language Text” seeks to solve this problem: how can we teach algorithms to model and extract complex information from language? Fabian Suchanek and Chloé Clavel, both researchers at Télécom Paris, explain the approaches of this new project.

What aspects of language are involved in making machines understand?

Fabian Suchanek: We need to make them understand more complicated natural language texts. Current systems can understand simple statements. For example, the sentence “A vaccine against Covid-19 has been developed” is simple enough to be understood by algorithms. On the other hand, they cannot understand sentences that go beyond a single statement, such as: “If the vaccine is distributed, the Covid-19 epidemic will end in 2021.” In this case, the machine does not understand that the condition required for the Covid-19 epidemic to end in 2021 is that the vaccine is distributed. We also need to make machines understand what emotions and feelings are associated with language; this is Chloé Clavel’s specialist area.
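To illustrate the gap (a deliberately simplified sketch, not the chair’s actual formalism): a simple statement fits a single subject-predicate-object triple, while the conditional sentence relates two whole statements, which a flat triple cannot express.

```python
# A simple statement fits one subject-predicate-object triple, which current
# extraction systems handle well:
simple = ("Covid19Vaccine", "hasStatus", "developed")

# The conditional sentence relates two whole statements, so a flat triple no
# longer suffices. One workaround (a simplified sketch) is to name each
# sub-statement and assert a relation between the names:
statements = {
    "s1": ("Covid19Vaccine", "is", "distributed"),
    "s2": ("Covid19Epidemic", "endsIn", "2021"),
}
logical_form = ("s1", "implies", "s2")  # the structure the machine must capture

print(simple)
print(statements, logical_form)
```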

What are the preferred approaches in making algorithms understand natural language?

FS: We are developing “neurosymbolic” approaches, which seek to combine symbolic approaches with deep learning approaches. Symbolic approaches use human-implemented logical rules that simulate human reasoning. For the type of data we process, it is fundamental to be able to interpret what has been understood by the machine afterwards. Deep learning is a type of automatic learning where the machine is able to learn by itself. This allows for greater flexibility in handling variable data and the ability to integrate more layers of reasoning.

Where does the data you analyze come from?

FS: We can collect data from humans’ interactions with company chatbots, especially those of the project’s partner companies. We can also extract data from comments on web pages, forums and social networks.

Chloé Clavel: We can also extract information about feelings, emotions, social attitudes, especially in dialogues between humans or humans with machines.

Read on I’MTech: Robots teaching assistants

What are the main difficulties for the machine in learning to process language?

CC: We have to create models that are robust in changing contexts and situations. For example, there may be language variability in the expression of feelings from one individual to another, meaning that the same feelings may be expressed in very different words depending on the person. There is also a variability of contexts to be taken into account. For example, when humans interact with a virtual agent, they will not behave in the same way as with a human, so it is difficult to compare data from these different sources of interactions. Yet, if we want to move towards more fluid and natural human-agent interactions, we must draw inspiration from the interactions between humans.

How do you know whether the machine is correctly analyzing the emotions associated with a statement?

CC: The majority of the methods we use are supervised. The data entered into the models are annotated in the most objective way possible by humans. The goal is to ask several annotators to annotate the emotion they perceive in a text, as the perception of an emotion can be very subjective. The model is then trained on the data for which a consensus among the annotators could be found. To test the model’s performance, we inject an annotated text into a model that has been trained with similar texts and see whether the annotation it produces is close to those determined by humans.
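A minimal sketch of such a consensus filter might look as follows (the texts, labels and agreement threshold are invented for illustration):

```python
# Keep only the texts on which human annotators agree before training.
# A common consensus filter; the texts and labels are invented examples.
from collections import Counter

annotations = {
    "I can't believe this happened again!": ["anger", "anger", "surprise"],
    "Thanks, that really helped.":          ["joy", "joy", "joy"],
    "Well, that's one way to do it.":       ["irony", "neutral", "anger"],
}

MIN_AGREEMENT = 2 / 3  # at least two annotators out of three must agree

training_set = []
for text, labels in annotations.items():
    label, count = Counter(labels).most_common(1)[0]
    if count / len(labels) >= MIN_AGREEMENT:
        training_set.append((text, label))   # consensus reached
    # otherwise the example is dropped as too subjective to learn from

print(training_set)
# [("I can't believe this happened again!", 'anger'),
#  ('Thanks, that really helped.', 'joy')]
```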

Since the annotation of emotions is particularly subjective, it is important to determine how the model actually understood the emotions and feelings present in the text. There are many biases in the representativeness of the data that can interfere with the model and mislead us on the interpretation made by the machine. For example, if we assume that younger people are angrier than older people in our data and that these two categories do not express themselves in the same way, then it is possible that the model may end up simply detecting the age of the individuals and not the anger associated with the comments.

Is it possible that the algorithms end up adapting their speech according to perceived emotions?

CC: Research is being conducted on this aspect. Chatbots’ algorithms must be relevant in solving the problems they are asked to solve, but they must also be able to provide a socially relevant response (e.g. to the user’s frustration or dissatisfaction). These developments will improve a range of applications, from customer relations to educational or support robots.

What contemporary social issues are associated with the understanding of human language by machines?

FS: This would notably allow a better understanding of the perception of news on social media by humans, the functioning of fake news, and therefore in general which social group is sensitive to which type of discourse and why. The underlying reasons why different individuals adhere to different types of discourse are still poorly understood today. In addition to the emotional aspect, there are different ways of thinking that are built in argumentative bubbles that do not communicate with each other.

In order to be able to automate the understanding of human language and exploit the numerous data associated with it, it is therefore important to take as many dimensions into account as possible, such as the purely logical aspect of what is said in sentences and the analysis of the emotions and feelings that accompany them.

By Antonin Counillon


Innovation in health: towards responsibility

Digital innovations are paving the way for more accurate predictive medicine and a more resilient healthcare system. In order to establish themselves on the market and reduce their potential negative effects, these technologies must be responsible. Christine Balagué, a researcher in digital ethics at Institut Mines-Télécom Business School, presents the risks associated with innovations in the health sector and ways to avoid them.

“Until now, society has approached technology development without looking at the environmental and social impacts of the digital innovations produced. The time has come to do something about this, especially when it comes to human lives in the health sector,” says Christine Balagué, a researcher at Institut Mines-Télécom Business School and co-holder of the Good in Tech Chair [1]. From databases and artificial intelligence for detecting and treating rare diseases, to connected objects for monitoring patients: the rapid emergence of tools for prediction, diagnosis and also business organization is making major changes in the healthcare sector. Similarly, the goal of a smarter hospital of the future is set to radically change the healthcare systems we know today. The focus is on building on medical knowledge, advancing medical research, and improving care.

However, for Christine Balagué, a distinction must be made between the notion of “tech for good” – which consists of developing systems for the benefit of society – and “good in tech”. She says “an innovation, however benevolent it may be, is not necessarily devoid of bias and negative effects. It’s important not to stop at the positive impacts but to also measure the potential negative effects in order to eliminate them.” The time has come for responsible innovation. In this sense, the Good in Tech chair, dedicated to responsibility and ethics in digital innovations and artificial intelligence, aims to measure the still underestimated environmental and societal impacts of technologies on various sectors, including health.

Digital innovations: what are the risks for healthcare systems?

In healthcare, it is clear: an algorithm that cannot be explained is unlikely to be commercialized, even if it is efficient. Indeed, the potential risks are too critical when human lives are at stake. However, a study published in 2019 in the journal Science on the use of commercial algorithms in the U.S. health care system demonstrated the presence of racial bias in the results of these tools. This discrimination between patients, or between different geographical areas, therefore gives rise to an initial risk of unequal access to care. “The more automated data processing becomes, the more inequalities are created,” says Christine Balagué. However, machine learning is increasingly being used in the solutions offered to healthcare professionals.

For example, French start-ups such as Aiintense, incubated at IMT Starter, and BrainTale use it for diagnostic purposes. Aiintense is developing decision support tools for all pathologies encountered in intensive care units. BrainTale is looking at the quantification of brain lesions. These two examples raise the question of possible discrimination by algorithms. “These cases are interesting because they are based on work carried out by researchers and have been recognized internationally by the scientific peer community, but they use deep learning models whose results are not entirely explainable. This therefore hinders their application by intensive care units, which need to understand how these algorithms work before making major decisions about patients,” says the researcher.

Furthermore, genome sequencing algorithms raise questions about the relationship between doctors and their patients. Indeed, the limitations of the algorithm, the presence of false positives or false negatives are rarely presented to patients. In some cases, this may lead to the implementation of unsuitable treatments or operations. It is also possible that an algorithm may be biased by the opinion of its designer. Finally, unconscious biases associated with the processing of data by humans can also lead to inequalities. Artificial intelligence in particular thus raises many ethical questions about its use in the healthcare setting.

What do we mean by a “responsible innovation”? It is not just a question of complying with data processing laws and improving the health care professional’s way of working. “We must go further. This is why we want to measure two criteria in new technologies: their environmental impact and their societal impact, distinguishing between the potential positive and negative effects for each. Innovations should then be developed according to predefined criteria aimed at limiting their negative effects,” says Christine Balagué.

Changing the way innovations are designed

Responsibility is not simply a layer of processing that can be added to an existing technology. Thinking about responsible innovation implies, on the contrary, changing the very manner in which innovations are designed. So how do we ensure they are responsible? Scientists are looking for precise indicators that could result in a “to do list” of criteria to be verified. This starts with the analysis of the data used for learning, but also involves studying the interface developed for the users, and the architecture of the neural network, which can potentially generate bias. On the other hand, existing environmental criteria must be refined by taking into account the design chain of a connected object and the energy consumption of the algorithms. “The criteria identified could be integrated into corporate social responsibility in order to measure changes over time,” says Christine Balagué.

In the framework of the Good In Tech chair, several research projects, including a thesis, are being carried out on our capacity to explain algorithms. Among them, Christine Balagué and Nesma Houmani (a researcher at Télécom SudParis) are interested in algorithms for electroencephalography (EEG) analysis. Their objective is to ensure that the tools use interfaces that can be explained to health care professionals, the future users of the system. “Our interviews show that explaining how an algorithm works to users is often something that designers aren’t interested in, and that making it explicit would be a source of change in the decision-making process,” says the researcher. The ability to explain and interpret results are therefore two key words guiding responsible innovation.

Ultimately, the researchers have identified four principles that an innovation in healthcare must follow. The first is anticipation in order to measure the potential benefits and risks upstream of the development phase. Then, a reflexive approach allows the designer to limit the negative effects and to integrate into the system itself an interface to explain how the technological innovation works to physicians. It must also be inclusive, i.e. reaching all patients throughout the territory. Finally, responsive innovation facilitates rapid adaptation to the changing context of healthcare systems. Christine Balagué concludes: “Our work shows that taking into account ethical criteria does not reduce the performance of algorithms. On the contrary, taking into account issues of responsibility helps to promote the acceptance of an innovation on the market”.

[1] The Chair is supported by the Institut Mines-Télécom Business School, the School of Management and Innovation at Sciences Po, and the Fondation du Risque, in partnership with Télécom Paris and Télécom SudParis.

Anaïs Culot



A standardized protocol to respond to the challenges of the IoT

The arrival of 5G has put the Internet of Things back in the spotlight, with the promise of an influx of connected objects in both the professional and private spheres. However, before witnessing the projected revolution, several obstacles remain. This is precisely what researchers at IMT Atlantique are working on, and they have already achieved results of global significance.

The Internet of Things (IoT) refers to the interconnection of various physical devices via the Internet for the purpose of sharing data. Sometimes referred to as the “Web 3.0”, this field is set to develop rapidly in the coming years, thanks to the arrival of new networks, such as 5G, and the proliferation of connected objects. Its applications are infinite: monitoring of health data, the connected home, autonomous cars, real-time and predictive maintenance on industrial devices, and more.

Although it is booming, the IoT still faces major challenges. “We need to respond to three main constraints: energy efficiency, interoperability and security,” explains Laurent Toutain, a researcher at IMT Atlantique. But there is one problem: these three aspects can be difficult to combine.

The three pillars of the IoT

First, energy is a key issue for the IoT. For most connected objects, the autonomy of a smartphone is not sufficient. In the future, a household may have several dozen such devices. If they each need to be recharged every two or three days, the user will have to devote several hours to this task. And what about factories that could be equipped with thousands of connected objects? In some cases, these are only of value if they have a long battery life. For example, a sensor could be used to monitor the presence of a fire extinguisher at its location and send an alert if it does not detect one. If you have to recharge its battery regularly, such an installation is no longer useful.

For a connected object, communication features account for the largest share of energy consumption. Thus, the development of IoT has been made possible by the implementation of networks, such as LoRa or Sigfox, allowing data to be sent while consuming little energy.

The second issue is interoperability, i.e. the ability of a product to work with other objects and systems, both current and future. Today, many manufacturers still rely on proprietary universes, which necessarily limits the functionalities offered by the IoT. Take the example of a user who has bought connected light bulbs from two different brands. They will not be able to control them via a single application.

Finally, the notion of security remains paramount within any connected system. This observation is all the more valid in the IoT, especially with applications involving the exchange of sensitive data, such as in the health sector. There are indeed many risks. An ill-intentioned user could intercept data during transmission, or send false information to connected objects, thus inducing wrong instructions, with potentially disastrous consequences.

Read more on I’MTech: The IoT needs dedicated security – now

On the Internet, methods are already in place to limit these threats. The most common is end-to-end data encryption. Its purpose is to make information unreadable while it is being transported, since the content can only be deciphered by the sender and receiver of the message.

Three contradictory requirements?

Unfortunately, each of the three characteristics can influence the others. For example, by multiplying the number of possible interlocutors, interoperability raises more security issues. But it also affects energy consumption. “Today, the Internet is a model of interoperability,” explains Laurent Toutain. “For this, it is necessary to send a large amount of information each time, with a high degree of redundancy. It offers remarkable flexibility, but it also takes up a lot of space.” This is only a minor disadvantage for a broadband network, but not for the IoT, which is constrained in its energy consumption.

Similarly, if you want to have a secure system, there are two main possibilities. The first is to close it off from the rest of the ecosystem, in order to reduce risks, which radically limits interoperability.

The second is to implement security measures, such as end-to-end encryption, which results in more data being sent, and therefore increased energy consumption.

Reducing the amount of data sent, without compromising security

For about seven years, Laurent Toutain and his teams have been working to reconcile these different constraints in the context of the IoT. “The idea is to build on what makes the current Internet so successful and adapt it to constrained environments,” says the researcher. “We are therefore taking up the principles of the encryption methods and protocols used today, such as HTTP, but taking into account the specific requirements of the IoT.”

The research team has developed a compression mechanism named SCHC (Static Context Header Compression, pronounced “chic”). It aims to improve the efficiency of encryption solutions and provide interoperability in low-power networks.

For this purpose, SCHC works on the headers of the usual Internet protocols (IP, UDP and CoAP), which contain various details: source address, destination address, location of the data to be read, etc. The particularity of this method is that it takes advantage of the specificity of the IoT: a simple connected object, such as a sensor, has far fewer functions than a smartphone. It is then possible to anticipate the type of data sent. “We can thus free ourselves from the redundancy of classic exchanges on the web,” says Laurent Toutain. “We then lose flexibility, which could be inconvenient for standard Internet use, but not for a sensor, which is limited in its applications.”

In this way, the team at IMT Atlantique has achieved significant results. It has managed to reduce the size of the headers traditionally sent, weighing 70-80 bytes, to only 2 bytes, and to 10 bytes in their encrypted version. “A quantity that is perfectly acceptable for a connected object and compatible with network architectures that consume very little energy,” concludes the researcher.
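As a rough illustration of the static-context principle (a toy sketch, not the actual SCHC rule format or the IMT Atlantique implementation), both ends can share a rule describing the predictable header fields, so that only a rule ID and the few varying values travel over the air:

```python
# Toy illustration of static-context header compression: both ends share a
# rule describing the predictable header fields, so only a rule ID plus the
# few varying values travel over the air. Not the actual SCHC rule format.

RULES = {  # the shared "static context", provisioned on device and network
    1: {"src": "2001:db8::1", "dst": "2001:db8::42", "udp_dst": 5683},
}

def compress(header: dict) -> tuple[int, dict]:
    for rule_id, known in RULES.items():
        if all(header.get(field) == value for field, value in known.items()):
            residue = {f: v for f, v in header.items() if f not in known}
            return rule_id, residue  # ~1 byte of rule ID + a tiny residue
    raise ValueError("no matching rule: send the header uncompressed")

def decompress(rule_id: int, residue: dict) -> dict:
    return {**RULES[rule_id], **residue}  # rebuild the full header

packet = {"src": "2001:db8::1", "dst": "2001:db8::42",
          "udp_dst": 5683, "udp_src": 40000}
rule_id, residue = compress(packet)
assert decompress(rule_id, residue) == packet
print(rule_id, residue)  # 1 {'udp_src': 40000}
```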

A protocol approved by the IETF

But what about that precious interoperability? With this objective, the authors of the study approached the IETF (Internet Engineering Task Force), the international organization for Internet standards. The collaboration has paid off, as SCHC has been approved by the organization and now serves as the global standard for compression. This recognition is essential, but is only a first step towards effective interoperability. How can we now make sure that manufacturers really integrate the protocol into their connected objects? For this, Laurent Toutain has partnered with Alexander Pelov, also a researcher at IMT Atlantique, to found the start-up Acklio. The company works directly with manufacturers and offers them solutions to integrate SCHC in their products. It thus intends to accelerate the democratization of the protocol, an effort supported in particular by €2 million in funds raised at the end of 2019.

Read more on I’MTech: Acklio: linking connected objects to the Internet

Nevertheless, manufacturers still need to be convinced that using a standard is also in their interest. To this end, Acklio also aims to position SCHC among the protocols used within 5G. To achieve this, it will have to prove itself with the 3GPP (3rd Generation Partnership Project), which brings together the world’s leading telecommunications standards bodies. “A much more constraining process than that of the IETF,” warns Laurent Toutain, however.

Bastien Contreras