MAESTRIA stroke project

A European consortium for early detection of stroke and atrial fibrillation

The European project MAESTRIA, launched in March 2021 and set to run for five years, will take on the major challenges of data integration and personalized medicine with the aim of preventing heart rhythm problems and stroke. How? By using artificial intelligence approaches to create multi-parametric digital tools. Led by Sorbonne University and funded by the European Union to the tune of €14 million, the project brings together European, English, Canadian and American partners. An interview with Anne-Sophie Taillandier, Director of TeraLab, IMT’s Big Data and AI platform, which is a member of the consortium.

In what health context was MAESTRIA developed?

Anne-Sophie Taillandier – Atrial fibrillation (AF), a heart rhythm disorder, and stroke are major health problems in Europe. Most often, they are the clinical expression of atrial cardiomyopathy, which is under-recognized due to a lack of specific diagnostic tools.

What is the aim of MAESTRIA?

AST – MAESTRIA (for Machine Learning Artificial Intelligence for Early Detection of Stroke and Atrial Fibrillation) aims to prevent the risks associated with atrial fibrillation in order to ensure healthy ageing in the European population. Multidisciplinary research and stratified approaches (involving adapting a patient’s treatment depending on his/her biological characteristics) are needed to diagnose and treat AF and stroke.

What technologies will be deployed?

AST – “Digital twin” technologies, a powerful data integrator combining biophysics and AI, will be used to generate virtual twins of human heart atria using patient-specific data.

MAESTRIA will create multi-parametric digital tools based on a new generation of biomarkers that integrate artificial intelligence (AI) and big data from cutting-edge imaging, electrocardiography and omics technologies (including physiological responses modulated by individual susceptibility and lifestyle factors). Diagnostic tools and personalized therapies for atrial cardiomyopathy will be developed.

Unique experimental large-animal models, ongoing patient cohorts and a prospective cohort of MAESTRIA patients will provide rigorous validation of the new biomarkers and tools developed. A dedicated central laboratory will collect and harmonize clinical data. MAESTRIA will be organized as a user-centered platform that is easily accessible via clinical parameters commonly used in European hospitals.

What is the role of TeraLab, IMT’s Big Data and AI platform?

AST – The TeraLab team, led by Natalie Cernecka and Luis Pineda, is playing a central role in this project, in three ways. First of all, TeraLab will be involved in making heterogeneous, sensitive health data available for the consortium, while ensuring legal compatibility and security.

Second, TeraLab will build and manage the data hub for the project data, and make this data available to the team of researchers so that they can aggregate and analyze it, and then build a results demonstrator for doctors and patients.

And last but not least, TeraLab will oversee the data management plan or DMP, an essential part of the management of any European project. It is a living document that sets out a plan for managing the data used and generated within the framework of the project. Initiated at the start of the project, this plan is updated periodically to make sure that it is still appropriate in light of how the project is progressing. It is even more necessary when it’s a matter of health data management.

Who are the partners for MAESTRIA?

AST – MAESTRIA is a European consortium of 18 clinicians, scientists and pharmaceutical industry representatives, at the cutting edge of research and medical care for AF and stroke patients. A scientific advisory board including potential clinician users will help MAESTRIA respond to clinical and market needs.

It’s an international project, focused on the EU countries, but certain partners come from England, Canada and the United States. Oxford University, for example, has developed interesting solutions for the processing and aggregation of cardiological data. It is a member of the consortium and we will, of course, be working with its researchers.

We have important French partners such as AP-HP (Assistance Publique-Hôpitaux de Paris, Paris Hospital Authority) involved in data routing and management. The project is coordinated by Sorbonne University.

What are the next big steps for the project?

AST – MAESTRIA has just been launched; the first big step is making the data available and establishing the legal framework.

Because the data used in this project is heterogeneous – hence the importance of aggregating it – we must understand the specific characteristics of each kind of data (human data, animal data, images, medical files etc.) and adapt our workspaces to users. Since this data is sensitive, security and confidentiality challenges are paramount.

Learn more about MAESTRIA

Interview by Véronique Charlet

Data visualization

Understanding data by touching it

Reading and understanding data is not always a simple task. To make it easier, Samuel Huron is developing tools that allow us to handle data physically. The Télécom Paris researcher in data visualization and representation seeks to make complex information understandable to the general public.

Before numbers were used, merchants used clay tokens to perform mathematical operations. These tokens allowed them to represent numerical data in a graphical, physical way, and handle it easily. This kind of token is still used in schools today to help young children become familiar with complex concepts like addition and cardinality. “This very simple tool can open the door to highly complex representations, such as the production of algorithms,” says Samuel Huron, a researcher at Télécom Paris in the fields of data visualization and interactive design.

His work aims to use this kind of simple representation tool to make data understandable to non-experts. “The best way to visualize data is currently programming, but not all of us are computer engineers,” says Samuel Huron. And while providing the entire population with training in programming may be a commendable idea, it is not very realistic. This means that we must trust experts who, despite their best intentions, may provide a subjective interpretation of their observation data.

In an effort to find an alternative, the researcher has taken up the idea of clay tokens. He organizes workshops for people with little or no familiarity with handling data, and proposes using tokens to represent a data set. For example, to represent their monthly budget. Once they have the tokens in their hands, the participants must invent graphical models to represent this data based on what they want to get out of it. “One of the difficult and fundamental things in graphical data analysis is choosing the useful representation for the task, and therefore targeting the visual variables to understand your batch of data,” explains Samuel Huron. “The goal is to teach the participants the concept of visual mapping.”

Video: how does the physical representation of data work?

The visualization is not only intended to represent this data, but to develop the capacity to read and make sense of it. Participants must find a way to structure the data themselves. They are then encouraged to think critically by observing the other productions, in particular to see whether they can be read and understood. “In certain workshops with many different data sets, such as the budget of a student, an employed individual, or a retiree, participants can sometimes identify a similar profile just by looking at the representations of other participants,” adds the researcher.

Citizen empowerment 

This transmission method poses real challenges for democracy in our era of digitization of knowledge and the rise of data. To understand the important issues of today and respond to the major challenges we face, we must first understand the data from various fields.  Whether related to budgets, percentage of votes, home energy consumption, or the daily number of Covid-19 cases, all of this knowledge and information is provided in the form of data, either raw or processed to some extent. And to avoid dealing with abstract figures and data, it is represented visually.  Graphs, curves and other diagrams are provided to illustrate this data. But these visual representations are not always understandable to everyone. “In a democracy, we need to understand this data in order to make informed decisions,” says Samuel Huron.

Citizen empowerment is based on the power to make decisions, taking into account complex issues such as climate change or the national budget breakdown. Likewise, to tackle the coronavirus, an understanding of data is required in order to assess risk and implement health measures of varying strictness. It was this societal issue that pushed Samuel Huron to look for data visualization methods that can be used by everyone, with a view to data democratization. This approach includes open data policies and transparency, of course, as well as useful and user-friendly tools that allow everyone to understand and handle this data.

Thinking about the tools

“A distinctive characteristic of human beings is producing representations to process our information,”  says the researcher. “The alphabet is one such example: it’s a graphical encoding to store information that other people can find by reading it.”  Humankind has the capacity to analyze images to quickly identify and examine a set of diagrams, without even thinking at times. These cognitive capacities enable operations in visual space that are otherwise very difficult and allow them to be carried out more quickly than with another kind of encoding, such as numbers.

This is why we tend to illustrate data graphically when we need to explain it. But this is time-consuming and graphs must be updated with each new data set. On the virtual side, there is no shortage of spreadsheet software that allows for dynamic, efficient updates. But it has the drawback of limiting creativity. “Software programs like Excel are great, but all of the possible actions are predefined. Expressiveness of thought is limited by the models offered by the tool,” says Samuel Huron.

Far from considering tokens to be the ideal solution, the researcher says that they are above all a tool for teaching and raising awareness. “Tokens are a very simple format that make it possible to get started quickly with data visualization, but they remain quite limited in terms of representation,” he says. He is working with his colleagues to develop more complicated workshops with larger data sets that are more difficult to interpret. In general, these workshops also aim to explore ways to promote the use of data physicalization, with more varied tools and data sets, and therefore more diverse representations. Other studies aim to consider the value of the data itself, rather than the value that results from handling it.

By proposing these data physicalization kits, the researchers can study participants’ thinking. They can therefore better understand how individuals understand, format, handle and interpret data. These observations in turn help the researchers improve their tools and develop new ones that are even more intuitive and user-friendly for different groups of individuals. To go further, the researchers are working on a scientific journal devoted to the topic of data physicalization planned for late 2021. It should  assess the state of the art on this topic, and push research in this area even further. Ultimately, this need to understand digital data may give rise to physical tools to help us grasp complex problems – literally. 

By Tiphaine Claveau.

5G antennas

What is beamforming?

Beamforming is a telecommunications technology that enables the targeted delivery of larger and faster signals. The development of 5G relies in particular on beamforming. Florian Kaltenberger, researcher at EURECOM and 5G specialist, explains how this technology works.

What is beamforming?

Florian Kaltenberger: Beamforming consists of transmitting synchronized waves in the form of beams from an antenna. This makes it possible to target a precise area, unlike conventional transmission systems that emit waves in all directions. This is not a new technology: it has been used for a long time in satellite communication and for radar. But it is entering mobile telecommunications for the first time with 5G.
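For readers who want to picture this, here is a minimal sketch of the idea, assuming a uniform linear array and an illustrative 3.5 GHz carrier (neither value comes from the interview): phase-shifting the signal at each antenna element so that the waves add up coherently in one chosen direction and largely cancel elsewhere.

```python
import numpy as np

# Minimal beamforming sketch for a uniform linear array (ULA).
# Carrier frequency, element count and spacing are illustrative assumptions.
c = 3e8                      # speed of light (m/s)
f = 3.5e9                    # assumed carrier frequency (Hz)
lam = c / f                  # wavelength
n = np.arange(16)            # 16 antenna elements
d = lam / 2                  # half-wavelength spacing

def steering_vector(theta_deg):
    """Relative phases seen across the array for a wave from direction theta."""
    return np.exp(-2j * np.pi * d * n * np.sin(np.radians(theta_deg)) / lam)

def beam_gain(theta_steer, theta_user):
    """Power gain toward theta_user when the beam is steered to theta_steer."""
    w = steering_vector(theta_steer) / np.sqrt(n.size)   # synchronized phase weights
    return np.abs(np.vdot(w, steering_vector(theta_user))) ** 2

print(beam_gain(30, 30))   # ~16: the 16 elements add up coherently toward the target
print(beam_gain(30, -10))  # much smaller: little energy leaks toward other directions
```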

Why is beamforming used in 5G?

FK: The principle of 5G is to direct the wave beams directly at the users. This limits interference between the waves, makes the signal more reliable and saves energy. These three conditions are some of the demands that 5G must meet. Because the waves of 5G signals have high frequencies, they can carry more information, and do so faster. This system avoids congestion in hotspots, i.e. there will be no throughput problems in places where there are many connections simultaneously. Also, the network can be more locally diverse: there can be completely different services used on the same network at the same time.

How does network coverage work with this system?

FK: Numerous antennas are needed. There are several reasons for this. The size of the antennas is proportional to the length of the waves they generate. As the wavelength of 5G signals is smaller, so is the size of the antennas: they are only a few centimeters long. But the energy that the antennas are able to emit is also proportional to their size: a 5G antenna alone could only generate a signal with a range of about ten meters. In order to increase the range, multiple 5G antennas are assembled on base stations and positioned to target a user whenever they are in range. This allows a range of about 100 meters in all directions. So you still need many base stations to cover the network of a city. With beamforming it is possible to target multiple users in the same area at the same time, as each beam can be directed at a single user.
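As a rough check of the “a few centimeters” figure, one can compute the wavelength for a few bands; the frequencies below are assumed for illustration and are not quoted in the interview.

```python
# Quick check: antenna element size scales with wavelength (roughly half a
# wavelength for a dipole-like element). Example frequencies are assumptions.
c = 3e8  # speed of light (m/s)
for f_ghz in (2.6, 3.5, 26.0):          # typical 4G/5G mid-band and mmWave bands
    lam_cm = c / (f_ghz * 1e9) * 100    # wavelength in centimeters
    print(f"{f_ghz:5.1f} GHz -> wavelength ~{lam_cm:.1f} cm, element ~{lam_cm/2:.1f} cm")
```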

How are the beams targeted to users and how are they then tracked?

FK: The user’s position signal is received by different parts of the 5G antennas. On each of these parts, there is a shift in the time of arrival of the signal, depending on the angle at which it hits the antenna. With mathematical models that incorporate these different time shifts, it is possible to locate the user and target the beam in their direction.
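A hedged sketch of this principle, under simplified plane-wave and linear-array assumptions (not the exact 5G positioning procedure): fit the arrival-time shifts measured across the elements and invert the geometry to recover the direction.

```python
import numpy as np

# Sketch of the idea described above: the arrival-time shift across antenna
# elements encodes the user's direction. Array geometry and noise level are
# assumptions made for illustration.
c = 3e8
f = 3.5e9                    # assumed carrier frequency
d = (c / f) / 2              # assumed half-wavelength element spacing
n = np.arange(16)            # element indices

true_angle = 25.0            # unknown direction to recover (degrees)
delays = n * d * np.sin(np.radians(true_angle)) / c     # arrival-time shifts
delays += np.random.normal(0, 1e-12, n.size)            # small measurement noise

# Least-squares fit of delay vs. element index gives the per-element shift,
# and the angle follows from delay_n = n * d * sin(theta) / c
slope = np.polyfit(n, delays, 1)[0]
estimated = np.degrees(np.arcsin(np.clip(slope * c / d, -1, 1)))
print(f"estimated direction: {estimated:.1f} degrees")   # close to 25
```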

Then you have to track the users, and that’s more complicated. Base stations use sets of fixed beams that point at preset angles. There is a mechanism that allows the user’s device to measure the power of the received beam relative to adjacent beams. The device sends this information back to the base station, which is then able to choose the best beam.
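In code, that selection loop might look like the toy example below; the beam grid and the power model are invented for illustration.

```python
import numpy as np

# Toy version of the beam-selection loop described above: the base station
# keeps a grid of fixed beams, the device reports the power it measures on
# each, and the strongest beam is selected. All values are illustrative.
beam_angles = np.arange(-60, 61, 15)          # preset beam directions (degrees)

def measured_power(beam_angle, user_angle):
    # crude power model: strongest when the beam points at the user (assumption)
    return np.cos(np.radians(beam_angle - user_angle)) ** 8

user_angle = 22.0
reports = {a: measured_power(a, user_angle) for a in beam_angles}
best = max(reports, key=reports.get)
print(f"device reports favor the beam at {best} degrees")   # the 15-degree beam here
```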

What are the main difficulties when it comes to implementing beamforming?

FK: Today the 5G network still cannot work without the 4G network because of the short range of the beams, which makes it effective and useful only in urban environments, and especially in hotspots. In more remote areas, 4G takes over. Beamforming cannot be used for a mobile user located several hundred meters from the antenna – let alone a few kilometers away in the countryside. Another difficulty encountered is the movement of users as they move from one base station to another. Algorithms are being developed to anticipate these movements, which is also what we are working on at EURECOM.

Should we expect the next generation of mobile communications, 6G, to go even further than beamforming?

FK: With every generation, there is a breakthrough. For example, 3G was initially designed as a voice communication network, then all the aspects related to internet data were implemented. For 4G it was the other way around: the network was designed to carry internet data, then voice communication was implemented. The operating principle of 6G has not yet been clearly defined. There’s roughly one new generation of cell phones every ten years, so it shouldn’t be long before the foundation for 6G is laid, and we’ll know more about the future of beamforming.

Interview by Antonin Counillon

Data collection protection, GDPR impact

GDPR: Impact on data collection at the international level

The European data protection regulation (GDPR), introduced in 2018, set limits on the use of trackers that collect personal data. This data is used to target advertising to users. Vincent Lefrère, associate professor in digital economy at Institut Mines-Télécom Business School, worked with Alessandro Acquisti from Carnegie Mellon University to study the impact of the GDPR on tracking users in Europe and internationally.

What was your strategy for analyzing the impact of GDPR on tracking users in different countries?

Vincent Lefrère: We conducted our research on online media such as Le Monde in France or the New York Times in the United States. We looked at whether the introduction of the GDPR has had an impact on the extent to which users are tracked and the amount of personal data collected.

How were you able to carry out these analyses at the international level?

VL: The work was carried out in partnership with researchers at Carnegie Mellon University in the United States, in particular Alessandro Acquisti, who is one of the world’s specialists in personal digital data. We worked together to devise the experimental design and create a wider partnership with researchers at other American universities, in particular the Minnesota Carlson School of Management and Cornell University in New York.

How does the GDPR limit the collection of personal data?

VL: One of the fundamental principles of the GDPR is consent. This makes it possible to require websites that collect data to obtain users’ consent  before tracking them. In our study, we never gave our consent or explicitly refused the collection of data. That way, we could observe how a website behaves in relation to a neutral user. Moreover, one of the important features of GDPR is that it applies to all parties who wish to process data pertaining to European citizens. As such, the New York Times must comply with the GDPR when a website visitor is European. 

How did you compare the impact of the GDPR on different media?

VL: We accessed different media sites using IP addresses from different countries, in particular French and American IP addresses.

We observed that American websites limit tracking more than European websites, and therefore better comply with the GDPR, but only when we were using a European IP address.  It would therefore appear that the GDPR has been more dissuasive on American websites for these users. However, the American websites increased the tracking of American users, for whom the GDPR does not apply.  One hypothesis is that this increase is used to offset the loss of data from European users.
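The kind of comparison described here can be illustrated with a toy measurement script that counts the distinct third-party domains contacted during a page visit; the sites, requests and domains below are made up, and real crawls are of course far more involved.

```python
from urllib.parse import urlparse

# Illustrative sketch of the comparison described above: count distinct
# third-party domains contacted when the same page is visited from different
# IP origins. The pages, requests and domains are invented for this example.
def third_party_domains(page_domain, requested_urls):
    """Domains contacted that do not belong to the visited site itself."""
    return {urlparse(u).netloc for u in requested_urls
            if page_domain not in urlparse(u).netloc}

visits = {
    ("example-news.com", "EU IP"): [
        "https://example-news.com/article", "https://cdn.example-news.com/style.css",
        "https://consent-manager.example/banner.js",
    ],
    ("example-news.com", "US IP"): [
        "https://example-news.com/article", "https://adtech-one.example/pixel",
        "https://adtech-two.example/sync", "https://analytics.example/collect",
    ],
}
for (site, origin), urls in visits.items():
    print(origin, "->", len(third_party_domains(site, urls)), "third-party domains")
```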

How have online media adapted to the GDPR?

VL: We were able to observe a number of effects. First of all, online media websites have not really played along. Since mechanisms of consent are somewhat vague,  the formats developed in recent years have often encouraged users to accept personal data collection rather than reject it. There are reasons for this: data collection has become crucial to the business model of these websites, but little has been done to offset the loss of data resulting from the introduction of the GDPR, so it is understandable that they have stretched the limits of the law in order to continue offering high quality content for free. With the recent update by the French National Commission on Information Technology and Liberties (CNIL) to fight against this, consent mechanisms will become clearer and more standardized.  

In addition, the GDPR has limited tracking of users by third parties, and replaced it with tracking by first parties. Before, when a user logged into a news site, other companies such as Google, Amazon or Facebook could collect their data directly on the website. Now, the website itself tracks data, which may then be shared with third parties.

Following the introduction of the GDPR, the market share of Google’s online advertising service increased in Europe, since Google is one of the few companies that could bear the cost of the regulation, meaning it could afford to ensure compliance. This is an unintended, perverse consequence: smaller competitors have disappeared and there has been a concentration of ownership of data by Google.

Has the GDPR had an effect on the content produced by the media?

VL: We measured the quantity and quality of content produced by the media. Quantity simply reflects the number of posts. The quality is assessed by the user engagement rate, meaning the number of comments or likes, as well as the number of pages viewed each time a user visits the website.

In the theoretical framework for our research, online media websites use targeted advertising to generate revenue. Since the GDPR makes access to data more difficult, it could decrease websites’ financing capacity and therefore lead to a reduction in content quality or quantity. By verifying these aspects, we can gain insights into the role of personal data and targeted advertising in the business model for this system.   

Our preliminary results show that after the introduction of the GDPR, the quantity of content produced by European websites was not affected, and the amount of engagement remained stable. However, European users reduced the amount of time they spent on European websites in comparison to American websites. This could be due to the fact that certain American websites may have prohibited access to European users, or that American websites covered European topics less since attracting European users had become less profitable. These are hypotheses that we are currently discussing.

We are assessing these possible explanations by analyzing data about the newspapers’ business models, in order to estimate how important personal data and targeted advertising are to these business models.  

By Antonin Counillon

Artificial intelligence

Is there intelligence in artificial intelligence?

Jean-Louis Dessalles, Télécom Paris – Institut Mines-Télécom (IMT)

Nearly a decade ago, in 2012, the scientific world was enthralled by the achievements of deep learning. Three years later, this technique enabled the AlphaGo program to beat Go champions. And this frightened some people. Elon Musk, Stephen Hawking and Bill Gates were worried about an imminent end to the human race, replaced by out-of-control artificial intelligence.

Wasn’t this a bit of an exaggeration? AI thinks so. In an article it wrote in 2020 in The Guardian, GPT-3, a gigantic neural network with 175 billion parameters, explains:

“I’m here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”

At the same time, we know that the power of computers continues to increase. Training a network like GPT-3 was literally inconceivable just five years ago. It is impossible to know what its successors may be able to do five, ten or twenty years from now. If current neural networks can replace dermatologists, why would they not eventually replace all of us? Let’s turn the question around.

Are there any human mental abilities that remain strictly out of reach for artificial intelligence?

The first thing that comes to mind are skills involving our “intuition” or “creativity.” No such luck – AI is coming for us in these areas too. This is evidenced by the fact that works created by programs are sold at high prices, reaching nearly half a million dollars at times. When it comes to music, everyone will obviously form their own opinion, but we can already recognize acceptable bluegrass or works that approach Rachmaninoff in imitations by the MuseNet program created, like GPT-3, by OpenAI.

Should we soon submit with resignation to the inevitable supremacy of artificial intelligence? Before calling for a revolt, let’s take a look at what we’re up against. Artificial intelligence relies on many techniques,  but its recent success is due to one in particular: neural networks, especially deep learning ones. Yet a neural network is nothing more than a matching machine. The deep neural network that was much discussed in 2012 matched images –  a horse, a boat, mushrooms – with corresponding words. Hardly a reason to hail it as a genius.

Except that this matching mechanism has the rather miraculous property  of being “continuous.” If you present the network with a horse it has never seen, it recognizes it as a horse. If you add noise to an image, it does not disturb it. Why? Because the continuity of the process ensures that if the input to the network changes slightly, its output will change slightly as well. If you force the network, which always hesitates, to opt for its best response, it will probably not vary: a horse remains a horse, even if it is different from the examples learned, even if the image is noisy.

Matching is not enough

But why is such matching behavior referred to as “intelligent?” The answer seems clear: it makes it possible to diagnose melanoma, grant bank loans, keep a vehicle on the road, detect disorders in physiological signals and so forth. Through their matching ability, these networks acquire forms of expertise that require years of study for humans. And when one of these skills, for example, writing a press article, seems to resist for a while, the machine must simply be fed more examples, as was the case with GPT-3, so that it can start to produce convincing results.

Is this really what it means to be intelligent? No, this type of performance represents only a small aspect of intelligence, at best. What the neural networks do resembles learning by heart. It isn’t, of course, since networks continuously fill in the gaps between the examples with which they have been presented. Let’s call it almost-by-heart. Human experts, whether doctors, pilots or Go players, often act the same way when they decide instinctively, based on the large number of examples learned during their training. But humans have many other powers too.

Learning to calculate or reason over time  

Neural networks cannot learn to calculate. They are limited to matching operations like 32+73 with their result. They can only reproduce the strategy of the struggling student who tries to guess the result and sometimes happens upon the right answer. If calculating is too difficult, what about a basic IQ test like: continue the sequence 1223334444. Matching based on continuity is of no help to see that the structure, n repeated n times, continues with 5 fives. Still too difficult? Matching programs cannot even guess that an animal that is dead on Tuesday will not be alive on Wednesday. Why? What do they lack?
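For comparison, the structure a human spots here can be written in a couple of lines once it has been seen; the point is that a network matching on continuity has no notion of “n repeated n times” to begin with. A minimal illustration:

```python
# The structure the passage describes: each integer n appears n times.
# Once it is seen, the continuation of 1223334444 is obvious: five 5s, six 6s, ...
def self_counting(up_to):
    digits = []
    for n in range(1, up_to + 1):
        digits.extend([n] * n)   # n repeated n times
    return "".join(map(str, digits))

print(self_counting(4))   # 1223334444  (the sequence given in the text)
print(self_counting(5))   # ...55555    (the continuation a human spots)
```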

Modeling in cognitive science has shown the existence of several mechanisms, other than matching based on continuity, which are all components of human intelligence. Since their expertise is entirely precalculated, neural networks cannot reason over time to determine that a dead animal remains dead or to understand the meaning of the sentence “he still isn’t dead” and the oddity of this other sentence: “he is not still dead.” And digesting large amounts of data in advance is not enough to allow them to recognize new structures that are very simple for us, such as groups of identical numbers in the sequence 1223334444. Their almost-by-heart strategy is also blind to unprecedented anomalies.

Detecting anomalies is an interesting example, since we often judge others’ intelligence based precisely on this. A neural network will not “see” that a face is missing a nose. Based on continuity, it will continue to recognize the person, or may confuse him or her with someone else. But it has no way of realizing that the absence of a nose in the middle of a face represents an anomaly.

There are many other cognitive mechanisms that are inaccessible to neural networks. Research is being conducted on the automation of these mechanisms. It implements operations carried out at the time of processing,  while neural networks simply make associations learned in advance.

With a decade of perspective on deep learning, the informed public is starting to see neural networks  more as “super-automation” and less as intelligent. For example, the media recently reported on the astonishing performances of the DALL-E program, which produces creative images based on a verbal description – for example, images that DALL-E imagined based on the terms “avocado-shaped chair” on the OpenAI site. We now hear much more tempered assessments than the alarmist reactions following the release of AlphaGo: “It is quite impressive, but we must not forget that it is an artificial neural network, trained to perform a task; there is no creativity or form of intelligence.” (Fabienne Chauvière, France Inter, 31 January 2021)

No form of intelligence? Let’s not be too demanding, but at the same time, let’s remain clear-sighted about the huge gap that separates neural networks from what would be a true artificial intelligence.

Jean‑Louis Dessalles wrote “Des intelligences très artificielles” (Very Artificial Intelligences), published by Odile Jacob (2019).

Jean-Louis Dessalles, Associate professor at Télécom Paris – Institut Mines-Télécom (IMT)

This article has been republished from The Conversation under a Creative Commons license. Read the original article in French.

IMT Digital Fund

AlertSmartCity, Cook-e, Dastra, DMS, GoodFloow, JobRepublik, PlaceMeet and Spectronite supported by the “honor loan” scheme

The members of the IMT Digital Fund, IGEU, IMT and Fondation Mines-Télécom held a meeting on 23 February. On this occasion, 8 start-ups from the incubators of IMT Mines Albi, IMT Atlantique, IMT Lille Douai, Télécom Paris, Mines Saint-Étienne, Télécom SudParis and Institut Mines-Télécom Business School were awarded 18 honor loans (interest-free) for a total of €340,000.

AlertSmartCity (the incubator at IMT Mines Albi) wishes to create an interoperable alert management platform, to be used in the event of a major risk (natural, industrial, health or terrorist disaster). This platform will allow municipalities to send qualified and geolocalized alerts to their public institutions (schools, cultural and sports facilities, hospitals, administrations and other places receiving the public) using dedicated communication terminals that are resilient to network outages and interactive (bi-directional communication). These reception terminals will allow disaster victims to report back to the crisis unit.
Two honor loans of €20,000 each.

Cook-e (Télécom Paris Novation Center) proposes a multi-function connected robot for restaurant kitchens. The restaurant owner enters a recipe into the robot software and then loads the ingredient tanks. These tanks can be stored cool, dry or warm. The robot then prepares the recipe: it measures out, cuts, cooks, mixes and cleans itself automatically. It can prepare all dishes with mixed ingredients in small pieces: pasta with sauce, salads, bowls, rice, meat and fish in small pieces, vegetable side dishes, etc.
One honor loan of €20,000 and two honor loans of €10,000. Find out more

Dastra (IMT Starter) is the simple, guided data governance solution that enables data protection professionals to meet the requirements of the GDPR, save time, and develop a company data culture. One small step for DPOs, one giant leap for data protection!
Two honor loans of €8,000 and two honor loans of €12,000. Find out more

DMS (the incubator at Mines Saint-Etienne) is an AI platform for managing and anticipating container flows, allowing for the fluidity of port and land container traffic. It connects all the players in the container port logistics chain (shipowners/terminals) with those located inland (carriers/depots).
Three honor loans of €20,000 each. Find out more

GoodFloow (the IMT Lille Douai incubator) automates the tracking and management of reusable packaging. Their service consists of using IoT in individual packaging along with a web/mobile app. This solution eliminates asset management and change management issues related to packaging, makes flows more reliable, and enables a sustainable transition in logistics.
One honor loan of €40,000. Find out more

JobRepublik (IMT Starter) is the meeting point between companies in need of temporary workers and anyone looking for additional income. The start-up offers the first open marketplace dedicated to “blue collar” freelancers that allows a direct relationship between 700,000 small businesses in the logistics, retail and restaurant sectors and 3 million independent workers.
Two honor loans of €20,000 each. Find out more

Placemeet (incubator at IMT Atlantique) is a simple and intuitive platform optimized for engagement and interaction. Attendees can move between rooms as if it were a physical event and enjoy an exceptional experience from anywhere in the world.
Two honor loans of €20,000 each. Find out more

Spectronite (Télécom Paris Novation Center) has developed a breakthrough technology, with the implementation of an architecture based on Software Defined Radio, which can offer speeds up to 10 Gbps over very long distances, i.e. up to 20x the speed offered by traditional products. Spectronite offers a disruptive innovation for mobile operators, enabling them to deploy 4G and soon 5G, even in territories where fiber is not available.
One honor loan of €10,000 and one honor loan of €30,000. Find out more

The honor loan program

Created in late 2011 under the aegis of the Grandes Écoles and Universities Initiative (IGEU) association, the IMT Digital Fund for honor loans is co-financed by the Fondation Mines-Télécom, BPI France and Revital’Emploi.

Start-ups, honor loans

Pam Tim, Examin, Cylensee and Possible supported through honor loan program

The members of the IMT Digital Fund, IGEU, IMT and Fondation Mines-Télécom met on 6 April. On this occasion, four start-ups developed through incubators at IMT Atlantique, Télécom Paris, Télécom SudParis and Institut Mines-Télécom Business School obtained 8 honor loans for a total of €160,000.

Cylensee (IMT Atlantique incubator) develops and produces connected electrochromic contact lenses for  the general public. These contact lenses have a feature that allows users to change the color of their iris almost instantly at their convenience. Activated by a remote control or via a smartphone, these lenses allow users to change their eye color with just one click, whether to stand out from the crowd, try out a new look, make an impression or just for fun.
• Two €20,000 honor loans • 

The Examin platform (Télécom Paris Novation Center) is a regulatory and technical compliance management solution for companies with a focus on cybersecurity and data protection. Using a collaborative and scalable workspace, customers benefit from continuous reporting on their compliance or that of their suppliers and can easily involve employees in their actions to reduce compliance risks.
• Two €20,000 honor loans • 
Learn more

Pam Tim (Télécom Paris Novation Center) specializes in the well-being of children aged 3-6 by providing them with an opportunity to intuitively learn the spatial and temporal reference points that structure the day using a watch without numbers or hands! This life assistant for children relies on a patented display of combinations of pictograms (PhD thesis) depicting key moments throughout the day. This connected watch also gives parents peace of mind as it allows them to anticipate household or peripheral risks their children may encounter at any moment through a very low-power Bluetooth© geofencing solution.
• Two €20,000 honor loans • 
Learn more

Possible (IMT Starter, the Télécom SudParis and IMT-BS incubator) is a project that encourages circular, environmentally-friendly, zero-waste, ethical fashion. Possible is a BtoC platform for renting clothes and accessories based on a monthly subscription. The subscription allows users to rent a selection of several pieces by brands that promote ethical and responsible practices, for a set cost. This project responds to the question: how can individuals enjoy an unlimited wardrobe on a limited budget and in an environmentally-friendly way?
• Two €20,000 honor loans • 
Learn more

digital simulation

Digital simulation: applications, from medicine to energy

At Mines Saint-Étienne, Yann Gavet uses image simulation to study the characteristics of an object. This method is more economical in terms of time and cost, and eliminates the need for experimental measurements. This field, at the intersection of mathematics, computer science and algorithms, is used for a variety of applications ranging from the medical sector to the study of materials.

What do a human cornea and a fuel cell electrode have in common? Yann Gavet, a researcher in applied mathematics at Mines Saint-Étienne¹, is able to model these two objects as 2D or 3D images in order to study their characteristics. To do this, he uses a method based on random fields. “This approach consists in generating a synthetic image representing a surface or a random volume, i.e. whose properties will vary from one point to another across the plane or space,” explains the researcher. In the case of a cornea, for example, this means visualizing an assembly of cells whose density differs according to whether we look at the center or the edge. The researcher’s objective? To create simulations with properties as close as possible to reality.

Synthetic models and detecting corneal disorders

The density of the cells that make up our cornea – the transparent part at the front of the eye – and its endothelium provides information about its health. To perform these analyses, automatic cell detection and counting algorithms have been developed using deep neural networks. Training them thus requires access to large databases of corneas. The problem is that these do not exist in sufficient quantity. “However, we have shown that it is possible to perform the training process using synthetic images, i.e. simulated by models,” says Yann Gavet.

How does it work? Using deep learning, the researcher creates graphical simulations based on key criteria: size, shape, cell density or the number of neighboring cells. He is able to simulate cell arrangements, as well as complete and realistic images of corneas. However, he wants to combine the two. Indeed, this step is essential for the creation of image databases that will allow us to train the algorithms. He focuses in particular on the realism of the simulation results in terms of cell geometry, gray levels and the “natural” variability of the observations.
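To give an idea of what such synthetic generation can look like, here is a much-simplified sketch: cell centers sampled more densely near the middle of the field, then a Voronoi-like mosaic built by assigning each pixel to its nearest center. The sizes, counts and density law are invented for illustration and are not the researcher’s actual model.

```python
import numpy as np

# Much-simplified sketch of a synthetic "cell mosaic" whose density varies
# across the field (denser at the center than at the edge). All parameters
# below are illustrative assumptions.
rng = np.random.default_rng(0)
size, n_cells = 128, 200

centers = []
while len(centers) < n_cells:
    x, y = rng.uniform(0, size, 2)
    r = np.hypot(x - size / 2, y - size / 2) / (size / 2)   # 0 at center, ~1 at edge
    if rng.uniform() < 1.0 - 0.6 * min(r, 1.0):             # accept more often near the center
        centers.append((x, y))
centers = np.array(centers)

# Label every pixel with its nearest center: a Voronoi-like mosaic of cells.
yy, xx = np.mgrid[0:size, 0:size]
dist = (xx[..., None] - centers[:, 0]) ** 2 + (yy[..., None] - centers[:, 1]) ** 2
labels = dist.argmin(axis=-1)            # one synthetic cell per label
print(labels.shape, labels.max() + 1)    # 128x128 mosaic, ~200 cells
```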

Although he demonstrated that training using synthetic corneal data does not require perfectly realistic representations to perform well, improving accuracy will be useful for other applications. “As a matter of fact, we transpose this method to the simulation of material arrangements that compose fuel cell electrodes, which requires more precision,” explains the researcher.

Simulating the impact of microstructures on the performance of a fuel cell

The microstructure of fuel cell electrodes impacts the performance and durability of solid oxide cells. In order to improve these parameters, researchers want to identify the ideal arrangement of the materials that make up the electrodes, i.e. how they should be distributed and organized. To do this, they play with the “basic” geometry of an electrode: its porosity and its material particle size distribution. This targets the morphological parameters that manufacturers can adjust when designing the electrodes.

To identify the best performing structures, one method would be to build and test a multitude of configurations. This is an expensive and time-consuming practice. The other approach is based on the simulation and optimization of a large number of configurations. Subsequently, a second group of models simulating the physics of a fuel cell can in turn identify which structures have the greatest impact on its performance.
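One common way to produce such candidate configurations, shown here purely as an illustrative sketch (grid size, correlation length and porosity are assumptions, not the laboratory’s parameters), is to threshold a smoothed random field so that the pore fraction matches a target porosity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: generate a candidate microstructure by thresholding a
# smoothed Gaussian random field at a target porosity. Grid size, correlation
# length (sigma) and porosity are illustrative assumptions.
rng = np.random.default_rng(1)
field = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=3)   # spatially correlated field

target_porosity = 0.35
threshold = np.quantile(field, target_porosity)    # cut that yields ~35% pores
pores = field < threshold                          # True = pore, False = solid

print(f"porosity obtained: {pores.mean():.3f}")    # ~0.35 by construction

# Crude proxy for exchange surface: count pore/solid interfaces along one axis.
interfaces = np.count_nonzero(pores[:-1] != pores[1:])
print("pore/solid interfaces along z:", interfaces)
```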

The advantage of the simulations is that they target specific areas within the electrodes to better understand how they operate and their overall impact on the cell. For example: exchange zones such as “triple phase” points where ionic, electronic and gaseous phases meet, or exchanges between material surfaces. “Our model allows us to evaluate the best configuration, but also to identify the associated manufacturing process that offers the best energy efficiency for the cell,” says Yann Gavet.

In the medium term, the researcher wishes to continue his work on a model whose dimensions are similar to the observations made in X-ray tomography. An algorithmic challenge that will require more computing time, but will also lead to results that are closer to the reality of the field.

¹ Yann Gavet is a researcher at the Georges Friedel laboratory, UMR CNRS/Mines Saint-Étienne

Anaïs Culot

SONATA

SONATA: an approach to make data sound better

Telecommunications must transport data at an ever-faster pace to meet the needs of current technologies. But this data can be voluminous and difficult to transport at times. Communication channels are congested and transmission limits are reached quickly. Marios Kountouris, a telecommunications researcher at EURECOM, has recently received ERC funding to launch his SONATA project. It aims to shift the paradigm for processing information to speed up its transmission and make future networks more efficient.

“We are close to the fundamental limit for transmitting data from one point to another,” explains Marios Kountouris, a telecommunications researcher at EURECOM. Most of the current research in this discipline focuses on how to organize complex networks and on improving the algorithms that optimize these networks. Few projects, however, focus on improving the transfer of data between transmitters and receivers. This is precisely the focus of Marios Kountouris’ SONATA project, funded by a European ERC consolidator grant.

“Telecommunications are generally based on Shannon’s information theory, which was established in the 1950s,” says the researcher. In this theory, a transmitter simply sends information through a transmission channel, which carries it to a receiver that then reconstructs it. The main obstacle to get around is the noise accompanying the signal when it passes through the transmission channel. This constraint can be overcome by algorithm-based signal processing and by increasing throughput. “This usually takes place in the same way, regardless of the message being transmitted. Back in the early days, and until recently, this was the right approach,” says the researcher.
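The “fundamental limit” referred to here is usually identified with Shannon’s channel capacity, C = B log2(1 + SNR). A one-line reminder, with illustrative bandwidth and signal-to-noise values (assumptions, not figures from the interview):

```python
import math

# Shannon channel capacity C = B * log2(1 + SNR); values below are illustrative.
def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

print(f"{capacity_bps(20e6, 10 ** (20 / 10)) / 1e6:.0f} Mbit/s")   # ~133 Mbit/s for 20 MHz at 20 dB SNR
```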

Read more on I’MTech: Claude Shannon, a legacy transcending digital technology

Transmission speed for real-time communication

Today, there is an increasing amount of communication between machines that reason in milliseconds. “Certain messages must be transmitted quickly or they’re useless,” says Marios Kountouris. For example, in the development of autonomous cars, if the message collected relates to the detection of a pedestrian on the road so as to make the vehicle brake, it is only useful for a very short period of time. “This is what we call the age, or freshness of information, which is a very important parameter in some cases,” explains Marios Kountouris.

Yet, most transmission and reconstruction is slowed down by surplus information accompanying the message. In the previous example, if the system for detecting pedestrians is a camera that captures images with details about all the surrounding objects, a great deal of the information in the transmission and processing will not contribute to the system’s purpose. For the researcher, “the sampling, transmission and reconstruction of the message must no longer be carried out independently of one another. If excess, redundant or useless data accompanies this process, there can be communication bottlenecks and security problems.”

The semantics of messages

For real-time communication, the semantics of the message – its meaning and usefulness – take on particular importance. Semantics make it possible to take into account the attributes of the message and adjust the format of its transmission depending on its purpose. For example, if a temperature sensor is meant to activate the heating system automatically when the room temperature is below 18°C, the attribute of the transmitted message is simply a binary breakdown of temperature: above or below 18°C.
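A toy version of that thermostat example makes the saving concrete: instead of streaming raw temperatures, the transmitter sends the one attribute the receiver acts on, and only when it changes. The readings below are invented for illustration.

```python
# Toy version of the thermostat example above: the receiver only acts on one
# attribute of the message (above or below 18 °C), so the transmitter can send
# a single bit, and only when that bit changes. Readings are made up.
THRESHOLD_C = 18.0

def semantic_encode(readings):
    """Yield (sample_index, below_threshold) only when the attribute changes."""
    last = None
    for i, temp in enumerate(readings):
        below = temp < THRESHOLD_C
        if below != last:
            yield i, below
            last = below

readings = [19.2, 18.6, 18.1, 17.9, 17.5, 17.8, 18.2, 18.4]
for i, below in semantic_encode(readings):
    print(f"sample {i}: heating {'ON' if below else 'OFF'}")
# 3 transmissions instead of 8 raw temperature values
```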

Through the SONATA project, Marios Kountouris seeks to develop a new communication paradigm that takes the semantic value of information into account. This would make it possible to synchronize different types of information collected at the same time through various samples and make more optimal decisions. It would also significantly reduce the volume of transported data as well as the associated energy and resources required.

“The success of this project depends on establishing semantic metrics that are concrete, informative and traceable,” explains the researcher. Establishing the semantics of a message means preprocessing sampling by the transmitter depending on how it is used by the receiver. The aim is therefore to identify the most important, meaningful or useful information in order to determine the qualifying attributes of the message. “Various semantic attributes can be taken into account to obtain a conformal representation of the information, but they must be determined in advance, and we have to be careful not to implement too many attributes at once,” he says.

The goal, then, is to build communication networks with key stages for processing the semantics associated with information. First, semantic filters must be used to avoid unnecessary redundancy when collecting information. Then, semantic preprocessing must be carried out in order to associate the data with its purposes. Signal reconstruction by the receiver would also be adapted to its purposes. All this would be semantically-controlled, making it possible to orchestrate the information collected in an agile way and reuse it efficiently, which is especially important when networks become more complex.

This is a new approach from a structural perspective and would help create links between communication theory, sampling and optimal decision-making. ERC consolidator grants fund high-risk, high-reward projects that aim to revolutionize a field, which is why SONATA has received this funding. “The sonata was the most sophisticated form of classical music and was pivotal to its development. I hope that SONATA will be a major step forward in telecommunications optimization,” concludes Marios Kountouris.

By Antonin Counillon

Reconnecting data to the sensory world of human beings: a challenge for industry 4.0 already taken up by SMEs

Gérard Dubey, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, Conservatoire national des arts et métiers (CNAM)

Given the magnitude of uncertainty and risk of disruption threatening the economic and social order, the digitization of productive activities is often presented as a panacea.

Whether it’s a question of industrial production, creating new jobs or reclaiming lost productivity, the narrative supporting industry 4.0 focuses only on the seemingly infinite potential of digital solutions.

Companies that are considered to be active in the digital sector are upheld as trailblazers that will drive recovery. The Covid crisis has only accentuated this trend, which already appeared in the industry of the future programs.

Automated data captures downstream in the production process (with cameras, sensors, information extraction at each workstation) and their algorithmic processing upstream (big data, data science) hold the promise of “agile” management (precise, flexible, personalized) in real production time – something every industrial process strives for.

Nevertheless, this digital transformation seems to have forgotten two key facts: a company is first and foremost a group of human beings that cannot be reduced to numerical targets or abstract productivity criteria. And more importantly for industry, the relationship with the material is still a crucial dimension, which unites work teams and gives them meaning.

As such, there is something of a disconnect – which is only growing – between the stated ambitions of major industrial players and the realities on the ground.

The relationship with the material at industrial SMEs

From this perspective, although their role in (incremental) innovation is all too often overlooked and poorly understood, industrial SMEs have a lot to teach us. This is mainly due to the kind of specific relationships they continue to maintain with the material, if this is understood as a reality concerned as much with human aspects (motions, experiential knowledge, sense knowledge) as physical aspects (measurable). As they are rooted in their local communities and have withstood the test of time, they are accustomed to developing, arranging and organizing heterogeneous expertise and modes of intelligence about reality.

The surveys conducted in many industrial SMEs by a multidisciplinary research team show how important this relationship is to their directors. This can be seen in a number of aspects and affirms that their decisions are rooted in the reality of the situation.

When the CEO of Maison Caron digitized its site and moved to Saclay in 2019, she did not do away with the “old” coffee roaster from the 1950s. Coffee roasting may be rooted in reality and the senses, but the magic of aromas happens because the nose, eyes and even ears know how to control it – traditional know-how passed down through her family that she now shares with some employees of the company.

At Guilbert Express, another family business that makes high-end welding equipment, the director has observed a progressive loss of know-how in France, following the strategy to offshore export-oriented production in recent years. By going digital, he hopes to unite scattered work teams based on a shared, intercultural experience.

At Avignon Céramic, a company in Berry that makes ceramic cores for the aeronautical industry, quality comes down to daily interactions with the material. And this material – inherently unstable, unpredictable, a source of variability and uncertainty, almost “living” and virtually independent – in turn requires know-how that is itself living, precise and agile, to make a final product that is an acceptable part for the supply chain of major manufacturers.

In industry, human expertise makes it possible to better understand the material. Shutterstock

This is particularly apparent in Opé40, one of the key steps of the quality processes implemented to identify defects in the ceramic cores. This visual and tactile inspection identifies infinitesimal details and requires extensive expertise. But this step is also decisive in establishing collective knowledge and building a work team: while some employees are responsible for detecting defects, everyone works together to use these traces to discover the meaning, similar to a police investigation.

It is through this relationship with material that the work community is brought together. From this perspective, SMEs appear to possess what may be one of the best-kept industrial secrets: how human beings and material contribute to a shared transformation process.

While traceability and numerical data analysis systems play a growing role in the organization of work by companies seeking to harness this human expertise of the material – which is sometimes passed down through generations – the challenge is to integrate these transformations without giving up this culture.   

Humans – the key to adaptation

The director of AQLE, a company located in the North of France that specializes in electronic assemblies, raises questions about the risks posed by loss of meaning among employees if part of their activity is carried out by digital technology. To what extent is it possible to eliminate movements that are considered to be tedious without ultimately affecting the activity in its entirety – meaning developing, maintaining and acquiring expertise (training, learning, ways of passing it down)?

Similarly, the generational gap observed in the use of digital technology is often highlighted (in documents encouraging this transformation) to express the idea that younger employees could become mentors for older employees and act as intermediaries for the digital transformation of a company. But once again, the problem is more interesting and complicated than that.

Training only the oldest employees is not enough to ensure a successful digital transformation. Shutterstock

First of all, there is a need to develop new relationships and balance between the concrete (sensory, manual etc.) and digital world. From this perspective, the archaic/innovative dichotomy (often echoed in the cognitive/manual one) appears to be futile. It is the handing over of practices that matters, and not the “disruptive” approach, which more often than not results in approaches that are out of step with realities on the ground. The entire purpose of digital technology is precisely to urge us to question our forms of attachment to work.

One of the challenges of a successful “digital transition” will undoubtedly be to manage to combine or reconcile these different ways of acting on reality in a complementary manner – rather than through an either/or approach. It must be accepted in advance that the information obtained by one method or another is of a different nature. Digital processing of data cannot replace knowledge of the material, which relies on humans’ propensity to sense that which, like themselves, is living, fragile and impermanent.  

Humans’ familiarity with living material, far from being obsolete, may well be one of the keys to adapting to the upheavals taking place and those yet to come. The Covid crisis has shattered certainties and upended strategies. The time has come to remember that human expertise, and the collective memory on which it is founded, are not merely variables to be adjusted, but the very condition for agility, which is increasingly required in a globalized economy marked by uncertainty.  

Gérard Dubey, Sociologist, Institut Mines-Télécom Business School and Anne-Cécile Lafeuillade, PhD student in ergonomics, Conservatoire national des arts et métiers (CNAM)

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).