
Is anonymized data of any value?

Anonymization is still sometimes criticized as a practice that supposedly makes data worthless by deleting important information. The CNIL decided to prove the contrary through the Cabanon project, conducted in 2017. It received assistance from TeraLab, IMT’s big data platform, to anonymize New York taxi data and show that a transportation service could still be built from it.

 

On 10 March 2014, an image published on Twitter by the New York taxi commission sparked Chris Whong’s curiosity. It wasn’t the information on the vehicle occupancy rate during rush hour that caught the young urban planner’s interest. Rather, what caught his eye was the data source, cited at the bottom, that had allowed New York City’s Taxi and Limousine Commission (NYC TLC) to create the image. In a comment on the tweet, he joined another Twitter user, Ben Wellington, in asking whether the raw data was available. What ensued was a series of exchanges that enabled Chris Whong to retrieve the dataset through a process that was tedious, yet accessible to anyone with enough determination to cut through all the red tape. Once he had the data in his possession, he put it online. This allowed Vijay Pandurangan, a computer engineer, to demonstrate that the identity of the drivers, the customers and their addresses could all be found using the information stored in the taxi logs.

Problems in anonymizing open datasets are not new. They were not even new in 2014, when the story about the NYC TLC data emerged. Yet this type of case still persists. One of the reasons is that anonymized datasets are deemed less useful than their unfiltered counterparts: removing any possibility of tracing identities would amount to deleting information. In the case of the New York taxis, for example, this would mean limiting the information on the taxis’ location to geographical areas, rather than giving coordinates to the nearest meter. For service creators who want to build applications, and for data managers who want the data to be used as effectively as possible, anonymizing means losing value.

As a fervent advocate for the protection of personal data, the French National Commission for Information Technology and Civil Liberties (CNIL) decided to confront this misconception. The Cabanon project, led by the CNIL laboratory of digital innovation (LINC) in 2017, took on the challenge of anonymizing the NYC TLC dataset and exploring specific scenarios for creating new services. “There are several ways to anonymize data, but there is no miracle solution that fits every purpose,” warns Vincent Toubiana, who was in charge of anonymizing the datasets for the project and has since transferred from the CNIL to ARCEP. The Cabanon team therefore had to think of a dedicated solution.

 

Spatial and temporal degradation

First step: the GPS coordinates were replaced by the ZCTA (ZIP Code Tabulation Area) code, the U.S. equivalent of French postal codes. This is the method chosen by Uber to ensure personal data security. This operation degrades the spatial data; it drowns the taxi’s departure and arrival positions in areas composed of several city blocks. However, this may prove insufficient in truly ensuring the anonymity of the customers and drivers. At certain times during the night, sometimes only one taxi made a trip from one area of the city to another. Even if the GPS positions are erased, it is still possible to link such a trip to an identity.

“Therefore, in addition to the spatial degradation, we had to introduce a temporal degradation,” Vincent Toubiana explains. The time slots are adapted to avoid the single-customer problem. “In each departure and arrival area, we look at all the people who take a taxi in time slots of 5, 15, 30 and 60 minutes,” he continues. In the dataset, the time calibration is adjusted so that no time slot has fewer than ten people. If, despite these precautions, a single customer is within the largest time slot of 60 minutes, the data is simply deleted. According to Vincent Toubiana, “the goal is to find the best mathematical compromise for keeping a maximum amount of data with the smallest possible time intervals.”
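To make the approach concrete, here is a minimal sketch of this kind of spatio-temporal aggregation in Python with pandas. It is not the Cabanon code: the ten-trip threshold and the 5/15/30/60-minute slots come from the description above, while the column names and everything else are illustrative assumptions.

```python
import pandas as pd

K_ANON = 10                      # minimum trips per (zones, time slot) group
SLOT_MINUTES = [5, 15, 30, 60]   # candidate temporal granularities, finest first

def degrade(trips: pd.DataFrame) -> pd.DataFrame:
    """Spatio-temporal degradation sketch.

    `trips` is assumed to already be spatially degraded (GPS points replaced
    by ZCTA codes) with columns 'pickup_zcta', 'dropoff_zcta' and a
    'pickup_time' timestamp. Column names are hypothetical.
    """
    kept_parts = []
    remaining = trips.copy()
    for minutes in SLOT_MINUTES:
        # Round each departure time down to the start of its time slot.
        remaining["slot"] = remaining["pickup_time"].dt.floor(f"{minutes}min")
        sizes = remaining.groupby(["pickup_zcta", "dropoff_zcta", "slot"])["slot"].transform("size")
        ok = sizes >= K_ANON
        # Keep the groups that already hold at least ten trips at this granularity...
        kept = remaining[ok].copy()
        kept["slot_minutes"] = minutes
        kept_parts.append(kept)
        # ...and retry the rest with a coarser time slot.
        remaining = remaining[~ok].copy()
    # Trips still isolated at the 60-minute granularity are simply deleted.
    return pd.concat(kept_parts, ignore_index=True).drop(columns=["pickup_time"])
```

Run on an entire year of trips, this kind of grouping has to be evaluated for every slot width and every pair of zones, which is what made a dedicated computing platform necessary.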

In the 2013 data used by the CNIL (the same data made public by Chris Whong), New York taxis made over 130 million trips. The double degradation operations therefore demanded significant computing resources. Processing the data with different temporal and spatial slices required assistance from TeraLab, IMT’s big data platform. “It was essential for us to work with TeraLab in order to query the database to see the 5-minute intervals, or to test the minimum number of people we could group together,” Vincent Toubiana explains.

Read more on I’MTech: Teralab, a big data platform with a European vision

Data visualization assisting data usage

Once the dataset has been anonymized in this way, it must be proven useful. To facilitate its reading, a data visualization in the form of a choropleth map was produced: a geographical representation associating a color with each area based on the number of trips. “The visual offers a better understanding of the differences between anonymized and non-anonymized data, and facilitates the analysis and narration of this data,” says Estelle Hary, the designer at the CNIL who produced the data visualization.

To the left: a map representing the trips using non-anonymized data. To the right: choropleth map representing the journeys with a granularity that ensures anonymity.
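As an illustration, a choropleth of this kind can be drawn in a few lines with geopandas. The file names and column names below (nyc_zcta.shp, ZCTA5CE10, anonymized_trip_counts.csv) are hypothetical stand-ins, not the actual files used by the LINC team.

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: a ZCTA boundary file and the anonymized per-zone trip counts.
zones = gpd.read_file("nyc_zcta.shp")              # one polygon per ZCTA
counts = pd.read_csv("anonymized_trip_counts.csv") # columns: zcta, n_trips

# Join the counts onto the polygons and color each area by its trip volume.
choro = zones.merge(counts, left_on="ZCTA5CE10", right_on="zcta", how="left")
ax = choro.plot(column="n_trips", cmap="viridis", legend=True,
                missing_kwds={"color": "lightgrey"})
ax.set_axis_off()
ax.set_title("Taxi departures per ZCTA (anonymized data)")
plt.show()
```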

 

Based on this map, the team began reflecting on the kinds of services that could be created using anonymized data. The map helped identify points in Brooklyn where people order taxis to complete their journey home. “We started thinking about the idea of a private transportation network that would complement public transport in New York,” says Estelle Hary. Cheaper than taxis, this private network could cover areas neglected by buses. “This is a typical example of a viable service that anonymized data can be used to create,” she continues. In this case, the information that was lost to protect personal data had no impact: the processed dataset is just as effective. And this is only one example of a potential use. “By linking anonymized datasets with other public data, the possibilities are multiplied,” the designer explains. In other words, the value of an open dataset depends on our capacity for creativity.

There will certainly always be cases in which the degradation of raw data limits the creation of a service. This is the case for more personalized services. But perhaps anonymity should be seen not as a binary value, but as a gradient. Instead of seeing anonymity as a characteristic that is present or absent from a dataset, wouldn’t it be more appropriate to offer several degrees of anonymity, depending on the exposure of the dataset and the control over its use? That is what the CNIL proposed in the conclusion of the Cabanon project. The data could be publicly accessible in fully anonymized form. In addition, the same dataset could be accessible in progressively less anonymized versions, in exchange for a more significant level of control over its use.

[box type=”info” align=”” class=”” width=””]

TeraLab, big data service for researchers

Teralab is a big data and artificial intelligence platform serving research, innovation and education. It is led by Institut Mines-Télécom (IMT) and the Group of National Schools of Economics and Statistics (GENES). Teralab was founded in 2014 through a call for projects by the Investments for the Future program called “Cloud Computing and Big Data”. The goal of the platform is to aggregate the demand for software and infrastructure for projects involving large volumes of data. It also offers security and sovereignty, enabling stakeholders to entrust their data to the researchers with confidence. [/box]


Coming soon: new ways to interact with machines

Our electronic and computing devices are becoming smaller, more adapted to our needs, and closer to us physically. From the very first heavy, stationary and complex computers, we have moved on to our smartphones, ever at the ready. What innovations can we next expect? Éric Lecolinet, researcher in human-machine interactions at Télécom ParisTech, answers our questions about this rapidly changing field.

How do you define human-machine interactions?

Human-machine interactions refer to all the interactions between humans and electronic or computing devices, as well as the interactions between humans via these devices. This includes everything from desktop computers and smartphones to airplane cockpits and industrial machines! The study of these interactions is very broad, with applications in virtually every field. It involves developing machines capable of representing data that the user can easily interpret and allowing the user to interact intuitively with this data.

In human-machine interactions, we distinguish between output data, which is sent by the machine to the human, and input data, which is sent from the human to the machine.  In general, the output data is visual, since it is sent via screens, but it can also be auditory or even tactile, using vibrations for example. Input data is generally sent using a keyboard and mouse, but we can also communicate with machines through gestures, voice and touch!

The study of human-machine interactions is a multidisciplinary field. It involves IT disciplines (software engineering, machine learning, signal and image processing), as well as social science disciplines (cognitive psychology, ergonomics, sociology). Finally, design, graphic arts, hardware, and new materials are also very important areas involved in developing new interfaces.

 

How have human-machine interactions changed?

Let’s go back a few years to the 1950s. At that time, computer devices were computing centers: stationary, bulky machines located in specialized laboratories. The humans were the ones who adapted to the computers: you had to learn their language and become an expert in the field if you wanted to interact with them.

The next step was the personal computer, the Macintosh, in 1984, following work by Xerox PARC in the 1970s. What a shock! The computer belonged to you, and it was in your office and home. First the desktop PC was developed, followed by the laptop that you could take with you anywhere: the idea of ownership emerged, and machines became mobile. And finally, these first personal computers were made to facilitate interaction. It was no longer the user’s job to learn the machine’s language. The machine itself facilitated the interaction, particularly with the WIMP (Window, Icon, Menu, Pointer) model, the desktop metaphor.

While we can observe the miniaturization of machines since the 2000s, the true breakthrough came with the iPhone in 2007. This was a new paradigm, which significantly redefined the human-machine interface, making the primary goal that of adapting as much as possible to humans. Radical choices were made: the interface was made entirely tactile, with no physical keyboard, and it featured a high-resolution multi-touch screen, proximity sensors that turned off the screen when the phone was lifted to the user’s ear, and a display that adapted to the way the phone was held.

Machines therefore continue to become smaller, more mobile, and closer to the body, like connected watches and biofeedback devices. In the future, we can imagine having connected jewelry, clothing, and tattoos! And more importantly, our devices are becoming increasingly intelligent and adapted to our needs. Today we no longer learn how to use the machines; the machines adapt to us.

 

There has been a lot of talk in the media lately about vocal interfaces, which could be the next revolution in human-machine interfaces.

This technology is very interesting. A lot of progress is being made and it will become more and more useful. There is certainly a lot of work being carried out on these vocal interfaces, and more services are now available, but, for me, they will never replace the keyboard and mouse. They are not suitable for word processing or digital drawing, for example! They are great for certain specific tasks, like telling your telephone, “find me a movie for tonight at 8 o’clock,” while walking or driving, or for warehouse workers who must give machines instructions without using their hands. Yet the interactional bandwidth, or the amount of information that can be transferred using this method, remains limited. Also, for daily use, confidentiality issues arise: do you really want to speak out loud to your smartphone in the subway or at the office?

 

We also hear a lot of talk about brain-machine interfaces…

This is promising technology, especially for people with severe disabilities. But it is far from being available for use by the general public, in video games for example, which require very fast interaction times. The technology is slow and restrictive. Unless people agree to have electrodes implanted in their brains, they need to wear a net of electrodes on their heads, which must be calibrated and kept from moving, and conductive gel must be applied to improve the electrodes’ effectiveness.

A technological breakthrough could theoretically soon make applications of this technology available for the general public, but I think many other innovations will be on the market before these brain-machine interfaces.

 

What fields of innovation will human-machine interfaces be geared towards?

There are a lot of possibilities, and a wide range of research is currently being carried out on the topic! Many projects are focusing on gestural interactions, for example, and some devices have already appeared on the market. The idea is to use 2D or 3D gestures, and different types of touch and pressure, to interact with a smartphone, computer, TV, etc. At Télécom ParisTech, for example, we have developed a prototype for a smart watch called “Watch it”, which can be controlled using a vocabulary of gestures. This allows you to interact with the device without even looking at it!

https://www.youtube.com/watch?time_continue=10&v=8Q8Feehr0Dc

This project also allowed us to explore the possibilities of interacting with a connected watch, a small object that is difficult to control with our fingers. We thought of using the watch strap as a touch interface for scrolling through the watch’s screen. There will be ongoing development in these small, wearable objects that are so close to our bodies; we could someday have connected jewelry! Researchers are also working on interfaces projected directly onto the skin to interact with these types of small devices.

Tangible interfaces are also an important area for research. The idea is that virtually all the objects in our everyday lives could become interactive, with interactions related to their use: there would be no need to search through different menus, since the object itself would correspond to a specific function. These objects can also change shape (shape-changing interfaces). In this field of research, we have developed Versapen: an augmented, modular pen. It is composed of modules that the user can arrange to create new functions for the object, and each module can be programmed by the user. We therefore have a tangible interface that can be fully customized!

Finally, one of the major revolutions in human-machine interfaces is augmented reality. This technology is recent but is already functional. There are applications everywhere, for example in video games and assistance during maintenance operations. At Télécom ParisTech, we worked in collaboration with EDF to develop augmented reality devices. The idea is to project information onto the control panels of nuclear power plants, in order to guide employees in maintenance operations.

It is very likely that augmented reality, along with virtual and mixed reality, will continue to develop in the coming years. The so-called GAFA companies (Google, Amazon, Facebook, Apple) are investing considerable sums in this area. These technologies have already made huge leaps, and their use is becoming widespread. In my opinion, this will be one of the next key technology areas, just like big data and artificial intelligence today. And as a researcher specialized in human-machine interfaces, I feel it is important to position ourselves in this area!

Read more on I’MTech: What is augmented reality?

[box type=”info” align=”” class=”” width=””]

Social Touch Project: conveying emotions to machines through touch

Tap, rub, stroke… Our touch gestures communicate information about our emotions and social relationships. But what if this became a way to communicate with machines? The Social Touch project, launched in December 2017, seeks to develop a human-machine interface capable of transmitting tactile information via connected devices. Funded by the ANR and the DGA, the project is supported by the LTCI laboratory at Télécom ParisTech, ISIR, the Heudiasyc laboratory and i3, a CNRS mixed research unit that includes Télécom ParisTech, Mines ParisTech and École Polytechnique. “You could send touch messages to contacts, “emotitouches”, which would convey a mood, a feeling,” explains Éric Lecolinet, the project coordinator. “But it could also be used for video games! We want to develop a bracelet that can send heat, cold, puffs of air, vibrations, tactile illusions, enabling a user to communicate via touch with an avatar in a virtual reality environment.”[/box]


eOdyn: technological breakthrough in the observation of ocean surface currents

Existing methods for measuring ocean surface currents are expensive, difficult to implement and limited in the amount of information they can gather. The solution proposed by the eOdyn startup, based on the algorithmic analysis of maritime traffic data, represents a real technological breakthrough. It is very affordable and more effective, enabling the real-time and delayed observation of marine currents across the entire surface of the globe. Using this technology, eOdyn is developing many different services for those involved in maritime transport, from the offshore oil industry to sea rescue and research on climate change. Incubated at IMT Atlantique, the startup’s customers and partners include CMA CGM, Airbus Defence and Space, the European Space Agency and IFREMER.

 

 

Two main solutions are used to measure marine currents on the high seas. The first is to throw drifting buoys equipped with GPS into the sea and track their travel. This historical technique is still just as effective, yet it is costly and difficult to implement. It requires the buoys to be spread throughout the ocean in a homogeneous manner, and the batteries of the drifting sensors must be regularly changed. The second method is to measure the ocean currents using the six altimetry satellites that are currently in orbit. At best, when the six satellites are located above the ocean, and not above the continents, they can obtain six measurements of the water’s surface at a given time, which can be used to deduce the presence and direction of the currents. This technique also requires considerable financial means, as evidenced by the €1.2 billion price tag on the next project involving the development, launch and three-year operation of the new-generation altimetry satellite known as SWOT.

The eOdyn startup now proposes a simple and inexpensive solution based on the digital analysis of open data, including AIS (Automatic Identification System) data, to measure the currents in real time and in delayed mode. This data allows ships to be used as sensors collecting information on the currents. Considering that approximately 100,000 ships are sailing around the globe at any given time, this represents 100,000 measurement points, compared to the six points currently provided by satellites.

 

A simple, affordable and complete solution for observing marine currents

Each ship emits an AIS message every ten seconds. This message contains information on the vessel itself, its position and its path. All data is collected by an international network of receivers and antennas installed along the coastlines or on satellites in low Earth orbit. These AIS messages were initially designed and used as a maritime security system for preventing collisions. “It was necessary to create an open system that allows for exchanges of unencrypted information between vessels, so that they can see each other,” says Yann Guichoux, the startup’s founder.

eOdyn collects and analyzes this AIS data and feeds it to an algorithm capable of analyzing each vessel’s path in different navigation conditions and producing a model of its hydrodynamic behavior. Based on the vessel’s movement in relation to its planned path, the algorithm deduces the direction and intensity of the current affecting it. “The algorithm needs a significant amount of data to function,” explains Yann Guichoux. “This is where the concepts of big data and machine learning come into play. For the algorithm, there is a learning phase for each vessel that is analyzed.”
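eOdyn’s algorithm is proprietary, but the kinematic intuition behind it can be sketched: if a ship’s velocity over ground differs from the velocity expected from its heading and speed through the water, the residual drift can be attributed to the surface current. The toy example below assumes idealized AIS-like fields and a known speed through water; in reality that quantity has to be learned per vessel, which is precisely the learning phase described above.

```python
import math
from dataclasses import dataclass

@dataclass
class AisFix:
    """A simplified AIS-like report (real messages carry many more fields)."""
    sog_knots: float      # speed over ground
    cog_deg: float        # course over ground
    heading_deg: float    # heading of the hull
    stw_knots: float      # speed through water, assumed known from the vessel model

def current_estimate(fix: AisFix) -> tuple[float, float]:
    """Very rough current estimate: ground velocity minus velocity through water.

    Returns (east, north) components in knots. This is only the intuition;
    it is not eOdyn's algorithm.
    """
    # Velocity over ground, from speed and course over ground.
    vg_e = fix.sog_knots * math.sin(math.radians(fix.cog_deg))
    vg_n = fix.sog_knots * math.cos(math.radians(fix.cog_deg))
    # Velocity through the water, from heading and speed through water.
    vw_e = fix.stw_knots * math.sin(math.radians(fix.heading_deg))
    vw_n = fix.stw_knots * math.cos(math.radians(fix.heading_deg))
    # The residual drift is attributed to the surface current.
    return vg_e - vw_e, vg_n - vw_n

# Example: a ship heading due north at 12 kn through the water but tracking 14 kn
# over ground on a 010° course is being pushed east and north by the current.
print(current_estimate(AisFix(sog_knots=14, cog_deg=10, heading_deg=0, stw_knots=12)))
```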

Read more on I’MTech: What is big data?

In addition to being inexpensive, the solution proposed by eOdyn offers more comprehensive data than the altimetry satellites: “Altimetry measurement is limited, because it only obtains information on the current that is perpendicular to the satellite track,” Yann Guichoux explains. “The information provided only pertains to a geostrophic current, a theoretical current. The actual current includes this geostrophic current, but also the tidal current and the wind-driven current, which eOdyn replicates.”

 

Fuel economy, sea rescue and climate research

“At first, our business model was to sell the data we obtained. Now, we are progressively moving towards providing value-added services to various sectors,” Yann Guichoux explains. In the field of maritime transport, rather than selling the data directly to companies that do not know how to process and use it, the startup will propose optimal navigation routes that allow ships to take advantage of favorable currents and save on fuel. Furthermore, a monitoring system is being developed for offshore oil companies. It will alert them in real time to the presence of eddies that could potentially disrupt drilling operations, cause material damage and release pollutants into the ocean. Yann Guichoux also plans to develop a drift prediction tool for sea rescue, which will provide an estimation of the location of a person who has drifted out to sea in order to help guide search and rescue operations. Finally, the startup is also interested in providing data for research on climate change, for example to ascertain the slowdown of the Gulf Stream current.

But eOdyn won’t stop there! Using the same algorithmic basis, modified with significant variations, the startup is working on new projects for measuring swell and wind, which will come out in 2018. “A ship is a moving object in the water, subject to the constraints of the currents, swells and waves. When we look at the data and its analysis, we gain an overview of these three parameters,” Yann Guichoux explains. With the development of new tools based on the observation of these phenomena comes the promise of new fields of application waiting to be discovered.



What are the latest innovations in additive manufacturing?

Although additive manufacturing is already fully integrated into industrial processes, it is continuing to develop thanks to new advances in technology. The Additive Manufacturing Tech’Day, co-organized by IMT Mines Alès and Materiautech – Allizé-Plasturgie, brought together manufacturers and industry stakeholders for a look at new developments in equipment and material. José-Marie Lopez Cuesta, Director of the Materials Center at IMT Mines Alès, spoke with us about this event and the latest innovations in 3D printing.

 

What were the objectives of the Additive Manufacturing Tech’Day?

This event, which brought together nearly ninety people, was co-organized by IMT Mines Alès and Materiautech, a network of institutions that organizes educational, technological and business activities on different plastic materials and processes for manufacturers and students. This provided an opportunity for several industry stakeholders to present their new developments in materials, tools and software through a series of conferences and demonstrations.

For us as researchers, the main objective of this tech day was to present our strategy in this area and build partnerships, particularly with manufacturers, with the aim of initiating projects.

 

What research projects are you currently working on in the area of additive manufacturing?

We have had the machines in the laboratory for a little over a year now, and we are beginning to launch projects. We just started a project focused primarily on engineering, for manufacturing an orthopedic brace, a medical corset. We also have a project in the initial development stages on SLS (Selective Laser Sintering) additive manufacturing technology, in partnership with a company based in Alès, and with potential funding from the region.

 

Has industry successfully taken advantage of 3D printing technologies?

Yes, absolutely. Today, 3D printing is seen as one of the major advanced manufacturing technologies.  It is developing very quickly, with the emergence of new machines and new materials. As a laboratory, we want to be a part of this development.

For manufacturers, the goal is to develop new products with original shapes that could not be formed using traditional processes, while ensuring that they are durable and possess the mechanical properties required for their use.

Although it was initially used for rapid prototyping, 3D printing is now being used in all industrial sectors, particularly in the aerospace and medical industries, due to the complexity of the parts they produce. In the medical industry, additive manufacturing is used to produce prostheses and orthoses, as well as intracorporeal medical devices such as stents, mesh inserted in the arteries to prevent clogging, and surgical screws. Manufacturing these parts requires the use of biocompatible and approved materials, an aspect mastered by certain companies, which produce these materials as polymer powders or wires adapted to additive manufacturing.

In the aeronautics industry, this technology is used especially for printing very specific parts, for example for satellites. It allows parts, especially metallic parts produced using molding techniques, to be replaced by lighter and more functional 3D-printed parts. These parts are redesigned based on the possibilities offered by additive manufacturing, which means they can be produced using as little material as possible, resulting in lighter parts.

Finally, 3D printing is perfectly adapted to manufacturing complex replacement parts for older devices that are no longer on the market. We are moving towards production means that are increasingly customized and flexible.

 

In additive manufacturing, what are the latest innovations in materials?

Materials are being developed that are increasingly complex. Nano-composites, for example, which are plastic materials containing nanometric particles, offer improved mechanical properties, heat resistance and permeability to gas. New bio-composites are also being developed. These materials are composed of bio-based components and have a lower environmental impact than synthetic polymers. Other new materials present new features, such as fireproofing. We are seeking to enter these areas by building on the expertise already present at the Materials Center of IMT Mines Alès.

 

Beyond new materials, are there any new machines that have introduced significant innovations?

In this field, innovations appear very quickly: new machines are constantly coming onto the market. Some are even able to print several types of materials at the same time, or parts with increasingly complex geometries. We also see greater precision in the components, and improved surface finishes.

In addition, one of the main issues is the speed of execution: enormous progress has been made in printing objects at greater speeds. This progress is what made it possible for 3D printing to expand beyond rapid prototyping and start being used for manufacturing production parts. In the automotive industry, for example, additive manufacturing technologies are in direct competition with other production processes.

Finally, 3D printers are more and more affordable. You can find €2,000 or €3,000 machines on the market. You can easily acquire a 3D printer for home use, or take a sharing-economy approach and share one within an apartment building. Now anyone can manufacture their own parts, and repair or further develop devices.



e-Health companies face challenges in developing business models

The development of technological tools has opened the way for many innovations in the e-health sector. These products and services allow doctors to remotely monitor their patients and help empower dependent persons. Yet the companies that develop and market these solutions find it very difficult to establish viable and sustainable business models. As part of the Better Business Models project, Charlotte Krychowski, a researcher in management at Télécom École de Management, and Myriam Le Goff-Pronost, a researcher in economics at IMT Atlantique, have focused on company case studies to better understand this situation.

 

Connected capsules and blood pressure monitors, platforms for medical consultation by telephone, home automation and remote assistance services for dependent persons… All these innovations are the work of companies in the e-health sector, which has been booming with the development of new technologies. “The innovations in e-health offer real benefits for patients: some of the connected objects, for example, are able to detect when an elderly person falls and alert a doctor,” Myriam Le Goff-Pronost explains. Far from being unnecessary gadgets, these new products and services help establish efficient medical services, while reducing healthcare costs. But despite the quality of the services they offer, companies in the e-health sector face many challenges in establishing viable business models.

The BBM (Better Business Models) project, funded by the ANR, with partners including Myriam Le Goff-Pronost (IMT Atlantique), Charlotte Krychowski (Télécom École de Management), Université de Lille, Université Savoie Mont Blanc and Grenoble École de Management, focuses on the challenges companies face in establishing business models in the areas of e-health and video games. “These two industries were chosen because digital technology plays a predominant role in both, and in France there is a dynamic group of companies in these sectors. Along with twenty researchers from other schools, Charlotte and I have worked on e-health companies,” Myriam Le Goff-Pronost explains. The researchers worked on case studies to understand how business models have developed in these sectors. “We studied businesses that were very different in terms of their size and activities, but also in terms of their success, so that we could study an eclectic and representative panel,” says Myriam Le Goff-Pronost. “This has required a lot of work through regular meetings and interviews with business leaders in order to understand their decisions in terms of their business model and the way these models have developed.” Unfortunately, for now, none of the e-health companies they studied have succeeded in generating profits.

 

Two different worlds with different problems

While all the companies studied experienced economic difficulties, they faced different challenges in establishing a sustainable business model. Myriam Le Goff-Pronost and Charlotte Krychowski observed that the companies could be divided into two distinct groups: the well-being world, geared toward the general public, and the medical world, which offers medical devices.

From connected scales to wristbands that track activity, the “well-being” products are usually sold directly to the general public. “The main difficulty for the “well-being” companies is that, often, they find themselves competing with big American manufacturers, and it is hard to make their product stand out,” explains Charlotte Krychowski. “Not to mention that they are adversely affected by the ban on marketing health data.”

In the medical world, because of how difficult it is to obtain marketing authorizations, and because of the health system’s structural problems, it takes a long time to reach the break-even point. While waiting to reach this break-even point in the health sector, Bodycap, which offers a connected capsule for measuring body temperature in real time, turned to veterinary medicine and top-level sports to survive. Yet there are many possible applications for human health: monitoring a patient during a lengthy surgical operation, post-surgery follow-up after the return home, monitoring patients confined in sterile rooms, etc. “To survive, companies are turning to sectors where regulation is much more flexible, with no need for marketing authorizations! And in top-level sports, prices can be very elastic,” Charlotte Krychowski explains.

Finally, there is a reason it is so complicated for companies offering medical services and devices to establish business models in the e-health sector: the patient, who would benefit from the service, is not the one who pays. Social security, complementary health insurance organizations and EHPAD (residential homes for dependent older people) are just a few examples of the intermediaries that complicate the process of establishing cost-effective and sustainable business models. The Médecin Direct platform, which offers medical consultations by telephone, has chosen to build partnerships with insurance providers to establish a viable business model: the insurer offers the service and pays the company. “The State’s validation of their remote medical consultation activity has enabled them to write prescriptions remotely, which really helped them economically,” Myriam Le Goff-Pronost explains. “Still, the company is not yet generating profit…”

 

Structural problems to resolve

“Although these companies are struggling to find a suitable business model, this does not mean they are not doing well or have made bad choices,” says Charlotte Krychowski. “For most of them, it will take years to become profitable, because the viability of their business model depends on the long-term resolution of structural problems in France’s health sector.” While innovations in e-health help with prevention, hospitals and doctors are paid on a fee-for-service basis, for example for a consultation or an operation. “Currently, whether a patient is doing well or poorly after an operation has no impact whatsoever on the hospital. And what’s more, if the person must be re-hospitalized due to complications, the hospital earns more money! The pay received should be higher if the operation goes well and the post-surgery follow-up is carried out properly,” says Charlotte Krychowski. In her opinion, our health system will have to transition to a flat-rate fee for each patient receiving follow-up care in order to integrate e-health innovations and provide companies in the sector with a favorable environment for their economic development. It will also be necessary to train caregivers to use digital tools, since they will increasingly need to provide follow-up care using connected devices.

Furthermore, other legislative barriers hinder the success of e-health companies and the development of their innovations, such as marketing authorization procedures. “The companies studied that produce medical devices are required to conduct clinical trials that are extremely long in relation to the speed of technological developments,” says Charlotte Krychowski. Finally, current legislation prohibits the marketing of sensitive health data, which deprives companies of an economic lever.

The difficulties encountered by all the companies in the study led the two researchers to compile the case studies and their findings in a book that is currently in progress, which will give business leaders keys to establishing their business models. According to Myriam Le Goff-Pronost, this work must be continued in order to produce specific recommendations to help e-health entrepreneurs break into this complicated market.



Software: A key to the industry of the future

On January 30 and 31, 2018 in Nantes, the aLIFE symposium focused on the software industry’s contribution to the industry of the future. It was organized by IMT Atlantique and aimed to bring together manufacturers and researchers in order to target shared problems and respond to future national and European calls for projects: cloud manufacturing, data protection, smart factories, etc. Hélène Coullon, Guillaume Massonnet and Hugo Bruneliere, researchers at IMT Atlantique and co-organizers of the symposium, answered our questions about this event and the issues surrounding the industry of the future.

 

What were the objectives of the aLIFE symposium?

Hélène Coullon – The objective of this symposium was to hold a meeting bringing together researchers from IMT Atlantique, other academic players such as the Technical University of Munich or Polytechnique Montreal, and manufacturers like Dassault Systèmes and Airbus, to focus on the theme of the industry of the future, and more specifically on the contribution of the software industry to the industry of the future.

Guillaume Massonnet – We were also seeking to adopt a coherent and constructive approach to connecting the research we are conducting with the needs of industry, and to determine which challenges we should respond to today. Finally, we wanted to form a consortium of stakeholders from the industrial and academic worlds to respond to European and national calls for projects.

What themes were addressed?

HC – The main themes included smart factories, cloud manufacturing, which is related to cloud computing, the modeling of processes, resources and data (physical and software), and the related optimization issues.

Hugo Bruneliere – On the one hand, we are inspired by software approaches that can be applied to the context of industrial systems, which include a significant physical aspect, and on the other hand, there is the question of how to position and use the software within these new industrial processes. These two aspects are complementary, but they can be addressed independently. This is a relatively new area. A great deal of research has been carried out on the topic, and initiatives are beginning to emerge, but there is still much work to do.

What is cloud manufacturing?

HC – Cloud computing allows IT resources to be rented "on demand": processors, data storage, software resources, etc. Cloud manufacturing transposes these cloud computing concepts from IT resources to industrial resources. In other words, cloud manufacturing makes it possible to move towards "on-demand" production.

For example, we can imagine a user making a production request using an online platform. Via the cloud, this platform would distribute the tasks to be performed using different means of production, located in different geographical places.
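As a purely illustrative sketch of this idea, the toy dispatcher below assigns the tasks of an order to the first production unit that has the required capability and enough free capacity. Unit names, sites and capabilities are invented for the example; a real cloud manufacturing platform would of course also handle scheduling, pricing and logistics.

```python
from dataclasses import dataclass

@dataclass
class ProductionUnit:
    name: str
    site: str
    capabilities: set[str]   # e.g. {"milling", "3d_printing"}
    free_hours: float

def allocate(order_tasks: list[tuple[str, float]],
             units: list[ProductionUnit]) -> dict[str, str]:
    """Naive dispatcher: give each (task, hours) of an order to the first unit
    that can do it and still has enough free capacity."""
    plan = {}
    for task, hours in order_tasks:
        for unit in units:
            if task in unit.capabilities and unit.free_hours >= hours:
                unit.free_hours -= hours
                plan[task] = f"{unit.name} ({unit.site})"
                break
        else:
            plan[task] = "no capacity available"
    return plan

# Hypothetical order: a machined housing plus a 3D-printed cover, dispatched
# across units located at different sites.
units = [
    ProductionUnit("CNC-01", "Nantes", {"milling"}, free_hours=6),
    ProductionUnit("AM-02", "Munich", {"3d_printing"}, free_hours=10),
]
print(allocate([("milling", 4.0), ("3d_printing", 2.5)], units))
```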

What can cloud manufacturing offer manufacturers?

HB – It allows them to render their production units more profitable. Large companies have machines they have invested in, and they want to operate them as much as possible to make them profitable. If they do not use them continuously, they can make the unused production capabilities available to startups that do not have the means to invest in these machines. This allows large companies to have a better return on investment and prevents smaller companies from having to invest in expensive equipment.

We can also imagine a new way of producing for individuals, no longer by mass, but on demand, with the possibility of greater product customization.

How can this software contribute to data security?

HC – Industrial data is sensitive by definition. Yet in the context of the industry of the future, with distributed production, the data will travel through external networks and be stored on remote servers. It will therefore potentially be exposed to attacks. We must secure the entire path taken by the data, using cryptography, for example, among many other techniques.

What are smart factories?

GM – A smart factory is a factory in which the various means of production are automated, intelligent, and able to communicate with each other. This raises issues related to the size of the data flows: big data issues. We must therefore take this information into account and integrate it into production decisions and their optimization.

The new modes of production break with traditional practices, in which production chains were dedicated to specific, mass-produced products. Today, new machines have become reconfigurable, and the same production lines are used for several types of products. Therefore, there is a move towards an industry that increasingly seeks to customize its production.

And these changes will take place through the development of specific software architectures?

HB – Through the aLIFE symposium, we wish to show that the contribution of software is necessary in responding to the problems facing the industry of the future. We have significant experience in software in our laboratory, and we intend to build on this expertise to show that we can provide the industry with solutions.


Soft Landing: A partnership between European incubators for developing international innovation

How can European startups be encouraged to reach beyond their countries’ borders to develop internationally? How can they come together to form new collaborations? The Soft Landing project, in which business incubator IMT Starter is participating, allows growing startups and SMEs to discover the ecosystems of different European incubators. The goal is to offer them support in developing their business internationally.

 

“Europe certainly acknowledges the importance of each country developing its own ecosystem of startups and SMEs, yet each ecosystem is developing independently,” explains Augustin Radu, business manager at IMT Starter. The Soft Landing project, which receives funding from the European Union’s Horizon 2020 program, seeks to find a solution to this problem. “The objective is, on the one hand, to promote exchanges between the different startup and SME ecosystems, and on the other hand, to provide these companies with a more global vision of the European market beyond their borders,” he explains.

Soft Landing resulted from collaboration between five European incubators: Startup Division in Lithuania, Crosspring Lab in the Netherlands, GTEC in Germany,  F6S Network in the UK, and IMT Starter, the incubator run by Télécom SudParis and Télécom École de Management in Évry, France. As part of the project, each of these stakeholders must first discover the startup and SME ecosystems developing in their partners’ countries. Next, interested startups that see a need for this support will be able to temporarily join an incubator abroad, for a limited period.

 

Discovering each country’s unique characteristics

Over the course of the two-year project, representatives from each country will visit partner incubators to discover and learn about the startup ecosystem that is developing there. The representatives are also seeking to identify specific characteristics, skills, and potential markets in each country that could interest startups in their own country. “Each country has its specific areas of interest: the Germans work a lot on the theme of industry, whereas in the Netherlands and Lithuania, the projects are more focused on FinTech,” Augustin Radu adds. “At IMT Starter, we are more focused on information technologies.”

Once they have completed these discovery missions, the representatives will return to their countries’ startups to present the potential opportunities. “At IMT Starter, we have planned a mission in Germany in March, another in the Netherlands in April, in May we will host a foreign representative, and in June we will go to Lithuania,” Augustin Radu explains. “There may be other missions outside the European Union as well, in Silicon Valley and in India.”

 

Hosting foreign startups in the incubators

Once each incubator’s specific characteristics and possibilities have been defined, the startups can request to be hosted by a partner ecosystem for a limited period. “As an incubator, we will host startups that will benefit from our customized support,” says Augustin Radu. “They will be able to move into our offices, take advantage of our network of industrial partners, and work with our researchers and laboratories. The goal is to help them find talent to help grow their businesses.”

“Of course, there is a selection process for startups that want to join an incubator,” the business manager adds. “What are their specific needs? Does this match the host country’s areas of specialization?” In addition, the startup or SME should ideally have an advanced level of maturity, be well rooted in its country of origin and have a product that is already finalized. According to Augustin Radu, these are the prerequisites for a company to benefit from this opportunity to continue its development abroad.

 

Removing the barriers between startups and research

“While all four of the partner structures are radically different, they are all very well-rooted in their respective countries,” the business manager explains. IMT Starter is in fact the only incubator participating in this project that is connected to a higher education and research institution, IMT, a factor that Augustin Radu believes will greatly enhance the French incubator’s visibility.

In addition to fostering the development of startups abroad, the Soft Landing project also removes barriers between companies and the research community by proposing that researchers at schools associated with IMT Starter form partnerships with the young foreign companies. “Before this initiative, it was difficult to imagine a French researcher working with a German startup! Whereas today, if a young European startup joins our incubator because it needs our expertise, it can easily work with our laboratories.”

The project therefore represents a means of accelerating the development of innovation, both by building bridges between the research community and the startup ecosystem, as well as by pushing young European companies to seek an international presence. “For those of us in the field of information technology, if we don’t think globally we won’t get anywhere!” Augustin Radu exclaims. “When I see that in San Francisco, companies immediately think about exporting outside the USA, I know our French and European startups need to do the same thing!” This is a need the Soft Landing project seeks to fulfill by broadening the spectrum of possibilities for European startups. This could allow innovations produced in the Old World to receive the international attention they deserve.


Xenon instruments for long-term experiments

From the ancient gnomon that measured the sun’s height, to the Compton gamma ray observatory, the microscope, and large-scale accelerators, scientific instruments are researchers’ allies, enabling them to make observations at the smallest and largest scales. Used for both basic and applied research, they help test hypotheses and push back the boundaries of human knowledge. At IMT Atlantique, researcher Dominique Thers, motivated by the development of instruments, has become a leading expert on xenon technologies, which are used in the search for dark matter as well as in the medical field.

 

The search for observable matter

Detecting dark matter for the first time is currently one of science’s major challenges. “It would be a little like radioactivity at the end of the 19th century, which disrupted the Maxwell-Boltzmann equations,” Dominique Thers explains. The velocity measurements of seven galaxies carried out by Swiss astronomer Fritz Zwicky in 1933, which contradicted the known mass of these galaxies, led to the hypothesis that there exists a type of matter that is unobservable with the currently available means and that represents 27% of the content of the universe. “Unobservable” means that the particles that form this matter interact with traditional baryonic particles (protons, neutrons, etc.) in a very unusual manner. To detect them, the probability of this type of interaction occurring must be radically increased, and there must be a way of fully ensuring that no false event can trigger the alert.

In this race to be the first to detect dark matter, developing more powerful instruments is paramount. “The physics of particle detectors is a discipline that has become increasingly complex,” he explains. “It is not sufficiently developed in France and around the world, and it currently requires significant financial resources, which are difficult to obtain.” China and the United States have greatly invested in this area, and Germany is the most generous contributor, but there are currently few French teams: “It is a very tense context.” Currently, the most sensitive detector for hunting down dark matter is located in Italy, where it was built under the mountain of Gran Sasso for the XENON1T experiment. Detection is based on the hope of an interaction between a particle of dark matter and one of the xenon atoms, which in this experiment are in liquid form. The energy deposited by this type of interaction generates two different phenomena – scintillation and ionization – which are observable and can be used to distinguish a signal from the background. 150 people from 25 international teams are working together on this experiment at the largest underground laboratory in the world.

Research that spans generations in this way must be justified. “Society asks what the purpose is of observing the nature of dark matter. We may only find the answer 25 years from now,” the researcher explains. Dark matter represents enormous potential: five times more prevalent than ordinary matter, it is a colossal reservoir of energy. The field has developed greatly in 30 years, and xenon has opened a new area of research, with prospects for the next 20 years. Dominique Thers is participating in European reflection on experiments for 2025, with the goal of achieving precise observations at lower ranges.

Xenon, a rare, expensive and precious gas

While the xenon used for this experiment possesses remarkable properties (density, no radioactive isotopes), it is unfortunately a raw material that is rare and cannot be manufactured. It is extracted by distilling air in its liquid phase, using a very costly process. Xenon is indeed present in the air at 0.1 ppm (parts per million), or “a tennis ball of xenon gas in the volume of a hot air balloon,” Dominique Thers explains, or “one ton of xenon from 2,000,000 tons of liquid oxygen”.

The French company Air Liquide is the global leader in the distribution of xenon. The gas is used to create high-intensity lights, as a propellant for space travel and as an anesthetic. It is their diamond, “in a luxury market subject to speculation.” And luxury products require luxury instruments. For those created by the researcher’s team, xenon is used in the purest form possible. “The objective is to have less than one ppb (part per billion) of oxygen in the liquid xenon,” the scientist explains. This is made possible by a closed-circuit purification system that continuously cleans the equipment, in particular removing the impurities released by the walls.

 


The technology of xenon in the form of cryogenic liquid is reserved for the experts. Dominique Thers’ team has patented expertise in storing, distributing and recovering ultra-pure liquid xenon.

 

In a measurement experiment like the one for dark matter, there is zero tolerance for radioactive background noise. “Krypton is one of xenon’s natural contaminants; it is in fact the original source of xenon, since cryogenic distillation first yields a mixture of 94% krypton and 6% xenon,” the researcher explains. However, the isotope krypton-85 is created by human activities. In the XENON1T experiment, we start with a few ppm (parts per million) of natural krypton present in the xenon, which is far too much. “All the types of steel used in the instrument are selected and measured before they come into contact with the liquid xenon,” the researcher adds, explaining that in this instance they obtained the lowest background-noise measurement for an experimental device.

The first promising results will be published in early 2018, and the next stages are already taking shape. The experiment that will start in 2019, XENONnT, which will use 60% of the equipment from XENON1T, aims to achieve even greater precision. Competition is fierce with the LZ teams in the USA and PandaX teams in China. “We can’t let anyone get ahead of us in this complicated quest in which, for the first time, we want to observe something new,” Dominique Thers emphasizes. He estimates that, all told, 50 to 100 tons of extra-pure xenon will be needed to refute the possible presence of observable dark matter or, on the contrary, measure its mass, describe its properties and identify possible applications.

Xenon cameras in oncology

When working with this type of trans-generational research, parallel research within shorter time frames must be carried out. This is especially true in France, where the research structure makes it difficult to fully commit to instrumentation activities.  Budgets for funding applied research are hard to come by, and researchers also devote time to teaching activities. It would be a shame if so much expertise developed over time failed to make a groundbreaking discovery due to a lack of funds or time.

To avoid this fate, Dominique Thers and his team have succeeded in creating a virtuous circle. “We’ve been quite lucky,” the researcher says with a smile. “We have been able to develop local activities with a community that also needs to make advancements that could be made possible through medical imaging using liquid xenon.” At the university hospital (CHU) in Nantes there is a leading team of specialists in cancer therapy and engineering who understand the advantages xenon cameras represent. The cancer specialists’ objective is to provide patients with better support, and better understand each patient’s response to treatment. In the context of an ongoing State-Regional Planning Contract (CPER), the scientist convinced them to invest in this technology, “because with a new instrument, anything is possible.”

The current PET (positron-emission tomography) imaging techniques use solid-state cameras that require rare-earth elements, and only a dozen patients a day are effectively screened using this technology. Xenon cameras use Compton imaging, the only technique that can trace the trajectory of a single photon, and rely on triangulation methods to localize in 3D the areas where the drug has accumulated. The level of precision is therefore improved, opening the way to treating more patients daily or monitoring the progress of treatment more regularly. The installation at the CHU in Nantes is scheduled for 2018, initially for tests on animals before 2020. This should convince manufacturers to make a camera adapted to imaging the entire human body, which would also undoubtedly require several million euros of investment, but this time with a potential market of several billion euros.

Just like the wave–particle duality so treasured by physicists, Dominique Thers and his team have two simultaneous facets. “They could have a short-term impact on society, while at the same time opening new perspectives in our understanding of the universe,” the scientist explains.

[author title=”Pushing the limits of nature” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/02/Portrait_réduit.jpg”]Dominique Thers believes he “fell into research accidentally”. As someone who enjoyed the hard sciences and mathematics, he met researchers during an internship in astronomy and particle physics. “The human side convinced me to give it a try,” and he began working on a thesis with Georges Charpak, which he defended in 2000. He joined IMT Atlantique (formerly Mines Nantes) in 2001, and since 2009 he has been in charge of the Xenon team within the Subatech department, a Mixed Research Unit (UMR) with the University of Nantes and the CNRS. This mixed aspect is also reflected in the cultural diversity of the PhD students and postdoctoral researchers who come from all over the globe. The researcher’s motivation is whole-hearted: “It’s wonderful to be exposed to the limits of nature. Nature prevented us from going any further, and our instruments are going to allow us to cross this border.” Young researchers exposed to this scientific culture perceive these limits and are drawn to the leading teams in the field. Dominique Thers is also an entrepreneur: in 2012, with three PhD students, he founded the AI4R startup, which specializes in medical instrumentation.[/author]

physical rehabilitation

A robot and supervised learning algorithms for physical rehabilitation

What if robotics and machine learning could help ease your back pain? The KERAAL project, led by IMT Atlantique, is working to design a humanoid robot that could help patients with lower back pain do their rehabilitation exercises at home. Thanks to supervised learning algorithms, the robot can show the patient the right movements and correct their errors in real time.

 

Lower back pain, a pathology primarily caused by aging and a lack of physical activity, affects a large majority of the population. To treat this pain, patients need rehabilitation from a physical therapist, and they must perform the prescribed exercises on their own on a daily basis. For most patients, this second step is not carried out very diligently, leading to real consequences for their health. How can they receive personalized assistance and stay motivated to perform the rehabilitation exercises long-term?

The KERAAL project, funded by the European Union under the ECHORD++ programme and led by IMT Atlantique in partnership with Génération Robots and the CHRU de Brest, has developed a humanoid robot capable of showing physical therapy exercises to a patient and correcting the patient’s errors in real time. The researchers used a co-design approach, working with physical therapists and psychologists to define the most relevant exercises to implement, to specify the robot’s gestures and verbal instructions as precisely as possible, and to study how the robot is received by patients and therapists.

“With a coach at home, patients have a physical presence and moral and emotional support that encourages them to correctly pursue their rehabilitation,” explains Mai Nguyen, project coordinator and researcher at the Computer Science Department at IMT Atlantique. “The robot offers a way to monitor the patient’s performance of daily repetitive exercises, a task that is tiresome for therapists. At the same time, it prevents patients from having to make daily trips to the rehabilitation center.”

 

A supervised learning algorithm that corrects the patient’s movements in real time

In 2014, the researchers began tests with the Nao robot, developed by SoftBank Robotics. “We found that Nao did not have enough joints to reproduce the rehabilitation exercises,” Mai Nguyen explains. “This is why we finally chose to work with the Poppy robot, which has a backbone. It can therefore move its back, which is better suited to the treatment of lower back pain.”

The small humanoid robot is equipped with a 3D camera and algorithms capable of extracting the “skeleton” of the person being filmed and detecting their movements. The IMT Atlantique team worked on a supervised learning algorithm capable of analyzing the movement of the patient’s “skeleton” by comparing it with the demonstrations of the exercises previously shown to the robot by the health professional. “The algorithm we are working on will determine the common features between the physical therapist’s different demonstrations, and will identify which variations it must reject,” Mai Nguyen explains. “Some differences in execution are acceptable. For example, if the exercise focuses on the arm muscles, the position of the feet is not important, whereas in a shoulder movement, every detail counts. The same level of precision is not required for every body part at all times.”
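To make the idea concrete, here is a minimal Python sketch of how per-joint tolerances could be derived from a few therapist demonstrations and then used to flag a patient’s deviations: joints that vary freely across the demonstrations get a wide tolerance, joints that stay constant get a tight one. The joint names, angle values and two-standard-deviation threshold are assumptions made for illustration, not the project’s actual algorithm.

```python
import numpy as np

# Hypothetical joint angles (in degrees) recorded from three therapist
# demonstrations of the same exercise pose.
demonstrations = [
    {"left_shoulder": 88.0, "right_shoulder": 91.0, "left_ankle": 12.0},
    {"left_shoulder": 90.5, "right_shoulder": 89.0, "left_ankle": 25.0},
    {"left_shoulder": 89.0, "right_shoulder": 90.0, "left_ankle": 5.0},
]

def learn_tolerances(demos, min_tol=2.0):
    """For each joint, keep the mean angle and a tolerance proportional to the
    spread observed across demonstrations (here, two standard deviations)."""
    model = {}
    for joint in demos[0]:
        values = np.array([d[joint] for d in demos])
        model[joint] = (values.mean(), max(min_tol, 2.0 * values.std()))
    return model

def check_patient(pose, model):
    """Return the joints whose angle falls outside the learned tolerance."""
    return {joint: abs(pose[joint] - mean)
            for joint, (mean, tol) in model.items()
            if abs(pose[joint] - mean) > tol}

model = learn_tolerances(demonstrations)
patient_pose = {"left_shoulder": 70.0, "right_shoulder": 90.0, "left_ankle": 30.0}
print(check_patient(patient_pose, model))  # flags only the left shoulder
```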

The robot has a list of common errors in the physical therapy movements that have been previously identified and are associated with specific instructions for the patient. If the robot detects an error in the execution of the exercise, it will be able to verbally communicate with the patient while performing the correct movement. “Our goal was to create an interactive system that could respond to the patient in real time, without requesting the physical therapist’s assistance,” Mai Nguyen explains. “The data from the robot could then be used by the caregivers for more thorough follow-up.”
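The correction loop can be pictured as a simple lookup from a detected error to a pre-recorded verbal instruction, followed by a replay of the reference movement. The sketch below is purely illustrative: the error labels, messages and stand-in functions are invented for the example and do not come from the project.

```python
# Hypothetical catalogue of common execution errors and the instruction
# the robot would speak when it detects each one.
FEEDBACK = {
    "left_shoulder_too_low": "Raise your left arm a little higher.",
    "back_not_straight": "Try to keep your back straight.",
    "movement_too_fast": "Slow down and follow my movement.",
}

def respond_to_errors(detected_errors, say, demonstrate):
    """Speak the instruction associated with each detected error (or a generic
    cue if the error is unknown), then replay the correct movement."""
    for error in detected_errors:
        say(FEEDBACK.get(error, "Watch me and try the movement again."))
        demonstrate()

# Example wiring with stand-ins for text-to-speech and motion playback.
respond_to_errors(
    ["left_shoulder_too_low"],
    say=print,                 # stand-in for the robot's speech output
    demonstrate=lambda: None,  # stand-in for replaying the reference motion
)
```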

 

A friendly and motivating presence

“For the time being, what we have observed is that the system is functional. The initial tests show that the robot is able to perform ongoing monitoring of patients, but we are awaiting the end of the clinical trials to come to a conclusion on the robot’s effectiveness in terms of motivation and rehabilitation, as well as on the patients’ and physical therapists’ experiences,” Mai Nguyen explains.

As part of the co-design approach, Poppy underwent a pre-experiment phase during the initial development of the project with five senior citizens who seldom use technology. Following the robot-mediated exercise sessions, the psychologists interviewed the participants to understand how they perceived the machine and whether they had correctly understood its movements. “Before the session, the subjects were very apprehensive about the idea of working with a robot, but Poppy was perceived very positively and provided a friendly dimension that was much appreciated,” Mai Nguyen explains. “The subjects were very motivated to do their exercises right!” Tests have since been carried out with six patients suffering from lower back pain at the CHRU in Brest and at the Perharidy rehabilitation center.

For the time being, all experiments have been carried out in a hospital setting. But the researchers’ long-term goal is to offer a robot that patients can take home, with a program of personalized exercises set up by the physical therapist. “We are trying to develop a system that is as lightweight as possible, with a single camera as its only sensor,” Mai Nguyen explains. The researchers have also launched a study of the business model, with a view to the potential industrial production of this physical therapy robot.

“The project had very theoretical roots, but its completion is becoming more and more concrete!” Mai Nguyen explains. “We believe that, in a few years, this solution may be available on the market, bringing real advances in patient care.”

 

The work presented here was partially funded by the European project EU FP-7 ECHORD++ KERAAL, by the CPER VITAAL project funded by FEDER, and by UBO’s RoKINter project.