Acklio

Acklio: linking connected objects to the internet

With the phenomenal growth connected objects are experiencing, networks to support them have become a crucial underlying issue. Networks called “LPWAN” provide long-range communication and energy efficiency, making them perfectly suited to the Internet of Things, and are set to become standards. But first, they must be successfully integrated within traditional internet networks. This is precisely the mission of the very promising start-up, Acklio. This start-up developed at IMT Atlantique was a finalist for the Bercy-IMT Innovation Awards and will attend CES 2019 from 8 to 11 January.

 

How many will there be in 2020? 20 billion? 30 billion? Maybe even 80 billion? Although estimates of the number of connected objects that will exist in five years vary by a factor of four depending on which consulting firm or think tank you ask, one thing seems certain: the count will run into the tens of billions. All of these objects must be connected to the internet in order to exchange data with the cloud, our email accounts or smartphone applications.

But connected objects are not like computers: they do not have fiber optic connections, and few of them use WiFi to communicate. The Internet of Things relies on specific radio networks called LPWAN—the best-known examples of which are LoRa and Sigfox. One of the major challenges in deploying the IoT is therefore to successfully ensure rapid, efficient data transfer between LPWAN networks and the internet. This is precisely the aim of Acklio, a start-up founded by two IMT Atlantique researchers: Laurent Toutain and Alexander Pelov.

Alexander Pelov explains why industrial players are interested in LPWAN networks, “Using just 3 AAA batteries, we can now power a connected gas meter that will transmit one message per day for a period of 20 years. These networks are extremely energy-efficient and make it possible to reduce the cost of communications.” From GPS tracking of objects, animals and people to logistics, alarm systems and more, all industries that wish to make use of connected objects will rely on these networks.

For Alexander Pelov, however, this poses a problem. “Depending on whether LoRa or Sigfox technology is chosen to set up the LPWAN network for the connected objects, a different approach will be used. The developers won’t work in exactly the same way, for example,” he explains. This makes it very difficult to scale deployments of connected objects up to new infrastructures or environments. It would also be difficult to ensure fluid data transfer between LPWAN networks and the internet if each network is different. In other words, this represents a major hurdle in the development of the IoT.

To overcome this obstacle, Acklio’s team bridges basic LPWAN protocols and standard internet protocols—like IPv6. Alexander Pelov sums up his start-up’s approach as follows: “We define a generic architecture and add it at the server level, which controls the connected objects. Then, we send messages from these objects to the internet and vice versa via this architecture.” Acklio’s technological building block thus acts as an intermediary in the transmission of data from one environment to another.

It is based on the principle of data compression and fragmentation. The role of the technology is first of all to compress the header in a data packet using a mechanism called SCHC (Static Context Header Compression). This is a crucial step for providing internet connectivity within the LPWAN network. Since compression is sometimes impossible, or may still produce data packets that are too large for the LPWAN network, Acklio also makes it possible to fragment the IPv6 data packets. This two-in-one technology allows developers to work without worrying about which LPWAN technology is used for the IoT application they are developing.
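To make the principle concrete, here is a minimal Python sketch of static-context header compression and fragmentation. It is an illustration only: the rule format, field names and fragment size are simplified assumptions, not Acklio’s implementation or the exact SCHC encoding.

```python
# Minimal sketch of SCHC-style header compression (simplified; not the real encoding).
import json

# Static context shared in advance by the device and the network: each rule lists
# header fields whose values are fixed and can therefore be elided on the radio link.
RULES = {
    1: {"version": 6, "src_prefix": "2001:db8::", "dst_port": 5683},
}

def compress(header):
    """Return the rule ID plus the residue (the fields the rule cannot predict)."""
    for rule_id, known in RULES.items():
        if all(header.get(k) == v for k, v in known.items()):
            residue = {k: v for k, v in header.items() if k not in known}
            return bytes([rule_id]) + json.dumps(residue).encode()
    raise ValueError("no matching rule: send the header uncompressed")

def decompress(packet):
    """Rebuild the full header from the shared rule and the transmitted residue."""
    rule_id, residue = packet[0], json.loads(packet[1:].decode())
    return {**RULES[rule_id], **residue}

header = {"version": 6, "src_prefix": "2001:db8::", "dst_port": 5683, "seq": 42}
packet = compress(header)
assert decompress(packet) == header
print(f"{len(packet)} bytes sent instead of a 40-byte IPv6 header")

# Fragmentation: if the compressed packet is still too large for an LPWAN frame,
# it is split into fragments that the receiving side reassembles.
fragments = [packet[i:i + 12] for i in range(0, len(packet), 12)]
assert b"".join(fragments) == packet
```

The key point is that the full IPv6 header never travels over the radio link: both ends hold the same static context, so a one-byte rule ID is enough to reconstruct it.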

Acklio, an important player in IoT standardization

The young start-up’s work is so promising that it has been commissioned to coordinate efforts to standardize connectivity between LPWAN networks and the internet. Acklio is leading a working group within the IETF—the organization that develops internet standards—which brings together players from the IEEE, the 3GPP telecommunications standards partnership, and the alliances for the standardization of LoRa and Sigfox technologies (including LoRa Alliance members such as Bouygues Telecom and Orange).

In all, more than 200 industry players are represented in the IETF, not counting academic institutions. “It’s an organization where researchers and engineers can talk about operational needs, technical constraints and scientific challenges without engaging in business lobbying,” says Marianne Laurent, Head of Marketing for the start-up. In 2018, the IETF recognized Acklio’s technology as a standard. A sign of the start-up’s success and high-quality work, this recognition has also opened up the technology—and therefore competition—for the young company.

However, Acklio will be able to count on its head start in developing its compression-fragmentation technique. For now, it is still the only one of its kind, and it will enter the market with two products to be presented at CES 2019 in Las Vegas in January. This could be the occasion for the start-up to continue its winning streak, which began with an interest-free loan from the Fondation Mines-Télécom in 2016 and continued with a Best Telecommunication Innovation Award at the Mobile World Congress in March 2018. Most importantly, the American event will also provide an opportunity to find new customers. Acklio is on track to become a shining example of researchers succeeding in the entrepreneurial world and of the direct commercialization of fundamental research in telecommunications.

 

[divider style=”normal” top=”20″ bottom=”20″]

LPWAN: networks suited for connected objects

Alexander Pelov illustrates the performance of LPWAN networks through a use case carried out with the city of Rennes to control its electrical grid. “With only two LPWAN base stations, it is possible to cover 95% of the Rennes urban area.” This high level of performance does come with some drawbacks: the networks are slow and the connected objects can only send a few messages per day. The two base stations support a daily traffic of one hundred 12-byte messages. But the sensors do not usually need to send much information to the server, or to do so quickly. That is why these long-range networks have already become the foundation for communications between connected objects.

[divider style=”normal” top=”20″ bottom=”20″]

 

computer viruses

Hospitals facing a different kind of infection: computer viruses

Hervé Debar, Télécom SudParis – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]W[/dropcap]annaCry was the first case of a cyberattack with a major effect on hospitals. The increasing digitization of hospitals (as in all areas of society) offers significant opportunities for reducing the cost of health care while making the care provided more effective. However, with digitization come cybersecurity challenges, and these threats must be taken into account when implementing e-health solutions.

The hospital: a highly digitized environment

The medical world — and especially hospitals — is a highly digitized environment. This reality first began with management tasks (human resources, room management, planning, etc.) and over the past few years it has grown to include medical equipment (radiology, imaging). Two significant developments have occurred:

  • An increasing number of objects are used in hospitals to collect data or administer medication. This is what is referred to as the Internet of Medical Things (IoMT). The nature of these often-inexpensive objects represents a break with the professional management of conventional medical platforms.
  • More and more of these objects are used outside the hospital, by individuals who are not properly trained to use them. Some of these uncontrolled devices, such as our smartphones, can enter the hospital and interact with medical processes.

From a technical perspective, we are undeniably becoming increasingly dependent on a high-quality digital infrastructure to provide us with quality medical care. This directly affects not just the care provided but also all the related processes (planning, insurance, reimbursement of fees, logistics, etc.). It is particularly difficult to ensure security in these areas, since the conventional development and management technology in information systems is also vulnerable to these attacks. Furthermore, technological advances are based on the increased ability to share, analyze and disseminate information. The number of vulnerabilities is therefore likely to remain high.

From an economic perspective, the rise in healthcare costs is unavoidable. Increased operational efficiency, made possible by computerization, is one of the measures used to prevent costs from rising too high. It is therefore imperative to keep the impacts of cyberattacks in hospital environments to a minimum.

From a legal perspective, the implementation of the European personal data protection regulation (GDPR) and of the NIS directive on the cybersecurity of operators of critical infrastructure is imposing new obligations on everyone.

Hospitals are the perfect example of the use of extremely sensitive data demanding confidentiality, integrity (accuracy) and availability (access) to provide care and ensure medical records are properly managed. A medical record is a compilation of sensitive, correlated information, made up of separate subsets of varying interest.

A poorly protected environment

Over the past few years there have been cyberattacks that have affected hospital operations. We should note that in many cases, hospitals are just one of the targets of these attacks, since many other organizations are also impacted.

WannaCry is a computer worm that exploits a vulnerability in the Windows protocol used to share files and printers. This protocol is used by medical imaging equipment to transfer image files from a scanner to the computers used by doctors to make a diagnosis during patient consultations. When imaging equipment is infected by WannaCry through this network protocol, it becomes inoperable, preventing operations and hence endangering patients’ lives.

More generally, much of the medical equipment in use relies on aging operating systems and outdated protocols. It is therefore crucial that manufacturers of this equipment become aware of this issue.

The effectiveness of a medical procedure increasingly relies on the ability to connect various tools used by medical staff for the purpose of transferring data (images, prescriptions, etc.) and interacting. Therefore, it is not possible to consider isolating these pieces of equipment. More rigorous access controls must therefore be implemented (which is generally a challenge for organizations, as demonstrated in the study by Deloitte called “Future of Cyber”).

An attack on pacemakers

In addition to the Wannacry incident, it is also necessary to reflect on the communications between medical objects and information systems. Several examples have recently demonstrated the vulnerability of medical objects.

Implants, such as insulin pumps and pacemakers, are vulnerable to computer attacks. Communications between these objects are neither encrypted nor authenticated, meaning that they could be listened to for the purpose of extracting sensitive data. This also means they can receive commands allowing them to be controlled, creating all types of imaginable consequences through changes in their operations.

Other routine medical equipment, like infusion pumps, is also vulnerable to attacks.

New attacks in sight

So far, the attacks that have been revealed have had two main consequences. The first is a denial of service, or the inability to use medical equipment when it is needed and all the potential consequences this entails. Since it is difficult to prevent denial of service attacks, measures must be taken to limit their effects.

The second result is the leak of potentially sensitive information. This leak of information involves the risk of data being added to other databases, for example as data sources for the validation of creditworthiness, used by banks in their decisions to grant or refuse bank loans. This would represent a major setback in protecting our personal data.

We do not have any clear examples of data being falsified, which could be the next step taken by attackers. Data falsification could lead to erroneous prescriptions and therefore to drug diversion. This diversion would allow the author of the crime to receive an immediate profit, which fits with current trends.

What are the solutions?

The first solutions that come to mind are technological ones. Such new solutions do indeed exist which could improve computer security in medical environments.

  • Blockchain. This technology can significantly improve data protection by separating the data according to purpose (medical, clerical, insurance, etc.) and by protecting each piece of data individually. It can also log access to manage emergency situations. Current blockchain technology is too energy-intensive, however, and must evolve to become acceptable in this context.
  • Virtualization and cloudification. Outsourcing computer services professionalizes the management of an organization’s digital activities. The scarcity of human resources trained in cybersecurity makes it necessary to rely on external means. The development of cloud services, particularly the concept of a sovereign cloud, must be done in a way that complies with current regulations, particularly the famous GDPR.
  • Security by design. Manufacturers of medical objects, software and platforms must take cybersecurity into account during the design phase of their equipment and integrate it into the equipment’s entire life cycle. This is a major revolution that cannot be carried out in a day. It is therefore necessary to continue protecting older equipment whose initial cost justifies its continued use for decades to come. This is also a revolution for the IT world, which now counts the life span of its software and services in months. While awareness of this issue is growing in the industrial world, it must also increase in the medical world.

All these new forms of technology, and others not mentioned here, will never be effective unless the human factor is first taken into account in the hospital, among caregivers, but also patients and visitors. This remains the key to a successful digital transformation of the hospital.

Medical objects must be adapted to their users, generally patients. Besides gadgets like connected watches, better solutions must be found for all objects to make them simpler and easier to use. Confidence in these objects is fundamental and cybersecurity incidents that could restrict their use must be avoided at all costs.

Finally, the role of medical professionals is absolutely fundamental. They must accept the presence of computer technology and recognize that it can make their work easier on a daily basis rather than representing a hindrance. Medical staff must take an interest in cybersecurity issues, receive training in this area and urge suppliers to develop tools adapted to their needs.

[divider style=”normal” top=”20″ bottom=”20″]

Hervé Debar, Head of the Telecoms Networks and Services Department at Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) has been published on The Conversation.

See all articles by Hervé Debar on I’MTech

ANIMATAS

Robot teaching assistants

The H2020 ANIMATAS project, launched in January 2018 for a four-year period, seeks to introduce robots with social skills in schools to assist teaching staff. Chloé Clavel, a researcher in affective computing at Télécom ParisTech, one of the project’s academic partners, answered our questions on her research in artificial intelligence for the ANIMATAS project.

 

What is the overall focus of the European project ANIMATAS?

The ANIMATAS project focuses on an exciting application of research in artificial intelligence related to human-agent interactions: education in schools. The project also contributes to other work and research being carried out to integrate new technologies into new educational methods for schools.

The project’s objectives are focused on research in affective computing and social robotics. More specifically, the project aims to develop computational models to provide the robots and virtual characters with social skills in the context of interactions with children and teachers in the school setting.

What are the main issues you are facing?

We are working on how we can incorporate robots to allow children to learn a variety of skills, such as computational thinking or social skills. The first issue concerns the robot or virtual character’s role in this learning context and in the pre-existing interactions between children and teachers (for example, counselors, colleagues or partners in the context of a game).

Another important issue relates to the capacity of the underlying computational models responsible for the robots’ behavior to adapt to a variety of situations and different children. The objective is for the robot to be attentive to its environment and remain active in its learning while interacting with children.

Finally, there are significant ethical issues involved in the experiments and the development of computational models in accordance with European recommendations. These issues are handled by the ethics committee.

Who else is involved with you in this project, and what are the important types of collaboration in your research?

We have partners from major European academic laboratories. The majority are researchers in the field of affective computing, but also include researchers in educational technologies from the École Polytechnique Fédérale de Lausanne (EPFL), with whom we are working on the previously mentioned issue of the robot’s role in the learning process.

Three researchers from Télécom ParisTech are involved in this project: Giovanna Varni and myself, from the Images, Data and Signal Department, and Nicolas Rollet, from the Economic and Social Sciences Department.

ANIMATAS requires skills in computer science, linguistics, cognitive sciences and pedagogy… What areas are the researchers from Télécom ParisTech contributing to?

Télécom ParisTech is contributing skills in affective computing and computational linguistics. More specifically, my PhD student, Tanvi Dinkar, and I are working on the automatic analysis of disfluencies (for example, hesitations, unfinished words or sentences) as a sign of a child’s emotions or stress in the learning process, or their level of confidence in their skills (feeling of knowledge) in the context of their interactions with other children, the teacher or the robot.
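As a purely illustrative sketch (the marker list and features below are assumptions for demonstration, not the models actually used in ANIMATAS), counting simple disfluency cues in a transcribed utterance might look like this:

```python
# Toy disfluency counter for illustration only; real systems use far richer
# acoustic and linguistic cues than these hand-picked markers.
FILLED_PAUSES = {"um", "uh", "er", "euh", "hmm"}

def disfluency_features(utterance):
    tokens = [t.strip(".,!?").lower() for t in utterance.split()]
    n = max(len(tokens), 1)
    return {
        # Hesitation markers such as "um" or "uh".
        "filled_pause_rate": sum(t in FILLED_PAUSES for t in tokens) / n,
        # Unfinished words, often transcribed with a trailing hyphen ("fourt-").
        "fragment_rate": sum(t.endswith("-") for t in tokens) / n,
        # Immediate word repetitions ("the the").
        "repetition_rate": sum(a == b for a, b in zip(tokens, tokens[1:])) / n,
    }

print(disfluency_features("um I think the the answer is uh fourt- fourteen"))
```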

How will the results of ANIMATAS be used?

The project is a research training network, and one of its main objectives is to train PhD students (15 for ANIMATAS) in research and to unite work around affective computing and social robotics for education.

The project also includes unfunded partners such as companies in the field of robotics and major US laboratories such as the Institute for Creative Technologies (ICT) at the University of Southern California (USC), which provide us with regular feedback on scientific advances made by ANIMATAS and their industrial potential.

A workshop aimed at promoting ANIMATAS research among industrialists will be organized in September 2020 at Télécom ParisTech.

What is the next step in this project?

The kick-off of the project took place in February 2018. We are currently working with schools and educational partners to define interaction scenarios and learning tasks to collect the data we will use to develop our social interaction models.

Read more on I’MTech:

emotions

From human feelings to digital emotions

Making powerful machines is no longer enough. They must also connect with humans socially and emotionally. This imperative to increase the efficiency of human-computer interactions has created a new field of research: social computing. This field is aimed at understanding, modeling and reproducing human emotions. But how can an emotion be extracted and then reproduced based only on an audio track, a video or a text? This is the complexity of the research Chloé Clavel is working on at Télécom ParisTech.

 

“All those moments will be lost in time, like tears in rain.” We are in Los Angeles in 2019, and Roy Batty utters these words, knowing he has only seconds left to live. Melancholy, sadness, regret… Many different feelings fill this famous scene from Ridley Scott’s 1982 cult film Blade Runner. There would not be anything very surprising about these words, if it were not for the fact that Roy Batty is a replicant: an anthropomorphic machine.

In reality, in 2018, there is little chance of seeing a humanoid robot capable of developing such complex emotions walking the streets next year. Yet there is a trend towards equipping our machines to create emotional and social connections with humans. In 1995, this gave rise to a new field of research called affective computing. Today, it has brought about sub-disciplines such as social computing.

“These fields of research involve two aspects,” explains Chloé Clavel, a researcher in the field at Télécom ParisTech. “The first is the automatic analysis of our social behaviors, interactions and emotions. The second is our work to model these behaviors, simulate them and integrate them into machines.” The objective: promote common ground and produce similarities to engage the user. Human-computer interaction would then become more natural and less frustrating for users, who sometimes regret not having another human to interact with, someone who would better understand their position and desires.

Achieving this result first requires understanding how we communicate our emotions to others. Researchers in affective computing are working to accomplish this by analyzing different modes of human expression. They are interested in the way we share a feeling in writing on the internet, whether it be on blogs, in reviews on websites or on social networks. They are also studying the acoustic content of the emotions we communicate through speech such as pitch, speed and melody of voice, as well as the physical posture we adopt, our facial expressions and gestures.

The transition from signals to behaviors

All this data is communicated through signals such as a series of words, the frequency of a voice or the movement of points in a video. “The difficulty we face is transitioning from this low-level information to rich information related to social and emotional behavior,” explains Chloé Clavel. In other words, what variation in a tone of voice is characteristic of fear? Or what semantic choice in speech reflects satisfaction? This transition is a complex one because it is subjective.

The Télécom ParisTech researcher uses the example of voice analysis to explain this subjectivity criterion. “Each individual has a different way of expressing their social attitudes through speech, therefore large volumes of data must be used to develop models which integrate this diversity.” For example, dominant people generally express themselves with a deeper voice. To verify and model this tendency, multiple recordings are required, and several third parties must validate the various audio excerpts. “The concept of a dominant attitude varies from one person to another. Several annotations are therefore required for the recordings to avoid bias in the interpretation,” Chloé Clavel explains.

The same is true in the analysis of comments on online platforms. The researchers use a corpus of texts annotated by external individuals. “We collect several annotations for a single piece of text data,” the researcher explains. Scientists provide the framework for these annotations using guides based on literature in sociology and psychology. “This helps us ensure the annotations focus on the emotional aspects and makes it easier to reach a consensus from several annotations.” Machine learning methods are then used, without introducing any linguistic expertise into the algorithms first. This provides classifications of emotional signals that are as unbiased as possible, which can be used to identify semantic structures that characterize discontent or satisfaction.
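A minimal sketch of that workflow, with invented comments and labels and scikit-learn assumed available: several annotations per text are merged by majority vote, and a generic classifier is then trained on the consensus labels, with no hand-coded linguistic rules.

```python
# Illustrative only: toy data, majority-vote label aggregation, generic text classifier.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "the assistant never answered my question",
    "quick and helpful reply, thank you",
    "I had to repeat myself three times",
    "problem solved in two minutes",
]
# Three independent annotations per comment, to limit interpretation bias.
annotations = [
    ["neg", "neg", "neg"],
    ["pos", "pos", "neg"],
    ["neg", "neg", "pos"],
    ["pos", "pos", "pos"],
]
labels = [Counter(a).most_common(1)[0][0] for a in annotations]  # majority vote

# Train on the consensus labels; no linguistic expertise is injected beforehand.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)
print(clf.predict(["nobody understood my request"]))
```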

Emotions for mediation

Beyond the binary categorization of an opinion—as positive or negative—one of the researchers’ greatest tasks is to determine the purpose and detailed nature of this opinion. Chloé Clavel led a project on users’ interactions with a chatbot. The goal was to determine the source of a user’s negative criticism: whether it was caused by the chatbot itself being unable to answer the user correctly, by the interaction, for example an unsuitable interface format, or by the user, who might simply be in a bad mood. For this project, involving a virtual assistant for EDF, the semantic details in messages written to the chatbot had to be examined. “For example, the word ‘power’ does not have the same connotation when someone refers to contract power with EDF as it does when used to refer to the graphics power of a video game,” explains Chloé Clavel. “To gain an in-depth understanding of opinions, we must disambiguate each word based on the context.”

Read more on I’MTech: Coming soon: new ways to interact with machines

The chatbot example does not only illustrate the difficulty involved in understanding the nature and context of an opinion; it also offers a good example of the value of this type of research for the end user. If the machine is able to understand the reasons why the human it is interacting with is frustrated, it will have a better chance of adapting to provide its services in the best conditions. If the cause is the user being in a bad mood, the chatbot can respond with a humorous or soothing tone. If the problem is caused by the interaction, the chatbot can determine when it is best to refer the user to a human operator.

Recognizing emotions and the machine’s ability to react in a social manner therefore allow it to play a conciliatory role. This aspect of affective computing is used in the four-year H2020 ANIMATAS project, in which Télécom ParisTech has been involved since 2018. “The goal is to introduce robots in schools to assist teachers and manage the social interactions with students,” Chloé Clavel explains. The idea is to provide robots with social skills to help promote children’s learning. The robot could therefore offer each student personalized assistance during class to support the teacher’s lessons. Far from the imaginary humanoid robot hidden among humans, an educational mediator could improve learning for children.

 

care pathway

When AI helps predict a patient’s care pathway

Researchers at Mines Saint-Étienne are using process mining tools to attempt to describe typical care pathways for patients with a given disease. These models can be used to help doctors predict the next steps for treatment or how a disease will progress.

 

Will doctors soon be able to anticipate patient complications arising from a disease? Will they be able to determine an entire care pathway in advance for patients with a specific disease? These are the goals of Vincent Augusto and his team at Mines Saint-Étienne. “Based on a patient’s treatment records, their condition at a given moment, and care pathways of similar patients, we’re trying to predict what the next steps will be for the patients,” says Hugo De Oliveira, a PhD student in Health Systems Engineering whose CIFRE thesis is funded by HEVA, a company based in Lyon.

Anticipating how a disease will progress and treatment steps helps limit risks to which the patient is exposed. For people with diabetes — the example used by the researchers in their work — the process is based on detecting weak signals that are precursors of complications as early as possible. For a given patient, the analysis would focus on several years of treatment records and a comparison with other diabetic patients. This would make it possible to determine the patient’s risk of developing renal failure or requiring an amputation related to diabetes.

In order to predict these progressions, the researchers do not rely on personal medical data, such as X-rays or biological analyses. They use medico-administrative data from the national health data system (SNDS). “In 2006, activity-based pricing was put into place,” notes Hugo De Oliveira. With this shift in the principle of funding for healthcare institutions, a large database was created to collect the information hospitals need in order to be reimbursed for treatment. “It’s a very useful database for us, because each line collects information about a patient’s stay: age, sex, care received, primary diagnosis, associated pathologies from which they suffer, etc.,” says the young researcher.

An entire pathway in one graph

Vincent Augusto’s team is developing algorithms that analyze these large volumes of data. Patients are sorted and put into groups with similar criteria. Different care pathway categories can then be established, each of which groups together several thousand similar pathways (similar patients, identical complications etc.). In one category — diabetic patients who have undergone amputation for example — the algorithm analyzes all of the steps for the entire group of patients in order to deduce which ones are most characteristic. A graph is produced to represent the typical pathway for this category of patient. It may then be used as a reference to find out whether a patient in the early stages of the disease is following similar steps, and to determine the probability that he/she belongs to this category.
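The idea can be illustrated with a toy “directly-follows” graph, one of the basic building blocks of process mining. The pathways below are invented for demonstration; the real SNDS-based models are far richer.

```python
# Toy example: build a typical care-pathway graph from invented, anonymized pathways.
from collections import Counter

pathways = [
    ["diagnosis", "consultation", "hospital stay", "follow-up"],
    ["diagnosis", "hospital stay", "complication", "follow-up"],
    ["diagnosis", "consultation", "hospital stay", "complication", "follow-up"],
]

# Count each "step A directly followed by step B" transition over all patients.
transitions = Counter()
for steps in pathways:
    transitions.update(zip(steps, steps[1:]))

# Keep the frequent transitions: this is the graph a doctor can read chronologically.
typical = {edge: n for edge, n in transitions.items() if n >= 2}
for (a, b), n in sorted(typical.items(), key=lambda item: -item[1]):
    print(f"{a} -> {b}  (seen in {n} pathways)")
```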

This graph represents the care pathway for patients monitored over an 8-year period who have had a cardiac defibrillator implanted. The part before the implantation can be used to establish statistics for the steps preceding the procedure. The part after the implantation provides information about the future of patients following the implantation.

 

In this way, the researchers are working on developing longitudinal graphs: each treatment step represents a point on the graph, and the whole graph can be read chronologically: “Doctors can read the graph very easily and determine where the patient is situated in the sequence of steps that characterize his/her pathway,” explains Hugo De Oliveira. The difficulty with this type of data representation comes from its comprehensiveness: “We have to find a way to fit an entire patient pathway into a single line,” says the PhD student. In order to do so, the team chose to use process mining, a data mining and knowledge extraction tool. Machine learning is another such tool.

Process mining helps make care pathway descriptions more effective and easier to read, but it also provides another benefit: it is not a ‘black box’. That characteristic is often encountered in neural-network-type algorithms, which are effective at processing data but whose internal workings leading to a result are impossible to understand. Unlike these algorithms, the process mining algorithms used to predict treatment pathways are transparent. “When a patient is characterized by a type of graph, we’re able to understand why by looking at past treatment steps, and by studying the graphs of the patient’s possible categories to understand how the algorithm evaluated the pathway,” says Hugo De Oliveira.

Making artificial intelligence applications more transparent is one of the issues raised by the working group behind the French report on AI led by Cédric Villani. The project is also in keeping with the objectives set by the mathematician and member of the French parliament to facilitate AI experimentation, in healthcare in particular. “Our research directly benefits from policies for opening access to health data,” says the PhD student. This access will continue to open up for the researchers, since later this year they will be able to use the database of the national health insurance cross-scheme system (SNIIRAM): the 1.2 billion healthcare forms contained in the system will be used to improve the algorithms and better identify patient treatment pathways.

 

Canaries were once used in coal mines to forewarn of impending firedamp explosions. This story has inspired a cyberdefense tool: stack canaries.

Stack canaries: overestimating software protection

Android, Windows, Mac, Linux… All operating systems contain stack canaries — one of the most common forms of software protection. These safeguards, which protect computer systems from intrusions, are perceived as very effective. Yet recent research carried out by EURECOM and the Technical University of Munich shows that most stack canaries contain vulnerabilities. The results, obtained through a project led by the German-French Academy for the Industry of the Future, highlight the fragility of computer systems in the context of increasingly digitized organizations.

 

During the 19th century, canaries were used in coal mines to forewarn of impending firedamp explosions. The flammable, odorless gas released by the miners’ activities caused the birds either to lose consciousness or to die. This alerted the workers that something was wrong. Several decades later, in the early 2000s, researchers in cybersecurity were inspired by the story of canaries in coal mines. They invented a simple protection system for detecting software corruption—calling it “stack canary”. Since then, it has become one of the most common protection systems in the software we use and is now present in almost all operating systems. But is it really effective?

Perhaps it seems strange to be asking this question some 20 years after the first stack canaries were used in computer products. “The community assumed that the protection worked,” explains Aurélien Francillon, a researcher in cybersecurity at EURECOM. “There was some research revealing potential vulnerabilities of stack canaries, but without any in-depth investigation into the issue.” Researchers from EURECOM and the Technical University of Munich (TUM) therefore partnered to remedy this lack of knowledge. They assessed the vulnerabilities of stack canaries in 17 different combinations of 6 operating systems, to detect potential defects and determine good practices to remedy the situation. Linux, Windows 10, macOS Sierra and Android 7.0 were all included in the study.

“We showed that, in the majority of operating systems, these countermeasures for detecting defects are not very secure,” Aurélien Francillon explains. 8 of the 17 tested combinations were judged by the researchers to use an ineffective stack canary (see table below). 6 others can be improved, and only the last 3 are beyond reproach. This study of the vulnerabilities of stack canaries, carried out in the context of the Secure Connected Industry of the Future (SeCIF) project, part of the German-French Academy for the Industry of the Future, is linked to the growing digital component of organizations. Industries and companies are increasingly reliant on connected objects and IT processes. Defects in the protection mechanisms of operating systems can therefore endanger companies’ overall security, whether through access to confidential data or by taking control of industrial machinery.

Out of the 17 combinations tested, only Android 7.0 “Nougat”, macOS 10.12.1 “Sierra”, and OpenBSD 6.0 (Unix) had completely secure stack canaries. A red cross means that it is possible to bypass the stack canary in the given combination. An orange cross means that the stack canary’s security can be improved. Table columns are the different memory types from a programming logic standpoint.


The canary in the memory

To understand the impacts of the defects revealed by this research, it is important to first understand why stack canaries are used and how they work. Many attacks that occur are aimed at changing values in a program that are not meant to be changed. The values are stored in memory space. “Let’s say this space has a capacity of 20 bytes,” says Julian Kirsch, a cybersecurity researcher at TUM and co-author of this study. “I would store my name and height on 20 of these bytes. Then, on another space located just behind it, I would store my bank account number. If a hacker wants to corrupt this information, he will add values, for example by adding a number to the value for my height. By doing this, my height data will overflow from the 20-byte space to the space where my bank account number is stored, and the information will no longer be correct. When the program needs to read and use this data, things will go wrong.”

In more complex cases for operating systems, the consequences include more critical errors than the wrong bank account number. To determine whether the information stored in the memory was altered, a known numerical value can be inserted between the storage spaces, as a type of memory buffer. If a hacker adds information, like in Julian Kirsch’s example in which the height was changed, everything will shift, and the value indicated in the memory buffer will change. The stack canary is simply a memory buffer. If the stack canary’s security is compromised, the hacker can modify it and then hide it by resetting it to the initial value.
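The detection principle can be simulated in a few lines. This is a deliberately simplified model: real canaries are placed in native stack frames by the compiler, and the memory layout below is an assumption made purely for illustration.

```python
# Toy model of a stack canary: an 8-byte buffer, then the canary, then data the
# attacker wants to corrupt. Real canaries are inserted by the compiler in C/C++.
import os

def unsafe_copy(memory, data):
    """Copy without bounds checking, as vulnerable C code would."""
    for i, byte in enumerate(data):
        memory[i] = byte          # keeps writing past the 8-byte buffer

frame = bytearray(24)             # [0:8] buffer | [8:16] canary | [16:24] return data
canary = os.urandom(8)            # random value chosen when the frame is created
frame[8:16] = canary

unsafe_copy(frame, b"A" * 20)     # 20 bytes written into an 8-byte buffer: overflow

# Before the function returns, the canary is compared with its reference copy.
if bytes(frame[8:16]) != canary:
    print("stack smashing detected: abort before the corrupted data is used")
```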

To make the hacker’s work more difficult, the value of most stack canaries is changed regularly. A copy of the new value is stored in another memory space and both values, the real one and the reference one, are compared to ensure the integrity of the software. In their work, the researchers showed that the vulnerabilities of stack canaries are primarily linked to the place where this reference value is stored. “Sometimes it is stored in a memory space located right next to the stack canary,” Julian Kirsch explains. The hacker therefore does not need to access another part of the system and can change both values at the same time. “This is a defect we see in Linux, for example, which really surprised us because this operating system is widely used,” the TUM researcher explains.

How can such commonly used protection systems be so vulnerable on operating systems like Linux and Windows? First of all, Aurélien Francillon reminds us that stack canaries are not the only countermeasures that exist for operating systems. “In general, these are not the only countermeasures used, but stack canaries still represent significant hurdles that hackers must overcome to gain control of the system,” the EURECOM researcher explains. Their vulnerability therefore does not compromise the operating system’s security as a whole, but it does remove one of the barriers hackers have to break through.

The second, less technical reason for this permissiveness regarding stack canaries is related to developers’ choices. “They do not want to increase the security of these countermeasures, because it would decrease performance,” Julian Kirsch explains. For software publishers, security is a less competitive argument than the software’s performance. Greater security implies allocating more computing resources to tasks that do not directly respond to the software user’s requests. Still, customers rarely appreciate computer system intrusions. Considering organizations’ growing concerns about cybersecurity issues, we can hope that the software they choose will better integrate this aspect. Security could then become a serious argument in the software solution market.

bioDigital

BioDigital, a new technology to combat identity spoofing

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which Télécom SudParis belongs. The original version of this article was published in French on the Télécom SudParis website.

[divider style=”normal” top=”20″ bottom=”20″]

Following an 18-month collaboration agreement, Télécom SudParis (a member of the Télécom & Société Numérique Carnot Institute (TSN)) and IDEMIA, the global leader in augmented identity, have finalized the design for a contactless biometric reader, based on a patent filed by two Télécom SudParis researchers. The technology transfer to IDEMIA has just been completed.

 

Fingerprint spoof detection

The technology comprises a next-generation biometric fingerprint scanner called BioDigital. It is an effective tool for combating identity spoofing and also provides a solution to difficulties related to the very nature of biometric data through improved recognition of damaged fingerprint surfaces. “The quality of the reconstructed image of the internal fingerprint is what makes our technology truly original,” says Bernadette Dorizzi, Dean of Research at Télécom SudParis.

Télécom SudParis and IDEMIA have worked in close collaboration. The group provided an assessment algorithm and compiled a database for evaluating it, which made it possible to demonstrate that BioDigital is able to provide safer and more effective fingerprint matching, also detecting spoofed fingerprints, with a success rate of nearly 100%.

Subcutaneous fingerprint and sweat pore network recognition

This contactless technology recognizes not only the fingerprint, but also the subcutaneous print and the network of sweat pores. It is based on optical coherence tomography which produces 3D images using light “echoes”. This allows BioDigital to provide access to fingerprints without direct contact with the reader. Along with this innovation, the system also provides an exceptional image reconstruction quality. “By fusing phase and intensity images, we’ve succeeded in obtaining as natural an image as possible,” says Yaneck Gottesman, research professor at Télécom SudParis.

“For a certain number of crucial applications such as the protection of critical infrastructures, spoofing attacks are a real issue and it’s a race between hackers and technology developers like IDEMIA. Once this technology is put into production and integrated in our products, it has the potential to put us another step ahead,” adds Jean-Christophe Fondeur, Executive Vice-President for Research & Development at IDEMIA.

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in French industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]

Pierre Comon

Pierre Comon: from the brain to space, searching for a single solution

Pierre Comon’s research focuses on a subject that is as simple as it is complex: how to find a single solution to a problem. From the environment to health and telecommunications, this researcher in information science at GIPSA-Lab is taking on a wide range of issues. Winner of the 2018 IMT-Académie des Sciences Grand Prix, he juggles mathematical abstraction and practical, scientific reality in the field.

 

When asked to explain what a tensor is, Pierre Comon gives two answers. The first is academic, factual and rather dry, despite an attempt at popularization: “it is a mathematical object that is equivalent to a polynomial with several variables.” The second answer reveals a researcher conscious of the abstract nature of his work, passionate about explaining it and experienced at doing so. “If I want to determine the concentration of a molecule in a sample, or the exact position of a satellite in space, I need a single solution to my mathematical problem. I do not want several possible positions of my satellite or several concentration values, I only want one. Tensors allow me to achieve this.”

Tensors are particularly powerful when the number of parameters is not too high. For example, they cannot be used to find the unknown positions of 100 satellites with only 2 antennas. However, when the number of parameters to be determined and the number of data samples are balanced, they become a very useful tool. There are many applications for tensors, including telecommunications, the environment and healthcare.

Pierre Comon recently worked on tensor methods for medical imaging at the GIPSA-Lab* in Grenoble. For patients with epilepsy, one of the major problems is determining the source of the seizures in the brain. This not only makes it possible to treat the disease, but also to potentially prepare for surgery. “When patients have a disease that is too resistant, it is sometimes necessary to perform an ablation,” the researcher explains.

Today, these points are localized using invasive methods: probes are introduced into the patient’s skull to record brainwaves, a stage that is particularly difficult for patients. The goal is therefore to find a way to determine the same parameters using non-invasive techniques, such as electroencephalography and magnetoencephalography. Tensor tools are integrated into the algorithms used to process the brain signals recorded through these methods. “We have obtained promising results,” explains Pierre Comon. Although he admits that invasive methods currently remain more efficient, he also points out that they are older. Research on this topic is still young but has already provided reason to hope that treating certain brain diseases could become less burdensome for patients.

An entire world in one pixel

For environmental applications, on the other hand, results are much less prospective. Over the past decade, Pierre Comon has demonstrated the relevance of using tensors in planetary imaging. In satellite remote sensing, each pixel can cover anywhere from a few square meters to several square kilometers. The elements present in each pixel are therefore very diverse: forests, ice, bodies of water, limestone or granite formations, roads, farm fields, etc. Detecting these different elements can be difficult depending on the resolution. Yet, there is a clear benefit in the ability to automatically determine the number of elements within one pixel. Is it just a forest? Is there a lake or road that runs through this forest? What is the rock type?

The tensor approach answers these questions. It makes it possible to break down pixels by indicating the number of different components. Better still, it can do this “without using a dictionary, in other words, without knowing ahead of time what elements might be in the pixel,” the researcher explains. This is possible thanks to an intrinsic property of tensors, which Pierre Comon helped bring to light: under certain mathematical conditions, they can be decomposed in only one way. In practice, for satellite imaging, a minimum set of variables is required: the intensity received for each pixel, at each wavelength and each angle of incidence, must be known. The unique nature of tensor decomposition then makes it possible to retrace the exact proportion of the different elements in each image pixel.
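As a hedged illustration of that uniqueness property, the sketch below computes a CP (canonical polyadic) decomposition of a synthetic “pixels × wavelengths × angles” cube built from two pure materials, using a bare-bones alternating least squares loop in numpy. The data and the solver are assumptions for demonstration, not the algorithms actually applied to the Mars images.

```python
# Toy CP decomposition by alternating least squares (ALS) on synthetic data.
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along one mode (C-order flattening of the other modes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum("ir,jr->ijr", A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((s, rank)) for s in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic cube: 30 pixels x 16 wavelengths x 4 angles, mixture of 2 materials.
rng = np.random.default_rng(1)
abundances, spectra, angles = rng.random((30, 2)), rng.random((16, 2)), rng.random((4, 2))
cube = np.einsum("ir,jr,kr->ijk", abundances, spectra, angles)

A, B, C = cp_als(cube, rank=2)
approx = np.einsum("ir,jr,kr->ijk", A, B, C)
print("relative reconstruction error:", np.linalg.norm(cube - approx) / np.linalg.norm(cube))
```

Because the decomposition is essentially unique under mild conditions (up to scaling and permutation of the components), the first recovered factor can be read as the per-pixel proportions of the two materials.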

For planet Earth, this approach has limited benefits, since the various elements are already well known. However, it could be particularly helpful in monitoring how forests or water supplies develop. On the other hand, the tensor approach is especially useful for other planets in the solar system. “We have tested our algorithms on images of Mars,” says Pierre Comon. “They helped us to detect different types of ice.” For planets that are still very distant and not as well known, the advantage of this “dictionary free” approach is that it helps bring unknown geological features to light. Whereas the human mind tends to compare what it sees with something it is familiar with, the tensor approach offers a neutral description and can help reveal structures with unknown geochemical properties.

The common theme: a single solution

Throughout his career, Pierre Comon has sought to understand how a single solution can be found for mathematical problems. His first major research in this area began in 1989 and focused on blind source separation in telecommunications. How could the mixed signals from two transmitting antennas be separated without knowing where they were located? “Already at that point, it was a matter of finding a single solution,” the researcher recalls. This research led him to develop techniques for analyzing signals and decomposing them into independent parts to determine the source of each one.

The results he proposed in this context during the 1990s resonated widely in both the academic world and industry. In 1988, he joined Thales and developed several patents used to analyze satellite signals. His pioneering article on independent component analysis has been cited by fellow researchers thousands of times and continues to be used by scientists. According to Pierre Comon, this work formed the foundation for his research topic. “My results at the time allowed us to understand the conditions for the uniqueness of a solution but did not always provide the solution. That required something else.” That “something else” is in part the tensors, which he has shown to be valuable in finding single solutions.

His projects now focus on increasing the number of practical applications of his research. Beyond the environment, telecommunications and brain imaging, his work also involves chemistry and public health. “One of the ideas I am currently very committed to is that of developing an affordable device for quickly determining the levels of toxic molecules in urine,” he explains. This type of device would quickly reveal polycyclic aromatic hydrocarbon contaminations—a category of harmful compounds found in paints. Here again, Pierre Comon must determine certain parameters in order to identify the concentration of pollutants.

*The GIPSA-Lab is a joint research unit of CNRS, Université Grenoble Alpes and Grenoble INP.

[author title=”Pierre Comon: the mathematics of practical problems” image=”https://imtech-test.imt.fr/wp-content/uploads/2018/11/pierre-comon.jpg”]Pierre Comon is known in the international scientific community for his major contributions to signal processing. Very early on, he became interested in exploring higher-order statistics for separating sources, establishing the theoretical foundations of independent component analysis, which has since become one of the standard tools for the statistical processing of data. His recent significant contributions include highly original results on tensor factorization.

The applications of Pierre Comon’s contributions are very diverse and include telecommunications, sensor networks, health and environment. All these areas demonstrate the scope and impact of his work. His long industrial history, strong desire for his scientific approach to be grounded in practical problems and his great care in developing algorithms for implementing the obtained results all further demonstrate how strongly Pierre Comon’s qualities resonate with the criteria for the 2018 IMT-Académie des Sciences Grand Prix.[/author]

 

Yelda and OSO-AI

Yelda and OSO-AI: new start-ups receive honor loans

On December 6, the Committee for the Digital Fund of the Graduate Schools and Universities Initiative chose two start-ups to receive honor loans: Yelda and OSO-AI. Together, Yelda, a start-up from the incubator IMT Starter, and OSO-AI, from the incubator at IMT Atlantique, will receive three honor loans, for a total of €80,000.

These interest-free loans aimed at boosting the development of promising young companies are co-financed by the Fondation Mines-Télécom, the Caisse des Dépôts and Revital’Emploi. This initiative has supported over 84 startups since 2012.

 

[box type=”shadow” align=”” class=”” width=””]

Yelda is developing the first voice assistant for companies. The start-up’s team—composed of experts in bots, automatic natural language processing, voice management and machine learning—is convinced that chat and voice interactions will soon replace traditional interfaces. This will revolutionize the way users interact with companies, for both customers and employees. Yelda, a start-up from the incubator IMT Starter, received an honor loan of €40,000. Find out more [/box]

[box type=”shadow” align=”” class=”” width=””]

OSO-AI is already improving quality of life for the hearing impaired. The start-up will soon become the partner of reference in Artificial Intelligence for hearing aids and will invent Augmented Auditory Reality. The start-up, incubated at IMT Atlantique, received an honor loan of €30,000 and another of €10,000. Find out more [/box]

 

migrants

How has digital technology changed migrants’ lives?

Over the past few decades, migrants have become increasingly connected, as have societies in both their home and host countries. New technologies allow them to maintain ties with their home countries while helping them integrate into their new ones. These technologies also play an important role in the migration process itself. Dana Diminescu, a sociologist at Télécom ParisTech, is exploring this link between migration and digital technology and challenging the traditional image of the uprooted migrant. She explains how new uses have changed migratory processes and migrants’ lives.

 

When did the link between migration and digital technology first appear?

Dana Diminescu: The link really became clear during the migration crisis of 2015. Media coverage highlighted the migrants’ use of smartphones and the public discovered the role telephones play in the migration process. A sort of “technophoria” appeared for refugees. This led to a great number of hackathons being organized to make applications to help immigrants, with varying degrees of success. In reality, the migrants were already connected well before the media hype of 2015. In 2003, I’d already written an epistemological manifesto on the figure of the connected migrant, based on observations dating from the late 1990s.

In 1990, smartphones didn’t exist yet; how were migrants ‘connected’ at that time?

DD: My earliest observation was the use of a mobile phone by a collective of migrants living in a squat. For them, the telephone was a real revolution and an invaluable resource. They used it to develop a network and find contacts. This helped them find jobs and housing; in short, it helped them integrate into society. Two years later, those who had been living in the squat had got off the street, and the mobile phone played a large role in making this possible.

What has replaced this mobile phone today?

DD: Social media play a very strong role in supporting integration for all migrants, regardless of their home country or cultural capital. One of the first things they do when they get to their country of destination is to use Facebook to find contacts. WhatsApp is also widely used to develop networks. And YouTube helps them learn languages and professional skills.

Dana Diminescu has been studying the link between migrants and new technologies since the late 1990s.

Are digital tools only useful in terms of helping migrants integrate?

DD: No, that’s not all – they also have an immediate, performative effect on the migratory process itself. In other words, an individual’s action on social media can have an almost instantaneous effect on migration movements. A message posted by a migrant showing that he was able to successfully cross the border at a certain place on the Balkan route creates movement. Other migrants will adjust their journey that same day. That’s why we now talk about migration traceability rather than migration movement. Each migrant uses and leaves behind a record of his or her journey. These are the records used in sociology to understand migrants’ choices and actions.

Does the importance of digital technology in migration activity challenge the traditional image of the migrant?

DD: For a long time, humanities research focused on the figure of the uprooted migrant. In this perception of migrants, they are at once absent from their home country, and absent from their destination country since they find it difficult to fit in completely. New technologies have had an impact on this view, because they have made these forms of presence more prominent. Today, migrants can use tools like Skype to see their family and loved ones from afar and instantly. In interviews, migrants tell me, “I don’t have anything to tell them when I go back to see them since I’ve already told them everything on Skype.” As for presence in their destination countries, digital tools play an increasingly important role in access, whether for biometric passports or cards to access work, transport etc. For migrants, the use of these different tools makes their way of life very different to the way it would have been a few years ago, when such access had not yet been digitized. It is now easier for them to exercise their rights.

Does this have an effect on the notion of borders?

DD: Geographical borders don’t have the same meaning they used to. As one migrant explained in his account one day, “They looked for me on the screen, they didn’t find me, I got through.” Borders are now based on our personal data: they’re connected to our date of birth, digital identities, locations. These are the borders migrants have to get around today. That’s why their telephones are confiscated by smugglers so that they can’t be traced, or why they don’t bring their smartphones with them, so that border police won’t be able to force them to open Facebook.

So digital technology can represent a hurdle for migrants?

DD: Since the migrants are connected, they can, of course, be traced. This limiting aspect of digital technology also exists in the uses of new technology in destination countries. Technology has increased the burden of the informal contract between those who leave and those who stay behind. Families expect migrants to be very present. They expect individuals to be available for them at the times they’re used to spending in their company. In interviews, migrants say that it’s a bit like a second job. They don’t want to appear as if they have broken away, they have to check in. At times, this leads to migrants’ lying, saying that they’ve lost their mobile phone or that they don’t have internet access, to free themselves from the burden of this informal contract. In this case, digital technology is seen as a constraint, and at times it can even be highly detrimental to social well-being.

In what sort of situations is this the case?

DD: In refugee camps, we’ve observed practices that cut migrants off from social ties. In Jordan, for example, it’s impossible to send children to get food for their parents. Individuals must identify themselves with a biometric eye scanner and that’s the only way for them to receive their rations. If they can’t send their children, they can’t send their friends or neighbors either. There is a sort of destruction of the social fabric and support networks. Normal community relationships become impossible for these refugees. In a way, these technologies have given rise to new forms of discrimination.

Does this mean we must remain cautious?

DD: We must be wary of digital solutionism. We conducted a research project with Simplon on websites that provide assistance to migrants. A hundred sites were listed. We found that most of them were either unusable or unfinished — and those that did work were rarely used. Migrants still prefer using social media over specific digital tools. For example, they would still rather learn a language with Google Translate than use a language-learning application. They realize that they need certain things to facilitate their learning and integration process. It’s just that the tools that have been developed for these purposes aren’t effective. So we have to be cautious and acknowledge that there are limitations to digital technology. What could we delegate to a machine in the realm of hospitality? How many humans are there behind training programs and personal support organizations?