
Emergency logistics for field hospitals

European field hospitals, or temporary medical care stations, are standing by and ready to be deployed throughout the world in the event of a major disaster. The HOPICAMP project, of which IMT Mines Alès is one of the partners, works to improve the logistics of these temporary medical centers and develop telemedicine tools and training for health care workers. Their objective is to ensure the emergency medical response is as efficient as possible. The logistical tools developed in the context of this project were successfully tested in Algeria on 14-18 April during the European exercise EU AL SEISMEEX.

 


Earthquakes, fires, epidemics… Whether the disasters are of natural or human causes, European member states are ready to send resources to Africa, Asia or Oceania to help the affected populations. Field hospitals, temporary and mobile stations where the wounded can receive care, represent a key element in responding to emergencies.

“After careful analysis, we realized that the field hospitals could be improved, particularly in terms of research and development,” explains Gilles Dusserre, a researcher at IMT Mines Alès who works in the area of risk science and emergency logistics. This multidisciplinary field, at the crossroads of information technology, communications, health and computer science, aims to improve our understanding of the consequences of natural disasters on humans and the environment. “In the context of the HOPICAMP project, funded by the Single Interministerial Fund (FUI) and conducted in partnership with the University of Nîmes, the SDIS30 and the companies CRISE, BEWEIS, H4D and UTILIS, we are working to improve field hospitals, particularly in terms of logistics,” the researcher explains.

Traceability sensors, virtual reality and telemedicine to the rescue of field hospitals

When a field hospital is not deployed, all of its tents and medical equipment are stored in crates, which makes it difficult to ensure the traceability of critical equipment. An electrosurgical unit, for example, must never be separated from its specific power cable; without it, surgical operations cannot be performed correctly in the field. “The logistics operational staff are all working on improving how these items are addressed, identified and updated, whether the hospital is deployed or on standby,” Gilles Dusserre explains. The consortium worked with BEWEIS to develop an IT tool for identification and updates, as well as for pairing RFID tags, the sensors that help ensure the traceability of the equipment.
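As an illustration, the pairing check described above might look something like the following sketch. The data model and all identifiers are hypothetical (the actual BEWEIS tool is not public); the point is only to show how RFID tag readings per crate can reveal a critical device separated from its paired accessories.

```python
from dataclasses import dataclass, field

@dataclass
class Crate:
    """A storage crate whose contents are read via RFID tags (hypothetical model)."""
    crate_id: str
    tag_ids: set = field(default_factory=set)

# Hypothetical pairing table: each critical device is bound to the tags that
# must travel with it (e.g. an electrosurgical unit and its power cable).
PAIRINGS = {
    "electrosurgical-unit-01": {"esu-01-body", "esu-01-cable"},
}

def check_pairings(crates):
    """Report any critical device whose paired tags are split across crates or missing."""
    location = {}  # tag id -> crate id where it was last read
    for crate in crates:
        for tag in crate.tag_ids:
            location[tag] = crate.crate_id
    issues = []
    for device, tags in PAIRINGS.items():
        crates_seen = {location.get(tag) for tag in tags}
        if None in crates_seen:
            issues.append(f"{device}: missing tag(s)")
        elif len(crates_seen) > 1:
            issues.append(f"{device}: items split across crates {sorted(crates_seen)}")
    return issues

# The unit body and its cable were packed in different crates:
crates = [Crate("A", {"esu-01-body"}), Crate("B", {"esu-01-cable"})]
print(check_pairings(crates))
```

Run before deployment, a check like this turns a silent packing mistake into an actionable warning while the hospital is still on standby.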

In addition, once the hospital is deployed, pharmacists, doctors, engineers and logisticians must work in perfect coordination in emergency situations. But how can they be trained in these specific conditions when their workplace has not yet been deployed? “At IMT Mines Alès, we decided to design a serious game and use virtual reality to help train these individuals from very different professions for emergency medicine,” explains Gilles Dusserre. Thanks to virtual reality, the staff can learn to adapt to this unique workplace, in which operating theaters and treatment rooms are right next to living quarters and rest areas in tents spanning several hundred square meters. The serious game, which is being developed, is complementary to the virtual reality technology. It allows each participant to identify the different processes involved in all the professions to ensure optimal coordination during a crisis situation.

Finally, how can the follow-up of patients be ensured when the field hospitals are only present in the affected countries for a limited time period? “During the Ebola epidemic, only a few laboratories in the world were able to identify the disease and offer certain treatments. Telemedicine is therefore essential here,” Gilles Dusserre explains. In addition to proposing specific treatments to certain laboratories, telemedicine also allows a doctor to remotely follow-up with patients, even when the doctor has left the affected area.  “Thanks to the company H4D, we were able to develop a kind of autonomous portable booth that allows us to observe around fifteen laboratory values using sensors and cameras.” These devices remain at the location, providing the local population with access to dermatological, ophthalmological and cardiological examinations through local clinics.

Field-tested solutions

“We work with the fire brigade association of the Gard region, the French Army and Doctors Without Borders. We believe that all of the work we have done on feedback from the field, logistics, telemedicine and training has been greatly appreciated,” says Gilles Dusserre.

In addition to being accepted by end users, certain tools have been successfully deployed during simulations. “Our traceability solutions for equipment developed in the framework of the HOPICAMP project were tested during the EU AL SEISMEEX Europe-Algeria earthquake deployment exercises in the resuscitation room,” the researcher explains. The exercise, which took place from April 14-18 in the context of a European project funded by DG ECHO, the Directorate-General for European Civil Protection and Humanitarian Aid Operations, simulated the provision of care for victims of a fictional earthquake in Bouira, Algeria. Around 1,000 people were deployed by the seven participating countries: Algeria, Tunisia, Italy, Portugal, Spain, Poland and France. The field hospitals from the participating countries were brought together to cooperate and ensure the interoperability of the implemented systems, which can only be tested during actual deployment.

Emergency logistics in the service of field hospitals

The EU-AL SEISMEEX team gathered in front of the facilities for the exercise.

 

Gilles Dusserre is truly proud to have led a project that contributed to the success of this European simulation exercise. “As a researcher, it is nice to be able to imagine and design a project, see an initial prototype and follow it as it is tested and then deployed in an exercise in a foreign country. I am very proud to see what we designed becoming a reality.”

Diatabase: France’s first diabetes database

The M4P consortium has received the go-ahead from Bpifrance to implement its project to build a clinical diabetes database called Diatabase. The consortium headed by Altran also includes French stakeholders in diabetes care, the companies OpenHealth and Ant’inno, as well as IMT and CEA List. The consortium aims to improve care, study and research for this disease, which affects 3.7 million people in France. The M4P project was approved by the Directorate General for Enterprise of the French Ministry for Economy and Finance as part of the National Fund for the Digital Society of the Investissements d’Avenir (Investments for the Future) program.

 

In France, 3.7 million people are being treated for type 1 or type 2 diabetes, which represents 5% of the population. The prevalence of these diseases continues to increase, and their complications are a major concern for public health and economic sustainability. Modern health systems produce huge quantities of health data about the disease, both through community practices and hospitals, and additional data is generated outside these systems, by the patients themselves or through connected objects. The potential for using this massive, multi-source data is far-reaching, especially for advancing knowledge of diabetes, promoting the health and well-being of diabetes sufferers and improving care (identifying risk factors, supporting diagnosis, monitoring the efficacy of treatment, etc.).

Supported by a consortium of multidisciplinary experts, and backed by the Directorate General for Enterprise of the French Ministry for Economy and Finance, the M4P project aims to build and make available commercially a multi-source diabetes database, “Diatabase,” comprising data from hospitals and community practices, research centers, connected objects and cross-referenced with the medical-economic databases of the SNDS (National Health Data System).

The project seeks to “improve the lives and care of diabetes sufferers through improved knowledge and sharing of information between various hospital healthcare providers as well as between expert centers and community practices,” says Dr Charpentier, President of CERITD (Center for Study and Research for Improvement of the Treatment of Diabetes), one of the initiators of the project. “With M4P and Diatabase, we aim to promote consistency in providing assistance for an individual within a context of monitoring by interdisciplinary teams and to increase professionals’ knowledge to provide better care,” adds Professor Brigitte Delemer, Head of the Diabetology department at the University Hospital of Reims and Vice-President of the CARéDIAB network, which is also involved in the M4P project.

“The analysis of massive volumes of ‘real-life’ data involves overcoming technical hurdles, in particular in terms of interoperability, and will further the understanding of the disease while providing health authorities and manufacturers with tools to monitor the drugs and medical devices used,” says Dr Jean-Yves Robin, Managing Director of OpenHealth, a company specializing in health data analysis which is part of the M4P project.

The M4P project is supported by an expert consortium bringing together associations of healthcare professionals active in diabetes, such as CERITD, the CARéDIAB network and the Nutritoring company, alongside private and public organizations specializing in digital technologies: Altran; OpenHealth and ANT’inno for data analysis, working with CEA List on the semantic analysis and use of unstructured data; and finally Institut Mines-Télécom. IMT is providing its Teralab platform, a secure accelerator for research projects in AI and data, its techniques for preventing data leaks, misuse and falsification based on data watermarking technology, and its natural language processing methods for revealing new correlations and facilitating prediction and prevention.
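IMT's actual watermarking scheme is not described in the article, but the general idea of tracing data leaks through watermarking can be shown with a deliberately simple toy sketch: an identifying bit pattern is hidden in the least-significant part of numeric records, at the cost of a tiny, controlled distortion. All values, the key and the encoding below are invented for illustration only.

```python
def embed_watermark(values, watermark_bits, key=7):
    """Toy watermark: force the parity of each value to carry one watermark bit.
    Real schemes are far more robust (resistant to sorting, subsetting, rounding);
    this only illustrates the principle of hiding a traceable mark in the data."""
    marked = []
    for i, v in enumerate(values):
        bit = watermark_bits[(i * key) % len(watermark_bits)]
        if v % 2 != bit:
            v += 1  # nudge the value by 1 so its parity encodes the bit
        marked.append(v)
    return marked

def extract_watermark(values, n_bits, key=7):
    """Read the hidden bits back from the parities of a (suspected) leaked copy."""
    bits = [None] * n_bits
    for i, v in enumerate(values):
        bits[(i * key) % n_bits] = v % 2
    return bits

glucose = [112, 98, 145, 130, 101, 99, 160, 121]  # fictitious lab values
wm = [1, 0, 1, 1]                                  # mark identifying one recipient
marked = embed_watermark(glucose, wm)
print(extract_watermark(marked, len(wm)))          # recovers [1, 0, 1, 1]
```

The design trade-off is visible even in this toy: each value moves by at most 1 unit, small enough to preserve clinical usefulness, yet the recovered mark identifies which copy of the database was leaked.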

“Thanks to the complementary nature of the expertise brought together through this project, its business-focused approach and consideration of professional practices, M4P is the first example in France of structuring health data and making it available commercially in the interest of the public good,” says Fabrice Mariaud, Director of Programs, Research and Expertise Centers in France for Altran.

The consortium has given itself a period of three years to build Diatabase and to make it available for use, to serve healthcare professionals and patients.

breast cancer

Superpixels for enhanced detection of breast cancer

Deep learning methods are increasingly used to aid medical diagnosis. At IMT Atlantique, Pierre-Henri Conze is taking part in this drive to use artificial intelligence algorithms for healthcare by focusing on breast cancer. His work combines superpixels defined on mammograms and deep neural networks to obtain better detection rates for tumor areas, thereby limiting false positives.

 

In France, one woman in eight will develop breast cancer in her lifetime. Every year 50,000 new cases are recorded in the country, a figure which has been on the rise for several years. At the same time, the survival rate has also continued to rise: the five-year survival rate after a breast cancer diagnosis increased from 80% in 1993 to 87% in 2010. These results can be correlated with a rise in awareness campaigns and screening for breast tumors. Nevertheless, large-scale screening programs still have room for improvement. One of their major limitations is that they produce far too many false positives, meaning patients must come back for additional testing. This sometimes leads to needless treatment with serious consequences: mastectomy, radiotherapy, chemotherapy, etc. “Out of 1,000 participants in a screening, 100 are called back, while on average only 5 are actually affected by breast cancer,” explains Pierre-Henri Conze, a researcher in image processing. The work he carries out at IMT Atlantique in collaboration with Mines ParisTech strives to reduce this number of false positives by using new analysis algorithms for breast X-rays.

The principle is becoming better-known: artificial intelligence tools are used to automatically identify tumors. Computer-aided detection helps radiologists and doctors by identifying masses, one of the main clinical signs of breast cancer. This improves diagnosis and saves time since multiple readings do not then have to be carried out systematically. But it all comes down to the details: how exactly can the software tools be made effective enough to help doctors? Pierre-Henri Conze sums up the issue: “For each pixel of a mammography, we have to be able to tell the doctor if it belongs to a healthy area or a pathological area, and with what degree of certainty.”

But there is a problem: algorithmic processing of each pixel is time-consuming. Pixels are also subject to interference during capture: this is “noise,” like when a picture is taken at night and certain pixels are whited out. This makes it difficult to determine whether an altered pixel is located in a pathological zone or not. The researcher therefore relies on “superpixels.” These are homogenous areas of the image obtained by grouping together neighboring pixels. “By using superpixels, we limit errors related to the noise in the image, while keeping the areas small enough to limit any possible overlapping between healthy and tumor areas,” explains the researcher.
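The noise-averaging effect of superpixels can be illustrated with a deliberately simplified sketch. Real superpixel algorithms (such as SLIC) adapt region shapes to image content; the regular blocks below are only a crude stand-in, but the statistical benefit, averaging out isolated noisy pixels, is the same.

```python
def block_superpixels(image, size=2):
    """Crude stand-in for superpixels: tile the image into size x size blocks
    and describe each block by its mean intensity."""
    h, w = len(image), len(image[0])
    descriptors = {}
    for by in range(0, h, size):
        for bx in range(0, w, size):
            pix = [image[y][x]
                   for y in range(by, min(by + size, h))
                   for x in range(bx, min(bx + size, w))]
            descriptors[(by // size, bx // size)] = sum(pix) / len(pix)
    return descriptors

# A flat 4x4 region of intensity 100 with one noisy pixel at 240:
image = [[100] * 4 for _ in range(4)]
image[1][1] = 240
desc = block_superpixels(image)
# At pixel level the noisy pixel stands out by 140; inside its block the
# mean shifts by only (240 - 100) / 4 = 35, so one capture artifact no
# longer dominates the decision for that area.
print(desc[(0, 0)])  # 135.0
```

This also shows the trade-off the researcher mentions: larger regions suppress more noise, but must stay small enough not to straddle the boundary between healthy and tumor tissue.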

In order to successfully classify the superpixels, the scientists rely on descriptors: information associated with each superpixel to describe it. “The easiest descriptor to imagine is light intensity,” says Pierre-Henri Conze. To generate this information, he uses a particular type of deep neural network, called a “convolutional” neural network. Its advantage over other neural networks is that, by training on public mammography databases, it determines by itself which descriptors are the most relevant for classifying superpixels. Combining superpixels with convolutional neural networks produces especially good results. “For forms as irregular as tumor masses, this combination identifies the boundaries of tumors more effectively than traditional techniques based on machine learning,” says the researcher.

This research is in line with work by the SePEMeD joint laboratory between IMT Atlantique, the LaTIM laboratory, and the Medecom company, whose focus areas include improving medical data mining. It builds directly on research carried out on recognizing tumors in the liver. “With breast tumors, it was a bit more complicated though, because there are two X-rays per breast, taken at different angles and the body is distorted in each view,” points out Pierre-Henri Conze. One of the challenges was to correlate the two images while accounting for distortions related to the exam. Now the researcher plans to continue his research by adding a new level of complexity: variation over time. His goal is to be able to identify the appearance of masses by comparing different exams performed on the same patient several months apart. The challenge is still the same: to detect malignant tumors as early as possible in order to further improve survival rates for breast cancer patients.


 


The End of Web Neutrality: The End of the Internet?

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

At the end of 2017, a decision issued by the Federal Communications Commission (FCC), the American agency responsible for regulating the US telecom sector (the equivalent of the French ARCEP and the European BEREC), changed the status of American internet service providers.

However, this change cannot take place in Europe due to the Regulation on open internet access adopted in 2015. Still, this provides a good opportunity for reflecting on the neutrality of internet services.

An internet service provider (ISP in the USA, FAI in France) provides services to subscribers. It is seen as a supplier of neutral services that should not influence how subscribers use the network. This contrasts with television channels, which have the right to manage their broadcasts as they wish and can therefore offer differentiated broadcasting services.

A recurring issue in the USA

In the USA, there have long been calls for deregulating the internet service provider sector. In the early 2000s, Voice over IP (VoIP) was introduced. Telephone communications were expensive in the USA at the time, so this system, which made it possible to make free phone calls, was a great success. The same phenomenon can be seen today with the service provided by Netflix, which can freely provide its subscribers with streaming video content.

Since 2013, several attempts have been made to put an end to the legal notion of “common carrier” as applied to American internet access providers.

This concept from American and English law requires the entities subject to this type of regulation to transport persons and goods without discrimination. Internet service providers are therefore required to transport network packets without any differentiation based on the type or origin of the service.

This change does not have unanimous support, including within the FCC. It will allow American ISPs to manage traffic in a way that enables them to differentiate the data transport services they offer to customers.

There is therefore an opposition between the service providers (the pipes) and the content providers (the services, the most emblematic being the Big 5 tech companies, known in France as GAFAM: Google, Apple, Facebook, Amazon and Microsoft). To summarize, the service providers complain that the content providers take advantage of the pipes, and even clog them, without contributing to the development of the infrastructure. The content providers respond that the pipes are funded by subscriptions, while the content they provide free of charge is what makes the network attractive.

It is also important to note that some Big 5 Tech companies are their own ISP. For example, Google is the ISP for Kansas City, and is also probably the largest owner of optical fiber in the world.

A few figures

Over the next ten years, the French operators indicate that they will need to invest €72 billion in developing their networks to support very high-speed connections and 5G (figure provided by Michel Combot, FFTelecoms).

In 2017, there were 28.2 million fixed network subscribers and 74.2 million SIM cards in France.

I estimate the average monthly costs for the fixed network subscriptions (excluding modem) at around €30, those of mobile subscriptions at around €10 (excluding equipment, with an average cost including equipment of around €21).

If the investment is absorbed by the fixed subscriptions alone, this comes to around €21 per month, or 2/3 the cost of the subscription. If it is absorbed by all of the subscriptions, this amounts to a little less than €6 per month, which represents a small portion of the fixed subscription, but a significant portion of the mobile subscription.

Overall, the investment represents 38% of the revenue generated during this period, based on the assumptions above.
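The figures above can be checked with a quick back-of-the-envelope computation, using only the inputs stated in the preceding paragraphs (€72 billion over ten years, 28.2 million fixed subscribers at about €30 per month, 74.2 million SIM cards at about €10 per month):

```python
# All inputs come from the article's own figures.
investment = 72e9            # euros, over ten years
months = 10 * 12
fixed_subs = 28.2e6
mobile_subs = 74.2e6

# Monthly cost per subscription if the investment is absorbed by...
per_fixed = investment / (fixed_subs * months)                  # ...fixed lines only
per_all = investment / ((fixed_subs + mobile_subs) * months)    # ...all subscriptions
print(round(per_fixed, 2), round(per_all, 2))  # ≈ 21.28 and ≈ 5.86 euros/month

# Share of ten-year revenue consumed by the investment:
revenue = (fixed_subs * 30 + mobile_subs * 10) * months
print(round(investment / revenue, 2))  # ≈ 0.38, i.e. the 38% quoted
```

The computation confirms the article's €21 and just-under-€6 monthly figures, and the 38% revenue share.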

In conclusion, the investment appears sustainable, even in a European market that is significantly more competitive than the U.S. market, where internet subscriptions cost roughly three times more than in Europe. It therefore appears possible for ISPs to maintain their level of investment.

However, it is also very clear that the growth in turnover of the GAFAM companies bears no comparison to that of telecom operators and ISPs. The issue of services is therefore very interesting, yet it cannot be limited to traffic management issues.

Traffic management, a necessary evil

The practice of managing traffic has long existed, for example to support offers for virtual private networks (VPN MPLS) for businesses.

These same mechanisms can be used to guide the responses of certain services (such as anycast) in order to fight denial-of-service attacks. They are also used to manage routing. They also enable open connection sharing, letting guests use your modem without hindering your own use. In short, they certainly serve a purpose.

An analogy

We can compare what is happening on the internet to road networks. The ISPs manage the traffic lanes and the services manage the destinations. Since the road network is shared, everyone can access it without discrimination. However, there are rules of use in the form of a driver’s manual, for the internet this is defined by the IP protocol. There are also temporary changes (traffic lights, detours, stop signs) that affect the flow of traffic. These are the mechanisms that are used to manage traffic.

Traffic management is a legitimate activity, and operators make extensive use of it. They see it as a significant cost. In their opinion, regulation by an authority is therefore unnecessary, because the economics of managing networks necessarily result in neutrality with regard to content. They therefore see no interest, from an economic perspective, in modifying the traffic.

This argument is hardly acceptable. We have already seen examples of these kinds of practices, and many of the tools have already been deployed in the network. Traffic management will continue to exist, but it should not be further developed.

Regulating services

Using this same analogy, internet services can be compared to cities. Their purpose is to attract visitors. They reap economic benefits, in the form of the visitors’ spending, without contributing to the development of the national road network that enables visitors to reach them. This system works because the state collects a large share of the tax and is the guarantor of the public good. It is therefore its duty to allow access to all the cities, without discrimination. The state also ensures that the same laws apply in the different cities, something I believe is missing in the internet world.

Internet services: new common carriers

Internet services have become common goods, with the internet’s role as a universal platform making them just as indispensable as the “pipes” used to access them.

It would therefore be wise to study the regulation of services to complement network regulations. Internet services suffer from very significant mass effects, in which the winner takes the majority of the market and almost all the profits.

The power of services

This power is built through the analysis of behavioral data collected during our interactions with these services. It is further reinforced by algorithmic biases, which amplify our behavioral biases. We end up receiving from the Net only what we might like. Or worse, what the Net thinks of us.

This again brings us to the problem of data. Yes, statistical trends do enable us to predict certain future events. This is the basis for insurance. For many people, this makes sense. But for the internet world, this involves building communities that gradually become isolated. This makes sense commercially for the GAFAM because social approval from one’s community increases impulsive buying phenomena. These purchases are made, for example, when Amazon sends you a message related to products you looked at a few days before, or when Google targets ads related to your email.

In addition to the need for neutral pipes, it would therefore be worthwhile to reflect on the neutrality of services and algorithms. With the arrival of the General Data Protection Regulation, this should help strengthen our trust in the operation of all the networks and services that we have become so dependent upon. This is all the more important since internet services have become an increasing source of income for a significant percentage of the French population.

Hervé Debar, Head of the Networks and Telecommunication Services Department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation.


Cybersecurity: new times, new challenges

Editorial.

Who am I? A white man, almost 30. I wear hoodies and hack websites belonging to prestigious organizations like the CIA from my parents’ basement. Above all, I am thick-skinned. Have you guessed? I am, of course, a stereotypical hacker!

Movies and TV series continue to propagate this false and dated image. But due to changes in the internet, practices and infrastructures supported by the network, the threats we are facing are no longer the same as those in the late 20th century. The means used to attack organizations continue to grow, progressively revealing the absurdity of the stereotypical isolated attacker motivated by dark intentions rather than profit.

This series of articles aims to highlight a few iconic examples of the new cybersecurity challenges, and at the top of the list is: protecting the Internet of Things. Sensors are becoming increasingly prevalent in home automation, sports and fashion, and with them come new threats for potential attacks. Jean-Max Dutertre, a researcher at Mines Saint-Étienne, describes the risks these connected objects present and the solutions being implemented. In a second article, Jean-Luc Danger, researcher at Télécom ParisTech, expands on the list of solutions to these new threats.


“New challenges” does not only refer to new sectors or new systems in need of protection. With the growth of digital solutions in traditional fields, cybersecurity must also be developed in areas where it has long been considered of secondary importance. This is the work of Yvon Kermarrec at IMT Atlantique. As a member of the Research Chair for the Cyberdefense of Naval Systems, he explains why ships, and the entire marine sector, must tackle this issue head on.

The same is true for telephony, a sector which has long been affected by crime, but has benefited from relative indifference in terms of large-scale fraud. Impersonation to extort money, call forwarding schemes and telemarketing abuses are widespread. Aurélien Francillon, a researcher at Eurecom, is seeking first of all to better understand the organization of the participants and the structural causes of fraud. His findings are proving useful in the search for defense strategies.

Thanks to all the research efforts in the area of cybersecurity, the power relationship between attackers and defenders has become more balanced than it was a few years or decades ago. Organizations are increasingly well prepared and able to respond to cyber-attacks. In conclusion, Frédéric Cuppens (IMT Atlantique) and Hervé Debar (Télécom SudParis), both members of the Cybersecurity and Critical Infrastructures Chair at IMT, take a look at this topic. They review the latest technical solutions and strategic approaches that offer protection from even the most insidious threats.

[divider style=”dotted” top=”20″ bottom=”20″]

Since the topic of cybersecurity is so vast and complex, this series of articles—like all of our I’MTech articles—does not attempt to be exhaustive.  To further explore this topic, we recommend this list of articles from our archives:


 


Cyberdefense seeks to regain control

Between attackers and defenders, who is in the lead? In cybersecurity, the attackers have long been viewed as the default winners. Yet infrastructures are becoming better and better at protecting themselves. Although much remains to be done, things are not as imbalanced as they might seem, and research is providing valid cyberdefense solutions to help ensure the security and resilience of computer systems.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

At the beginning of a chess game, the white pieces are always in control. This is a cruel reality for the black pieces, who are often forced onto the defensive during the first few minutes of the game. The same is true in the world of cybersecurity: the attacker, like the player with the white chess pieces, strikes the first blow. “He makes his choices, and the defender must follow his strategy, which puts him in a situation of inferiority by default,” observes Frédéric Cuppens, a cybersecurity researcher at IMT Atlantique. This reality defines the strategies adopted by companies and software publishers, with one difference: unlike the black chess pieces, the cyber-defenders cannot counter-attack. It is illegal for an individual or an organization to respond to a cyber-attack by attacking the attacker. In this context, the defense plan can be broken down into three parts: protecting oneself to limit the attacker’s initiative, deploying defenses to prevent him from reaching his goal and, above all, ensuring the resilience of the system if the attacker succeeds in his aim.

This last possibility must not be overlooked. “In the past, researchers quickly realized that they would not succeed in preventing every attack,” recalls Hervé Debar, a researcher in cybersecurity at Télécom SudParis. From a technical point of view, “it is impossible to predict all the potential paths of attack,” he explains. And from an economic and organizational perspective, blocking all the doors, whether through preventive measures or in the event of an attack, makes the systems inoperable. It is therefore often more advantageous to accept an attack, cope with it and end it, than to attempt to prevent it at all costs. Hervé Debar and Frédéric Cuppens, members of the Cybersecurity and Critical Infrastructures Chair at IMT (see box at the end of the article), are very familiar with this trade-off. In nuclear power plants, for example, shutting everything down to prevent a threat is unthinkable.

Reducing the initiative

Despite these obstacles facing cyber-defense, the field is not lagging behind. Institutional and private organizations’ technical and financial means are generally greater than those of cybercriminals, with the exception of computer attacks on governments. Once a new flaw has been discovered, the information spreads quickly. “Software publishers react quickly to block these entryways; things travel very fast,” says Frédéric Cuppens. The National Vulnerability Database (NVD) is an American database that lists all known vulnerabilities and serves as a reference for cyber-defense experts. Beginning in 1995 with a few hundred flaws, it now records thousands of new entries each year, with 15,000 in 2017. This shows how important sharing information is for enabling the fastest possible response, and how involved the communities of experts are in this collective strategy.

“But it’s not enough,” warns Frédéric Cuppens. “These databases are crucial, but they always come after the event,” he explains. To reduce the attacker’s initiative, an attack must be detected as soon as it is launched. The best way to achieve this is to adopt a behavioral approach: data is collected to analyze how the systems function normally. Once this learning step is completed, the system’s “natural” behavior is known, and any deviation from this baseline is treated as a potential danger sign. “In the case of attacks on websites, for example, we can detect the injection of unusual commands that betrays the attacker’s actions,” explains Hervé Debar.
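The learning-then-detection loop Hervé Debar describes can be sketched in a few lines. This is a deliberately naive illustration (real detectors model request rates, payloads and many other features), with invented example commands:

```python
from collections import Counter

class BehavioralDetector:
    """Toy behavioral intrusion detector: it learns which commands a
    system normally handles, then flags anything outside that baseline."""

    def __init__(self):
        self.baseline = Counter()

    def learn(self, commands):
        # Learning phase: record the system's "natural" behavior.
        self.baseline.update(commands)

    def is_suspicious(self, command):
        # Any command never seen during learning deviates from the baseline.
        return command not in self.baseline

detector = BehavioralDetector()
detector.learn(["GET /index", "GET /login", "POST /login", "GET /index"])

print(detector.is_suspicious("GET /index"))            # normal traffic: False
print(detector.is_suspicious("GET /login;DROP TABLE")) # unusual injected command: True
```

The weakness of such a sketch is also the one discussed above: it only reacts to deviations from what it has already seen, so the quality of the learning phase is decisive.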

Connected objects, like surveillance cameras, are a new target for attackers. Cyber-defense must therefore be adapted accordingly.

 

The boom of the Internet of Things has brought about a change of scale that reduces the effectiveness of this behavioral approach, since it is impossible to monitor each object individually. In 2016, the Mirai botnet took control of connected surveillance cameras and used them to flood the servers of the DNS provider Dyn with requests. The outcome: websites belonging to Netflix, Twitter, PayPal and other major Internet entities whose domain names were managed by the company became inaccessible. “Mirai has changed things a little,” admits the researcher from Télécom SudParis. Protection must be adapted in light of the vulnerabilities created by connected objects. Monitoring whole fleets of cameras the way we would monitor a website is inconceivable: too expensive, too difficult. “We therefore protect the communication between objects rather than the objects themselves, and we use certain objects as sentinels,” explains Hervé Debar. Since an attack on the cameras affects them all, protecting and observing only a few is enough to detect a general attack. Through collaboration with Airbus, the researcher and his team have shown that monitoring two industrial sensors out of a group of twenty was sufficient for detecting an attack.
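The sentinel idea reduces to a simple observation: a fleet-wide attack touches every object, so watching a small random sample suffices. A toy sketch (the device fields and the anomaly test are invented for the example):

```python
import random

def detect_fleet_attack(devices, sentinel_count, is_anomalous):
    """Sentinel defense sketch: instead of instrumenting every connected
    object, watch a small random sample. If an attack (like Mirai) hits
    the whole fleet at once, any sentinel will expose it.
    `is_anomalous` stands in for per-device behavioral monitoring."""
    sentinels = random.sample(devices, sentinel_count)
    return any(is_anomalous(d) for d in sentinels)

# Twenty identical sensors, all hit by a fleet-wide attack: two sentinels
# are enough, whichever two are picked.
fleet = [{"id": i, "compromised": True} for i in range(20)]
print(detect_fleet_attack(fleet, 2, lambda d: d["compromised"]))  # True
```

An attack targeting only a handful of devices could of course slip past the sentinels; the approach trades exhaustiveness for cost, which is exactly the bargain described above.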

Ensuring resilience

Once the attack has been detected, everything must be done to ensure it has the least possible impact. “To thwart the attacker and seize the initiative, a good solution is the moving target defense,” Frédéric Cuppens explains. This technique is relatively new: the first academic research on the subject was conducted in the early 2010s, and the practice has been spreading among companies for around two years now. It involves moving the targets of the attack to safer locations. This cyber-defense strategy is easiest to understand in the area of cloud computing: the functions of a virtual machine under attack can be moved to another, unaffected machine, so the service continues to be provided. “With this means of defense, all that is needed is for the IP addresses and routing functions to be dynamically reconfigured,” the IMT Atlantique researcher explains. In essence, this means directing all the traffic related to a service through a clone of the attacked system. This technique is a major asset, especially against particularly vicious attacks. “The systems’ resilience is paramount in protecting against polymorphic malware, which changes as it spreads,” says Frédéric Cuppens.
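As an illustration, the dynamic reconfiguration Frédéric Cuppens mentions can be reduced to a routing table that redirects a service to a pre-provisioned clone when its host is attacked (service names and addresses here are made up):

```python
class MovingTargetRouter:
    """Moving-target defense sketch: a routing table maps each service
    to the machine currently serving it. On attack, traffic is
    dynamically re-routed to an unaffected clone."""

    def __init__(self):
        self.routes = {"billing": "10.0.0.5"}
        self.clones = {"10.0.0.5": "10.0.0.42"}  # pre-provisioned clone

    def resolve(self, service):
        return self.routes[service]

    def on_attack(self, machine):
        # Reconfigure routing so every service hosted on the attacked
        # machine keeps running, now served by the clone.
        for service, addr in list(self.routes.items()):
            if addr == machine:
                self.routes[service] = self.clones[machine]

router = MovingTargetRouter()
router.on_attack("10.0.0.5")
print(router.resolve("billing"))  # traffic now reaches the clone: 10.0.0.42
```

The attacker keeps hammering a machine that no longer serves anything, which is precisely how the defender takes back the initiative.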

While the moving target strategy is effective for dematerialized systems, it is more difficult to deploy in physical infrastructures. Connected robots on a production line and fleets of connected objects cannot be cloned and moved at will. For these systems, functional diversification is used instead. “For equipment that runs on specific hardware with specific software, the company can reproduce the system, but with different hardware and software,” explains Frédéric Cuppens. Since security vulnerabilities are inherent in a given piece of hardware or software, if one is attacked, the other should remain functional. This protection does represent an added cost. In response to this argument, the researcher replies that the safety of infrastructures and individuals is at stake. “For an airplane, we are all willing to accept that 80% of the cost be dedicated to safety, because so many lives are at stake. The same is true for infrastructures: losing control of certain types of equipment can cost lives, whether those of nearby employees or, in extreme cases involving critical infrastructure, those of citizens.”
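Functional diversification can be sketched as running the same computation on two independent implementations, standing in for the two hardware and software stacks; an attack specific to one leaves the other intact, and any disagreement between them signals compromise (the checksum functions below are invented for the example):

```python
def diversified_run(input_value, impl_a, impl_b):
    """Functional-diversification sketch: the same function is computed
    by two independently built implementations. A disagreement between
    them signals that one has been compromised or faulted."""
    a, b = impl_a(input_value), impl_b(input_value)
    if a != b:
        raise RuntimeError("implementations disagree: possible compromise")
    return a

# Two independent ways of computing the same 8-bit checksum:
def sum_builtin(data):
    return sum(data) % 256

def sum_recursive(data):
    return 0 if not data else (data[0] + sum_recursive(data[1:])) % 256

print(diversified_run([10, 20, 30], sum_builtin, sum_recursive))  # 60
```

The added cost is visible even in the sketch: everything is computed twice, which is exactly the trade-off weighed against safety in the paragraph above.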

How can cyber-defense be further developed?

People’s attitudes about cybersecurity must change. This is one of the major challenges to address in order to further limit what attackers can achieve. One key is convincing companies of the importance of investing in this area. Another is disseminating good practices. “Today we know which practices ensure better security right from the development stage for software and systems,” says Hervé Debar. “Using languages that are more robust than others, tools that check how programs are written, proven libraries for certain functions, secure programming patterns, drawing up test plans…” Yet these practices are far from routine for developers. All too often, their objective is to deliver a working application or system as quickly as possible, to the detriment of security and robustness.

It is now critical that this paradigm be revised. The use of artificial intelligence raises many questions. While it offers the potential for designing dynamic solutions for detecting intrusions, it also opens the door to new threats. “The principle behind using AI is that it adapts to situations,” explains Frédéric Cuppens. “But if systems are constantly adapting, how will it be possible to determine ahead of time if the changes are caused by AI or an attack?” To prevent cybercriminals from taking advantage of this gray area, the systems’ security must be guaranteed and the way they operate must be transparent. Yet today, these two dimensions are far from being at the forefront of most developers’ minds. “The security of connected objects is being left behind,” says Frédéric Cuppens. And it goes further than the question of processes: “being careful about what we do with information technology is a state of mind,” Hervé Debar adds.

[box type=”info” align=”” class=”” width=””]

A chair for protecting critical infrastructure

Telecommunications, defense, energy… These sectors are vital for the proper functioning of the country. In the event of a breakdown, essential services are no longer provided to citizens. With the rise in connected objects within these critical infrastructures, the risk of cyberattacks is increasing. New cyber-defense programs must therefore be developed to protect them.

This is the whole purpose of the IMT Cybersecurity of Critical Infrastructures Chair. It brings together researchers from IMT Atlantique, Télécom SudParis and Télécom ParisTech to focus on the issue. The scientists work in close collaboration with the industry stakeholders affected by these issues. Airbus, Orange, EDF, La Poste, BNP Paribas, Amossys and Société Générale are all partners of the Chair. They provide the researchers with real-life cases of risks and systems that must be protected, helping to improve the current state of cyber-defense.

Find out more

[/box]

fraud

Fraud on the line

An unsolicited call is not necessarily from an unwelcome salesman. It may instead be an attempted fraud. The telephone network is home to many attacks, most of them aimed at making a profit. These little-known types of fraud are difficult to recognize and difficult to fight.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

How many unwanted phone calls have you received this month? Probably more than you’d like. Some of these calls are undoubtedly from companies you have signed a contract with. This is the case with your telephone operator, which regularly assesses customer satisfaction. Other calls are from telemarketing companies that acquire phone numbers through legal trade agreements; energy suppliers in particular buy lists of contacts in their search for new customers. Some calls, however, are completely fraudulent. Ping calls, for example, ring a number for a few seconds to leave a missed call on the recipient’s telephone. A recipient who returns the call is forwarded to a premium-rate number, often redirected abroad to an expensive destination, where the service they reach offers to sign them up for a contract in exchange for a payment. The International Telecommunication Union considers these “cash-back” schemes abusive. “Some calls are also made by robots which scan lists of numbers to identify whether they belong to individuals or companies,” Aurélien Francillon explains. An expert in network security, this EURECOM researcher works specifically on the issue of telephone fraud.

Although the scientific community is tackling this issue head on, the topic is more extensive than it first appears. Individuals are not the only victims; companies are also vulnerable to these types of attacks. In order to direct external calls and manage calls between internal telephone lines, companies must establish complex telephone systems. “Companies have telephone exchanges which ensure all the private telephones are interconnected,” explains Aurélien Francillon. These exchanges are called PABX units (private automatic branch exchange) and they typically enable functions such as “do not disturb” and “call forwarding”. However, attackers can take control of these exchanges. “Scammers often carry out a PABX attack over the weekend when the employees are not present,” the researcher explains. “Again, they will call international, premium-rate numbers using the controlled lines to make money.” The attack is only detected a few days later. In the meantime, the cybercriminals have potentially made thousands of euros.

Taxonomy of fraud

The first challenge in the fight against such attacks is understanding the motivations and procedures. The same technique, such as PABX hacking, is not necessarily used by all scammers for the same purposes. And several means can be used for the same objective, such as extortion. To make sense of the issue, the researchers at EURECOM developed a taxonomy of fraud. “It is a grid which brings together all the knowledge on the attacks to better classify them and understand the patterns scammers use,” Aurélien Francillon explains. This is a big contribution to the scientific community. Until now there had not been a global and comprehensive vision of the topic of telephone network security. To create this taxonomy, the researcher and a PhD student, Merve Sahin, first needed to define fraud. They believe it is a “means of obtaining illegitimate profit by using a specific technique made possible by a weakness in a system, which is itself due to root causes.” It was in listing these deeper, root causes behind the flaws as well as the resulting weaknesses and techniques they enabled that the researchers succeeded in creating a complete taxonomy in the form of a grid.
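A toy version of such a grid might look like the following, tracing each scheme back from the profit it yields through the technique, the weakness it exploits and the root cause behind it. The entries are illustrative and simplified, not the researchers’ actual taxonomy:

```python
# Toy rendering of a fraud-taxonomy grid: each scheme is traced back
# through its technique, the weakness enabling it and the root cause.
taxonomy = {
    "PABX hacking": {
        "root_cause": "complex corporate telephone systems",
        "weakness": "exposed, poorly secured telephone exchange",
        "technique": "take control of corporate lines over the weekend",
        "profit": "calls to international premium-rate numbers",
    },
    "ping call": {
        "root_cause": "premium-rate revenue sharing",
        "weakness": "recipients call back unknown numbers",
        "technique": "one-ring missed calls at scale",
        "profit": "callback minutes billed at premium rates",
    },
}

def schemes_exploiting(weakness_keyword):
    # Classification query: which schemes rest on a given weakness?
    return [name for name, row in taxonomy.items()
            if weakness_keyword in row["weakness"]]

print(schemes_exploiting("telephone exchange"))  # ['PABX hacking']
```

The point of such a structure is exactly the one made above: once every known scheme is placed in the grid, patterns shared across frauds become queryable rather than anecdotal.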

 

Nearly 150 years of technological developments make studying flaws in the telephone network a complex matter.

 

Thanks to a better understanding of the threats, the scientists can now consider defense strategies. This is the case for “over-the-top” or OTT bypass fraud, which involves diverting a call to make a profit. When a user wants to call someone in a foreign country, the operator delegates the routing of the call to transit operators. These operators transport the call across continents or oceans using their cables. To optimize rates, they can themselves delegate the call routing to local operators who, for example, can terminate the calls for a given country. “Each operator involved in the routing recovers part of the income from the call. Each party’s goal is to sell its routing service to the operators for which it will route the calls at a higher price than it pays the next operators, which take care of terminating the calls.” For one call, over a dozen stakeholders can be involved, without the caller’s operator necessarily knowing it. “This leads to gray areas that leave room for little-known stakeholders whose legitimacy and honesty cannot necessarily be ensured.” Among them are inexpensive, malicious operators who, at the end of the chain, redirect the call to a voice chat application on the recipient’s mobile phone. The operator of the person receiving the call therefore does not receive any income from the call, which is why the fraud is called “over-the-top” (OTT): it passes over the stakeholders who are supposed to be involved.

Because so many stakeholders are involved in this type of fraud, the taxonomy helps to identify the relationships between them. “We had to understand the international routing system, and the billing process between the operators, which works somewhat like a stock market,” Aurélien Francillon explains. After studying the stakeholders and mechanisms, the researchers were able to reflect on how cases of fraudulent routing could be detected… and they realized that there is currently no technical solution to prevent them. It is virtually impossible to determine whether a voice-over-IP stream that arrives at a mobile application, sometimes encrypted and in proprietary formats, comes from a classic telephone call (in which case there has been a bypass) or from a free call made using an application (in which case there is no bypass). OTT bypass fraud is therefore hard to detect. This does not mean that no solution exists, but rather that the solutions must be sought outside technology. “The most relevant solutions on this type of subject are more economic or legal,” the researcher admits. “However, we believe it is crucial to study these phenomena in a scientific manner in order to make the right decisions, even if they are not purely technical.”

Using legal means if necessary

Adapting legislation to prevent these abuses is another approach, one that has already been used elsewhere. To curb spam phone calls, French law provided for an opt-out list for cold sales calls: since 1 June 2016, all consumers have had the right to sign up for the Bloctel list. The service ensures that the lists telemarketers provide are cleaned before the next telemarketing campaign. To estimate the impact of such a measure, Aurélien Francillon and his team did some preliminary work at a European level, signing 800 numbers up for the block lists of 8 European countries in order to comparatively assess the lists’ effectiveness from one country to another. Sometimes the lists truly reduce the number of unwanted calls received; this is the case for France in particular. But sometimes they have the opposite effect. In England, the block lists are sent to the telemarketing companies so that they themselves remove the numbers of individuals who have opted out of telemarketing. “Obviously, some companies are not playing along,” Aurélien Francillon observes. “We even observed a case in which only the numbers on the block list had been called, suggesting that the list had been used to target people.”

This is one of the limitations of the legal approach: just like the technical solution, it depends on how it is implemented. If it is well done, it can prove to be a good alternative when technology does not offer a solution to such complex problems. In the Bloctel example, the solution also relies heavily on information from consumers. “It is important to give them a comprehensive vision of the problem so that they can understand what can be hidden behind a sales call or a missed call on their mobile,” the researcher insists. Thanks to the taxonomy that EURECOM has developed, we expect that the scientific community will be able to better describe and study all cases of potential fraud. Before looking for solutions, this will first help effectively inform individuals and companies about the risks on the line.

[box type=”shadow” align=”” class=”” width=””]

A robot fighting spam calls?

To help combat unwanted calls, developers have designed Lenny, a conversational robot intended to make telemarketers waste as much time as possible. With the voice of an elderly man, he regularly asks to have questions repeated, or begins telling a story about his daughter that is completely unrelated to the telemarketer’s question. Unlike the Alexa and Google Home assistants, Lenny’s interactions are simply prerecorded responses played in a loop during the conversation.

Researchers at EURECOM and Télécom ParisTech have studied what makes this robot work so well: it has made some sellers waste 40 minutes of their time! Based on 200 conversations recorded with Lenny, they classified different types of calls: requests for funding for political parties, spam, aggressive or polite sellers, etc. What makes Lenny so effective is that he appears to have been designed based on the results of conversational analysis, a field whose first results date back to the 1970s. He identifies the silences in conversations, which trigger his prerecorded sequences. His responses are particularly credible because they include changes in intonation and turning points in the conversation.

[/box]

ships

Protecting ships against modern-day pirates

Cybersecurity, long viewed as a secondary concern for naval systems, has become increasingly important in recent years. Ships can no longer be seen as isolated objects at sea, naturally protected from cyber-attacks. Yvon Kermarrec, a researcher in computer science at IMT Atlantique, leads a research chair on cybersecurity in partnership with the French Naval School, Thales and Naval Group. He explains the cyber-risks affecting military and civil ships and the approaches used to prevent them.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Are ships just as vulnerable to cyber-attacks as other systems?

Yvon Kermarrec: Ships are no longer isolated, even in the middle of the Atlantic Ocean. Commercial boats frequently communicate with ports, the authorities and weather centers, for example. They must remain informed about currents, weather conditions, the position of icebergs, etc. They must also inform shipowners and ports of arrival when they are running late. For military ships, there is also the communication required for coordinating operations with other naval ships, fighter planes, control centers, etc. This means there are several data streams flowing to and from a given boat. And these streams are just as vulnerable as those connecting an intelligent car with its infrastructure or a computer to its network.

What are the cyber security risks for a boat?

YK: There are many different types of cyber-attack. For example, a military frigate’s combat system software features several million lines of code, not to mention the management and control systems for the boat itself, in particular for its engines and steering. Possible attacks include altering the digital maps used for navigation or the GPS data, causing the captain to misjudge the boat’s position. This could cause the boat to be lost at sea or run aground by steering it towards a shelf or reef, a technique that can be used by pirates who want to seize the goods in a container ship. It is also possible to attack the engine controls and perform maneuvers at full speed that would destroy the propulsion system and cause the boat to drift. Finally, we could also imagine someone causing the door of a ferry to open in open water, leading to a major leak and possibly a shipwreck.

What do cyber-attacks on boats look like?

YK: They can be generic, meaning that they are generic attacks that impact the boat without specifically targeting it. This could, for example, involve email phishing that lures an employee or tourist on board into opening an email containing a virus. This virus would then spread to the boat’s computer system. There is nothing to prevent ransomware on a boat, which would lock all the computers on board and require a ransom in exchange for unlocking the system. Attacks can also be specific. A criminal can find a way to install spyware on board or convince a systems operator on board to do so. This software would then be able to spy on and control the equipment and transmit sensitive information, such as the boat’s position and actions. The installation of this type of software on a military vessel poses very serious security concerns.

How can all of these attacks be detected?

YK: This is the big challenge we’re facing, and it drives part of our research. The goal is to detect anomalies, meaning events which do not normally occur; we look for the early warning signs of an attack. An order to open an outer door while a ferry is cruising is not normal. To anticipate such a door opening, we try to detect the orders that must precede the action and analyze the context in which they occur.
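The door example can be written as a simple contextual rule (the signals and thresholds below are invented for illustration): the order is only legitimate in a context where it makes sense.

```python
def door_open_allowed(speed_knots, at_dock):
    """Contextual anomaly check, as sketched in the interview: an order
    to open an outer door is only normal when the ferry is stopped at
    dock. Real systems would combine many more signals than these two."""
    return at_dock and speed_knots == 0.0

def check_order(order, speed_knots, at_dock):
    # An out-of-context door-open order is an early warning sign of attack.
    if order == "open_outer_door" and not door_open_allowed(speed_knots, at_dock):
        return "ALARM: door-open order while cruising"
    return "ok"

print(check_order("open_outer_door", speed_knots=18.0, at_dock=False))
print(check_order("open_outer_door", speed_knots=0.0, at_dock=True))
```

The hard research problem is of course not the rule itself but discovering, for each critical action, which preceding orders and which context make it legitimate.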

Once the attack has been detected, what strategy is taken?

YK: The first step is to accept that the system will be attacked. Then we must measure the impact to assess the situation and prevent everything from shutting down. This part is called cyber-resilience: working to ensure that as many functions as possible are maintained, to limit the consequences of the attack. If the navigation system is affected, such as the GPS information, the captain must be able to shut down this part of the system and maintain the steering commands. It is certainly inconvenient to operate without the GPS, but there is always the possibility of getting out a map while the navigation system is restarted or reinstalled. In the case of an action on the external door of a ship out at sea, the captain will decide to shut down the entire control system for opening the doors; if it is necessary to open or close other doors, the teams on board can do so manually. It is a tedious procedure, but a much better alternative than dealing with a leak. Research on detection and response during attacks focuses on finding the means to isolate the various systems and ensure that attacks do not spread from one system to another. At the same time, we are also working on defense systems using cryptographic mechanisms. Here we are confronted with familiar problems and strategies, similar to those used to protect the Internet of Things.

Ultimately this is all very similar to what is done for communicating cars. But are these problems taken into consideration in the marine setting?

YK: They are beginning to make headway, but for a long time boats were seen as little isolated factories out at sea which were not vulnerable to computer attacks. One of the major challenges is therefore to raise awareness. Sailors often see cybersecurity as a constraint. What they see are the actions they are not able to perform. Yet we all know the limitations of models which ban things without explanation… The sailors and all those involved could potentially be affected both individually and collectively.

What is being done to raise awareness about these issues in the marine setting?

YK: We are addressing this issue in the context of the Chair of Cyber Defense for Naval Systems, which includes the French Naval School and IMT Atlantique. For the Naval School, we developed a cyber-security curriculum for cadet officers. We presented concrete case studies and practical assignments on platforms developed by the Chair’s PhD students. In our discussions with businesses, we now see that ship-owners are taking cyber-risks very seriously. On a global level, the National Cybersecurity Agency of France (ANSSI) and the International Maritime Organization (IMO) are working to address the cybersecurity of ships and port infrastructures. They are therefore responding to the growing concerns in the civil maritime sector. Interest in the subject has greatly increased due to current events and threats that have materialized. Currently, IT security risks for ships are taken very seriously because they could greatly impact international trade and the environment. After all, this maritime context is where the term “pirate” first emerged. There are still considerable challenges and issues at stake for individuals and nations.

 

cyber-attacks

Using hardware to defend software against cyber-attacks

Software applications are vulnerable to remote attacks via the internet or local networks and are cyber-attackers’ target of choice. While methods combining hardware and software have already been integrated into the most recent processors to prevent cyber-attacks, solutions based solely on hardware, which by definition cannot be remotely attacked, could soon help defend our computer programs. Jean-Luc Danger, a researcher at Télécom ParisTech, works on these devices that are invulnerable to software attacks.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Nothing and no one is infallible, and software is no exception. In practice, it is very difficult to design a computer program without flaws. Software is therefore on the frontline of cyber-attacks. This is especially true since software attacks, unlike hardware attacks locally targeting computer hardware, can be carried out remotely using a USB port or internet network. These cyber-attacks can affect both individuals and companies. “Compared to software attacks, hardware attacks are much more difficult to carry out. They require the attacker to be in the vicinity of the targeted machine, and then to observe and disrupt its operation,” explains Jean-Luc Danger, a researcher in digital and electronic systems at Télécom ParisTech.

So what are the solutions for protecting software? “We intuitively know that if we design antivirus software to protect against software attacks, this protective software can itself be the victim of an attack,” says Jean-Luc Danger. Hardware protection devices, which cannot be remotely attacked, could therefore offer an effective solution for improving the cybersecurity of computer programs from the threat of software attacks.

Hybrid methods for countering attacks

Software, or a computer program, is a series of instructions performed sequentially. A program can therefore be represented as a graph in which each node is an instruction. This flowchart, or control flow graph, makes it possible to check that the sequence of instructions is executed correctly.

Each point symbolizes an instruction, and the arrows which connect them represent
the sequence for executing these instructions.
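A control flow graph and the corresponding integrity check can be sketched as follows (the instruction names are invented): nodes are instructions, edges are the legal transitions, and any run taking an edge absent from the graph reveals tampering.

```python
# Minimal control-flow graph: each node maps to the set of instructions
# that may legally follow it.
cfg = {
    "load":  {"check"},
    "check": {"grant", "deny"},
    "grant": {"exit"},
    "deny":  {"exit"},
    "exit":  set(),
}

def trace_is_valid(trace):
    # The observed execution must follow the graph's edges step by step.
    return all(b in cfg[a] for a, b in zip(trace, trace[1:]))

print(trace_is_valid(["load", "check", "deny", "exit"]))  # legitimate run: True
print(trace_is_valid(["load", "grant", "exit"]))          # "check" bypassed: False
```

The second trace is exactly the kind of alteration described next: the instructions themselves are legitimate, but their execution sequence is not.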

During a cyber-attack, the attacker identifies a flaw in order to inject code and integrate invalid instructions, or use the existing code to change the execution sequence of the instructions. The malicious code can for example allow the attacker to access the targeted system’s memory.


Here, a code injection attack, with the execution of an invalid series of instructions.


In the case of a code reinjection attack, the flow of the execution of tasks is altered.

“There are many different methods for protecting software: antiviruses, which detect infected programs; solutions for making the code unreadable; programs that check the integrity of the control flow graph…” Jean-Luc Danger explains. Some hybrid solutions, involving both hardware and software, are already integrated into current processors. For example, memory management units give each program a dedicated memory range protected by virtual addressing, which limits the damage in the event of a cyber-attack. Processors can also be equipped with a virtualization device on which a virtual machine runs with its own operating system: if the first system suffers a software attack, the virtual machine can take over.

Although these are hardware solutions, since they involve the processor being physically altered, they require a minimum amount of configuration and software writing in order to work, which makes them vulnerable to software attacks. However, hardware-only solutions are being developed to protect from cyber-attacks.

Stacks of memory “plates” and digital signatures

There is already a fully hardware-based solution that will soon be integrated into Intel processors: shadow stacks. “These hardware stacks offer an interesting solution for preventing cyber-attacks targeting sub-programs,” Jean-Luc Danger explains.

Within a program, an instruction can refer to another series of instructions, which then forms a sub-program. Once the sub-program has run, execution is sent back to the initial starting point to rejoin the main program’s chain of instructions: this is the critical moment, when a cyber-attack can divert execution to an abnormal instruction.


In the event that a sub-program is run
(here, the chain consists of two nodes, to the right of the main program),
the data can be sent back to the wrong instruction during a cyber-attack.

The shadow stack is intended to stop this type of attack. This “stack” mechanism, in which the memory physically behaves like a pile of stacked plates, keeps a physical record of the jumps to and returns from the sub-program. “The node’s address is ‘stacked’ on the first ‘plate’ when the sub-program runs and is ‘unstacked’ once the operation is complete,” explains Jean-Luc Danger. It is therefore impossible to redirect the sub-program through a software attack, since the starting and ending points have been registered physically.
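The mechanism can be simulated in a few lines: a write-protected copy of each return address is kept alongside the ordinary stack (which an exploit can corrupt), and the two are compared on every return. The addresses are of course invented for the example:

```python
class ShadowStackCPU:
    """Shadow-stack sketch: return addresses are 'stacked' in a memory
    the program cannot write, and 'unstacked' on return. A return
    address the attacker overwrote no longer matches the shadow copy."""

    def __init__(self):
        self.call_stack = []  # ordinary stack, corruptible by exploits
        self.shadow = []      # hardware shadow stack, write-protected

    def call(self, return_addr):
        self.call_stack.append(return_addr)
        self.shadow.append(return_addr)

    def ret(self):
        addr, shadow_addr = self.call_stack.pop(), self.shadow.pop()
        if addr != shadow_addr:
            raise RuntimeError("shadow stack mismatch: hijacked return")
        return addr

cpu = ShadowStackCPU()
cpu.call(0x4010)       # enter the sub-program
print(hex(cpu.ret()))  # normal return: 0x4010

cpu.call(0x4010)
cpu.call_stack[-1] = 0xdead  # attacker overwrites the return address
try:
    cpu.ret()
except RuntimeError as e:
    print(e)
```

In the real hardware the comparison costs nothing at run time, and, unlike this Python model, the shadow copy is physically unreachable from software.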

“At Télécom ParisTech we are working with the spin-off Secure-IC to develop HCODE, a fully hardware-based type of protection which complements shadow stacks and is relatively non-invasive with respect to microprocessors’ current structures,” explains Jean-Luc Danger. HCODE checks the integrity of the jumps between the program and its sub-programs, and associates a digital signature (or hash value) with each series of instructions in the program. Together, all the signatures and expected jumps form reference metadata, which is stored in the HCODE hardware module added to the processor. By protecting the integrity of the jumps and series of instructions in this way, HCODE makes it possible to resist both software attacks and physical fault-injection attacks: if a fault is injected into a series of instructions, or even a single instruction, an alarm is triggered and the attack is blocked.
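The signature principle can be sketched as follows. The program layout, instruction names and metadata format are invented, and HCODE computes and checks its signatures in hardware, not in software as here:

```python
import hashlib

def block_signature(instructions):
    # Hash value associated with one series of instructions.
    return hashlib.sha256("\n".join(instructions).encode()).hexdigest()

# Reference metadata computed once from the legitimate program and
# stored in the protected hardware module:
program = {"entry": ["load r1", "cmp r1, 0", "jump check"],
           "check": ["call verify", "ret"]}
metadata = {name: block_signature(body) for name, body in program.items()}

def verify_block(name, executed_instructions):
    # At run time, the executed block must match its recorded signature;
    # any injected or faulted instruction changes the hash.
    return metadata[name] == block_signature(executed_instructions)

print(verify_block("entry", ["load r1", "cmp r1, 0", "jump check"]))  # True
print(verify_block("entry", ["load r1", "jump attacker_code"]))       # False
```

Because the metadata lives in a module that software cannot rewrite, an attacker can neither alter the instructions nor forge matching signatures, which is what makes the check robust against both software and fault-injection attacks.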

“The idea is that this hardware can be added to any type of processor, without changing the core,” Jean-Luc Danger explains. These hardware security measures are impossible to modify through a software attack and are also faster at detecting intrusions. So what are the next steps for the researcher, his team and Secure-IC? “Validate the concept in several types of processors and refine it, reduce the memory required for storing the signatures, and then develop it on an industrial scale.” Perhaps this will allow our software to rest easy…

Also read on I’MTech: Secure-IC: protecting electronic circuits against physical attacks

 

hardware attacks

Hardware attacks, a lingering threat for connected objects

Viruses, malware, spyware and other digital pathologies are not the only way computer systems’ vulnerabilities are exploited. Hardware attacks are not as well-known as these software attacks, but they are just as dangerous. They involve directly exploiting interaction with a system’s electronic components. These sneak attacks are particularly effective against connected objects. Jean-Max Dutertre’s team at Mines Saint-Étienne is committed to developing countermeasures as protection from these attacks.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Enguerrand said: “Ok Google, turn on the light!” and there was light in the living room of his apartment. Voice commands are one of the selling points of Philips Hue connected light bulbs, which also let users change a room’s colors and schedule lighting settings. Yet along with these benefits comes heightened vulnerability to cyber-attacks. In May 2017, a team of Israeli researchers presented various security flaws in these bulbs at a conference in San Jose, California: a typical case of an attack on connected objects used in everyday life. According to Jean-Max Dutertre, a researcher in the security of electronic systems at Mines Saint-Étienne, “this work clearly illustrates the Internet of Things’ vulnerabilities to hardware attacks.”

Cyber threats are often thought of as being limited to viruses and malware. Yet hardware attacks also pose a significant threat to the security of connected objects. “This type of attack, unlike software attacks, targets the hardware component of electronic systems, like the circuits,” the researcher explains. In the case of the Philips Hue lamps, the scientists carried out an attack by observation, also known as a side-channel attack, while a bulb was being updated. When the lamp’s microcontroller receives the data packets, it must handle a heavy load. “The Israeli team observed the power this part consumes,” Jean-Max Dutertre explains. This consumption is correlated with the data being processed. By analyzing the microcontroller’s power variations according to the data it received, they were able to deduce the cryptographic key protecting the update. Once this key was obtained, they used it to spread a modified version of the update to the other bulbs in the series, successfully taking control of them.
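The statistical core of such an observation attack can be illustrated with a toy simulation. Everything below is hypothetical: a one-byte key, a made-up S-box, and simulated noisy “power” measurements stand in for the real hardware and oscilloscope traces. Only the method, correlating each key guess against the measured consumption (known as correlation power analysis), reflects the actual technique.

```python
import random

SBOX = [pow(7, x, 257) % 256 for x in range(256)]   # toy nonlinear S-box
HW = [bin(x).count("1") for x in range(256)]        # Hamming-weight table

SECRET_KEY = 0x3C   # the one-byte key the attacker wants to recover

def measure(p):
    """Simulated power sample: consumption tracks the Hamming weight of the
    key-dependent intermediate value, plus measurement noise."""
    return HW[SBOX[p ^ SECRET_KEY]] + random.gauss(0, 0.5)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [measure(p) for p in plaintexts]

# For each key guess, predict the leakage and correlate it with the traces;
# the correct guess stands out with by far the strongest correlation.
best = max(range(256),
           key=lambda k: abs(pearson([HW[SBOX[p ^ k]] for p in plaintexts],
                                     traces)))
print(hex(best))
```

The attacker never opens the chip or reads its memory: two thousand passive measurements and a correlation test are enough to single out the key byte.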

“In a non-protected electronic circuit, this type of attack works every time,” says the researcher from Mines Saint-Étienne, who is working to counter this type of attack. But hardware security is often forgotten. Either inadvertently or due to ignorance, many companies put all the emphasis on protecting software using cryptography. “The mathematical algorithms are secure in this respect,” Jean-Max Dutertre admits, “but once it has become possible to access the hardware, to observe how it reacts when the system processes information, this security is compromised, because the cryptographic keys can be deduced.”

And this involves many risks, even for seemingly insignificant connected objects. By sending a false update to a connected light bulb, an attacker can make the bulb send back information that includes the user’s personal data. Knowing when a light bulb is lit during the day reveals when a person is home, and consequently when they are absent. This information can then be used to plan a burglary. An attacker can also cause the manufacturer economic losses by making an entire series of connected bulbs unusable. Finally, it is also possible to make many light bulbs send requests to the manufacturer’s sites, saturating the servers so they can no longer respond to legitimate requests: this is referred to as a denial-of-service attack. This causes economic problems for the company, but also degrades the quality of service for real users, whose requests, sent by their light bulbs, can no longer be processed.

Fighting hardware attacks

What measures should then be taken to prevent these hardware attacks? At Mines Saint-Étienne, Jean-Max Dutertre and his team are first working to master the different types of attacks, developing an in-depth understanding in order to provide better protection. In addition to the attack by observation, which involves watching how the hardware reacts, there is also the attack by disturbance, better known as fault injection. In this second case, the attacker deliberately disrupts the hardware while it is processing data. “A quick disturbance of the power supply of the integrated circuit or even the laser illumination of its silicon die will change a bit or byte of the data it computes,” the researcher explains. The hardware’s reaction when it processes the modified data can then be compared to its reaction when it processes unaltered data. This difference again makes it possible to determine the encryption keys and access the transmitted information.
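How a single flipped bit can betray a key can be sketched with a toy cipher. The cipher, its S-box and its key bytes below are all invented for illustration; only the reasoning, keeping just the key guesses consistent with a one-bit fault across several correct/faulty output pairs, mirrors what real differential fault analysis does against production ciphers.

```python
import random

SBOX = [pow(7, x, 257) % 256 for x in range(256)]   # toy bijective S-box
INV = [0] * 256
for x, y in enumerate(SBOX):
    INV[y] = x

K0, K1 = 0xA7, 0x5E        # secret round keys of the toy cipher

def encrypt(p, fault_bit=None):
    """Toy cipher: s = SBOX[p ^ K0]; c = SBOX[s] ^ K1.
    `fault_bit` models a laser flipping one bit of s before the last S-box."""
    s = SBOX[p ^ K0]
    if fault_bit is not None:
        s ^= 1 << fault_bit
    return SBOX[s] ^ K1

# Differential fault analysis: a guess k for K1 is consistent if, after
# peeling off the last S-box, the correct and faulty states differ by
# exactly one bit (the injected fault). Intersect candidates over faults.
random.seed(0)
candidates = set(range(256))
for _ in range(4):
    p = random.randrange(256)
    c, c_faulty = encrypt(p), encrypt(p, fault_bit=random.randrange(8))
    candidates &= {k for k in range(256)
                   if bin(INV[c ^ k] ^ INV[c_faulty ^ k]).count("1") == 1}
print(candidates)   # the surviving candidates include the real K1
```

The true key always survives the filter, while a wrong guess survives each fault only by chance, so a handful of faulty ciphertexts is enough to narrow 256 possibilities down to one.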

 

In the laboratories at Mines Saint-Étienne, the researchers use lasers to inject faults into the electronic systems. This provides them with a better understanding of their behavior in the event of a hardware attack by disturbance.

 

There are several countermeasures for these two kinds of attacks. The first and main category involves preventing the statistical processing of the device’s operating data which the attacker uses to deduce the key. “For example, we can desynchronize the calculations that a connected object will make when it receives data,” Jean-Max Dutertre explains. In short, this means running code and calculations with random delays or in a staggered order, making it harder to link a given task to a peak of activity in the connected object’s processor. Another possibility is masking the data: calculations are performed on masked values, which are only unmasked once the operation is completed. The attacker thus gains no information about the masked intermediate values and cannot recover the real data. The second countermeasure category involves changing the hardware directly. “In this case, we directly modify the circuit of the connected object, by adding sensors for example,” explains Jean-Max Dutertre. “These sensors make it possible to identify disturbance attacks by detecting a laser signal or a change in the power supply.”
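The masking countermeasure can be sketched in a few lines. The computation below is a hypothetical toy (an identity table stands in for a real S-box); the point is that every intermediate value the device actually handles is blinded by a fresh random mask, so its power signature is statistically independent of the key.

```python
import random

HW = [bin(x).count("1") for x in range(256)]   # Hamming-weight table

def masked_lookup(data, key, sbox):
    """Process data without ever handling the key-dependent value in the
    clear: the intermediate is blinded by a fresh random mask."""
    mask = random.randrange(256)
    blinded = data ^ key ^ mask      # what the circuit actually manipulates
    # A real masked implementation would continue in masked form (e.g. with
    # remasked S-box tables); here we unmask immediately for brevity.
    return sbox[blinded ^ mask], HW[blinded]

SBOX = list(range(256))              # identity stand-in for a real S-box
random.seed(0)

def leakage_histogram(key, runs=4000):
    """Distribution of the observable leakage (HW of the blinded value)."""
    counts = [0] * 9
    for _ in range(runs):
        _, hw = masked_lookup(0x00, key, SBOX)
        counts[hw] += 1
    return counts

result, _ = masked_lookup(0x12, 0xA7, SBOX)
print(result == (0x12 ^ 0xA7))       # True: masking preserves the result
print(leakage_histogram(0x00))       # two different keys produce nearly
print(leakage_histogram(0xFF))       # identical leakage distributions
```

Desynchronization works on the same statistical principle from the other direction: random delays misalign the traces, so the attacker’s averages no longer line up with the computation being targeted.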

Major component manufacturers and connected object manufacturers are increasingly taking these countermeasures into consideration. For example, the Mines Saint-Étienne team works in partnership with STMicroelectronics to improve the security of circuits. However, smaller companies are not always aware of hardware vulnerabilities, let alone of the solutions. “Many startups do not know about these types of attacks,” the researcher observes. Yet they represent a large share of connected object manufacturers. In Europe, cybersecurity regulations are changing. Since 9 May 2018, all the Member States of the European Union must implement the directive on the security of network and information systems (the NIS Directive), which provides in particular for stronger national cybersecurity capacities and better cooperation between Member States. In this new context promoting technical advances, the way these hardware security threats are taken into account is likely to improve.

In the meantime, the researchers at Mines Saint-Étienne are continuing to develop new measures to fight against these attacks. “It is quite fun,” says Jean-Max Dutertre with a smile. “We set up methods of defense, and we test them by trying to get past them. When we succeed, we must find new means for protecting us from ourselves.” It’s a little like playing yourself in chess. Yet the researcher recognizes the importance of scientific collaboration in this task. “We need to remain humble: sometimes we think we have found a strong defense, and another team of researchers succeeds in getting round it.” This international teamwork makes it possible to remain on the cutting edge and address the vulnerabilities of even the most powerful technology.