cybersecurity

Cybersecurity: new times, new challenges

Editorial.

Who am I? A white man, almost 30. I wear hoodies and hack websites belonging to prestigious organizations like the CIA from my parents’ basement. Above all, I am thick-skinned. Have you guessed? I am, of course, a stereotypical hacker!

Movies and TV series continue to propagate this false, dated image. But as the internet has changed, along with the practices and infrastructures the network supports, the threats we face are no longer those of the late 20th century. The means used to attack organizations continue to grow, progressively revealing the absurdity of the stereotype of the isolated attacker motivated by dark intentions rather than profit.

This series of articles aims to highlight a few iconic examples of the new cybersecurity challenges, and at the top of the list is: protecting the Internet of Things. Sensors are becoming increasingly prevalent in home automation, sports and fashion, and with them come new threats for potential attacks. Jean-Max Dutertre, a researcher at Mines Saint-Étienne, describes the risks these connected objects present and the solutions being implemented. In a second article, Jean-Luc Danger, researcher at Télécom ParisTech, expands on the list of solutions to these new threats.


“New challenges” does not only refer to new sectors or new systems in need of protection. With the growth of digital solutions in traditional fields, cybersecurity must also be developed in areas where it has long been considered of secondary importance. This is the work of Yvon Kermarrec at IMT Atlantique. As a member of the Research Chair for the Cyberdefense of Naval Systems, he explains why ships, and the entire marine sector, must tackle this issue head on.

The same is true for telephony, a sector which has long been affected by crime, but has benefited from relative indifference in terms of large-scale fraud. Impersonation to extort money, call forwarding schemes and telemarketing abuses are widespread. Aurélien Francillon, a researcher at Eurecom, is seeking first of all to better understand the organization of the participants and the structural causes of fraud. His findings are proving useful in the search for defense strategies.

Thanks to all the research efforts in the area of cybersecurity, the power relationship between attackers and defenders has become more balanced than it was a few years or decades ago. Organizations are increasingly well prepared and able to respond to cyber-attacks. In conclusion, Frédéric Cuppens (IMT Atlantique) and Hervé Debar (Télécom SudParis), both members of the Cybersecurity and Critical Infrastructures Chair at IMT, take a look at this topic. They review the latest technical solutions and strategic approaches that offer protection from even the most insidious threats.

[divider style=”dotted” top=”20″ bottom=”20″]

Since the topic of cybersecurity is so vast and complex, this series of articles—like all of our I’MTech articles—does not attempt to be exhaustive. To further explore this topic, we recommend the related articles from our archives.

cyberdefense

Cyberdefense seeks to regain control

Between attackers and defenders, who is in the lead? In cybersecurity, the attackers have long been viewed as the default winners. Yet infrastructures are becoming better and better at protecting themselves. Although much remains to be done, things are not as imbalanced as they might seem, and research is providing viable cyberdefense solutions to help ensure the security and resilience of computer systems.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

At the beginning of a chess game, the white pieces are always in control. This is a cruel reality for black, who is often forced onto the defensive during the first few minutes of the game. The same is true in the world of cybersecurity: the attacker, like the player with the white pieces, makes the first move. “He makes his choices, and the defender must follow his strategy, which puts the defender in a position of inferiority by default,” observes Frédéric Cuppens, a cybersecurity researcher at IMT Atlantique. This reality shapes the strategies adopted by companies and software publishers, with one difference: unlike the black pieces, cyber-defenders cannot counter-attack. It is illegal for an individual or an organization to respond to a cyber-attack by launching one in return. In this context, the defense plan can be broken down into three parts: protecting oneself to limit the attacker’s initiative, deploying defenses to prevent him from reaching his goal and, above all, ensuring the resilience of the system if the attacker succeeds in his aim.

This last possibility must not be overlooked. “In the past, researchers quickly realized that they would not succeed in preventing every attack,” recalls Hervé Debar, a researcher in cybersecurity at Télécom SudParis. From a technical point of view, “it is impossible to predict all the potential paths of attack,” he explains. And from an economic and organizational perspective, blocking every door, whether preventively or in the event of an attack, makes systems inoperable. It is therefore often more advantageous to accept that an attack will happen, cope with it and end it, than to attempt to prevent it at all costs. Hervé Debar and Frédéric Cuppens, members of the Cybersecurity and Critical Infrastructures Chair at IMT (see box at the end of the article), are very familiar with this trade-off. In a nuclear power plant, for example, shutting everything down to prevent a threat is unthinkable.

Reducing the initiative

Despite these obstacles, cyber-defense is not lagging behind. The technical and financial means of institutional and private organizations are generally greater than those of cybercriminals, with the exception of computer attacks on governments. Once a new flaw has been discovered, the information spreads quickly. “Software publishers react quickly to block these entryways; things travel very fast,” says Frédéric Cuppens. The National Vulnerability Database (NVD) is an American database that lists all known vulnerabilities and serves as a reference for cyber-defense experts. Starting with a few hundred flaws in 1995, it now records thousands of new entries each year, including some 15,000 in 2017. This shows how important sharing information is for enabling the fastest possible response, and how deeply communities of experts are involved in this collective strategy.

“But it’s not enough,” warns Frédéric Cuppens. “These databases are crucial, but they always come after the event,” he explains. To reduce the attacker’s initiative, an attack must be detected as soon as it is launched. The best way to achieve this is to adopt a behavioral approach. To do so, data is collected to analyze how the systems function normally. Once this learning phase is complete, the system’s “natural” behavior is known. Any deviation from this baseline is then considered a potential danger sign. “In the case of attacks on websites, for example, we can detect the injection of unusual commands that betrays the attacker’s actions,” explains Hervé Debar.
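The baseline-and-deviation approach can be sketched with a toy example (illustrative only, not the researchers’ actual system): learn the mean and spread of a metric during normal operation, then flag observations that stray too far from it.

```python
import statistics

def learn_baseline(samples):
    """Learning phase: capture the system's 'natural' behavior as
    the mean and standard deviation of a metric observed normally."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a deviation from the baseline as a potential danger sign."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical requests per minute observed on a healthy web server
normal_traffic = [98, 102, 100, 97, 103, 101, 99, 100]
baseline = learn_baseline(normal_traffic)

print(is_anomalous(101, baseline))  # typical load: no alert
print(is_anomalous(450, baseline))  # burst of unusual commands: alert
```

Real intrusion-detection systems model far richer features than a single metric, but the logic is the same: deviation from a learned baseline is treated as a warning sign.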

Connected objects, like surveillance cameras, are a new target for attackers. Cyber-defense must therefore be adapted accordingly.

 

The boom in the Internet of Things has brought about a change of scale that reduces the effectiveness of this behavioral approach, since it is impossible to monitor each object individually. In 2016, the Mirai botnet took control of connected surveillance cameras and used them to send requests to servers belonging to the DNS provider Dyn. The outcome: websites belonging to Netflix, Twitter, PayPal and other major internet entities whose domain names were managed by the company became inaccessible. “Mirai has changed things a little,” admits the researcher from Télécom SudParis. Protection must be adapted in light of the vulnerabilities created by connected objects. Monitoring whole fleets of cameras in the same way we would monitor a website is inconceivable: it is too expensive and too difficult. “We therefore protect the communication between objects rather than the objects themselves, and we use certain objects as sentinels,” explains Hervé Debar. Since an attack on the cameras affects them all, it is only necessary to protect and observe a few to detect a general attack. In a collaboration with Airbus, the researcher and his team have shown that monitoring two industrial sensors out of a group of 20 is sufficient to detect an attack.
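The sentinel idea rests on a simple observation: if an attack touches the whole fleet, watching only a handful of devices is enough to see it. A toy simulation (hypothetical sensor IDs, not the Airbus study itself):

```python
def attack_detected(fleet_states, sentinels):
    """Return True if any monitored sentinel reports abnormal behavior.
    fleet_states maps sensor id -> True when the sensor is compromised."""
    return any(fleet_states[s] for s in sentinels)

# A fleet of 20 industrial sensors; a Mirai-style attack hits them all
fleet = {i: True for i in range(20)}
sentinels = [3, 17]  # monitor only 2 of the 20

print(attack_detected(fleet, sentinels))  # True: the attack is visible

# Normal operation: nothing compromised, no false alarm
healthy = {i: False for i in range(20)}
print(attack_detected(healthy, sentinels))  # False
```

The economy is clear: the cost of monitoring grows with the number of sentinels, not with the size of the fleet.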

Ensuring resilience

Once an attack has been detected, everything must be done to ensure it has the least possible impact. “To thwart the attacker and seize the initiative, a good solution is to use the moving target defense,” Frédéric Cuppens explains. This technique is relatively new: the first academic research on the subject was conducted in the early 2010s, and the practice has been spreading among companies for around two years. It involves moving the targets of the attack to safer locations. This cyber-defense strategy is easy to understand in the context of cloud computing: the functions of a virtual machine under attack can be moved to another, unaffected machine, so the service continues to be provided. “With this means of defense, all that is needed is for the IP addresses and routing functions to be dynamically reconfigured,” the IMT Atlantique researcher explains. In essence, this means directing all the traffic for a service through a clone of the attacked system. This technique is a major asset, especially against particularly vicious attacks. “The systems’ resilience is paramount in protecting against polymorphic malware, which changes as it spreads,” says Frédéric Cuppens.
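The routing reconfiguration at the heart of the moving target defense can be sketched as follows (service names and addresses are invented; real deployments reconfigure network equipment, not a Python dict):

```python
class MovingTargetRouter:
    """Toy moving-target defense: when a machine is attacked, reroute
    all traffic for its service toward a clean clone."""

    def __init__(self, nodes):
        # service name -> address currently serving it
        self.routes = dict(nodes)

    def relocate(self, service, clone_address):
        """Dynamically reconfigure routing toward an unaffected clone."""
        self.routes[service] = clone_address

    def route(self, service):
        return self.routes[service]

router = MovingTargetRouter({"billing": "10.0.0.5"})
print(router.route("billing"))   # 10.0.0.5

# Attack detected on 10.0.0.5: move the target
router.relocate("billing", "10.0.0.42")
print(router.route("billing"))   # 10.0.0.42 -- the service keeps running
```

The attacker’s reconnaissance is wasted: the address he studied no longer serves the function he was targeting.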

While the moving target strategy is effective for dematerialized systems, it is more difficult to deploy in physical infrastructures. Connected robots on a production line and fleets of connected objects cannot be cloned and moved as needed. For these systems, functional diversification is used instead. “For equipment that runs specific software on specific hardware, the company can reproduce the system, but with different hardware and software,” explains Frédéric Cuppens. Since security vulnerabilities are inherent to a given piece of hardware or software, if one system is attacked, the other should remain functional. But this protection comes at an added cost. To this argument, the researcher replies that the safety of infrastructures and individuals is at stake. “For an airplane, we are all willing to accept that 80% of the cost be dedicated to safety, because so many lives are at stake. The same is true for infrastructures: losing control of certain types of equipment can cost lives, whether those of nearby employees or, in extreme cases involving critical infrastructure, those of citizens.”

How can cyber-defense be further developed?

People’s attitudes about cybersecurity must change. This is one of the major challenges we must address to further limit what attackers can achieve. One of the keys is to convince companies of the importance of investing in this area. Another is disseminating good practices. “Today we know which uses ensure better security right from the development stage for software and systems,” says Hervé Debar. “Using languages that are more robust than others, tools that can check how programs are written, proven libraries for certain functions, secure programming patterns, making test plans…” Yet these practices are far from routine for developers. All too often, their objective is to offer an application or functional system as quickly as possible, to the detriment of security and robustness.

It is now critical that this paradigm be revised. The use of artificial intelligence raises many questions. While it offers the potential for designing dynamic solutions for detecting intrusions, it also opens the door to new threats. “The principle behind using AI is that it adapts to situations,” explains Frédéric Cuppens. “But if systems are constantly adapting, how will it be possible to determine ahead of time if the changes are caused by AI or an attack?” To prevent cybercriminals from taking advantage of this gray area, the systems’ security must be guaranteed and the way they operate must be transparent. Yet today, these two dimensions are far from being at the forefront of most developers’ minds. “The security of connected objects is being left behind,” says Frédéric Cuppens. And it goes further than the question of processes: “being careful about what we do with information technology is a state of mind,” Hervé Debar adds.

[box type=”info” align=”” class=”” width=””]

A chair for protecting critical infrastructure

Telecommunications, defense, energy… These sectors are vital for the proper functioning of the country. In the event of a breakdown, essential services are no longer provided to citizens. With the rise in connected objects within these critical infrastructures, the risk of cyberattacks is increasing. New cyber-defense programs must therefore be developed to protect them.

This is the whole purpose of the IMT Cybersecurity of Critical Infrastructures Chair. It brings together researchers from IMT Atlantique, Télécom SudParis and Télécom ParisTech to focus on the issue. The scientists work in close collaboration with the industry stakeholders affected by these issues. Airbus, Orange, EDF, La Poste, BNP Paribas, Amossys and Société Générale are all partners of the Chair. They provide the researchers with real-life cases of risks and systems that must be protected, helping to improve the current state of cyber-defense.

Find out more

[/box]

fraud

Fraud on the line

An unsolicited call is not necessarily from an unwelcome salesman; sometimes it is a fraud attempt. The telephone network carries many attacks, most of them aimed at making a profit. These little-known types of fraud are difficult to recognize and difficult to fight.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

How many unwanted phone calls have you received this month? Probably more than you would like. Some of these calls are undoubtedly from companies you have signed a contract with, such as your telephone operator, which regularly assesses customer satisfaction. Others are from telemarketing companies that acquire phone numbers through legal trade agreements; energy suppliers in particular buy lists of contacts in their search for new customers. But some calls are completely fraudulent. Ping calls, for example, are made by calling a number for a few seconds to leave a missed call on the recipient’s telephone. A recipient who returns the call is forwarded to a premium-rate number, often redirected abroad to an expensive destination. The service the caller reaches then invites them to sign up for a contract in exchange for a later payment. The International Telecommunication Union considers these “cash-back” schemes abusive. “Some calls are also made by robots which scan lists of numbers to identify whether they belong to individuals or companies,” explains Aurélien Francillon, an expert in network security at EURECOM who works specifically on the issue of telephone fraud.

Although the scientific community is tackling this issue head on, the topic is more extensive than it first appears. Individuals are not the only victims; companies are also vulnerable to these types of attack. In order to direct external calls and manage calls between internal lines, companies must set up complex telephone systems. “Companies have telephone exchanges which ensure all the private telephones are interconnected,” explains Aurélien Francillon. These exchanges, called PABX units (private automatic branch exchanges), typically enable functions such as “do not disturb” and “call forwarding”. However, attackers can take control of them. “Scammers often carry out a PABX attack over the weekend, when the employees are not present,” the researcher explains. “Again, they will call international, premium-rate numbers using the controlled lines to make money.” The attack is only detected a few days later; in the meantime, the cybercriminals have potentially made thousands of euros.

Taxonomy of fraud

The first challenge in fighting such attacks is understanding the attackers’ motivations and methods. The same technique, such as PABX hacking, is not necessarily used by all scammers for the same purposes, and several means can serve the same objective, such as extortion. To make sense of the issue, the researchers at EURECOM developed a taxonomy of fraud. “It is a grid which brings together all the knowledge on the attacks to better classify them and understand the patterns scammers use,” Aurélien Francillon explains. This is a major contribution to the scientific community: until now, there had been no global, comprehensive view of telephone network security. To create this taxonomy, the researcher and a PhD student, Merve Sahin, first needed to define fraud. They describe it as a “means of obtaining illegitimate profit by using a specific technique made possible by a weakness in a system, which is itself due to root causes.” By listing these root causes, the weaknesses they produce and the techniques those weaknesses enable, the researchers were able to build a complete taxonomy in the form of a grid.
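The definition above suggests a natural data structure: each row of the grid links a root cause to a weakness, a technique, and the resulting fraud. A minimal sketch with invented entries (simplified from the kinds of cases this article cites, not the researchers’ actual grid):

```python
from dataclasses import dataclass

@dataclass
class FraudEntry:
    """One row of the taxonomy: a root cause creates a weakness,
    which a technique exploits to obtain illegitimate profit."""
    root_cause: str
    weakness: str
    technique: str
    fraud: str

# Illustrative entries only
taxonomy = [
    FraudEntry("legacy signaling protocols", "weak caller authentication",
               "caller-ID spoofing", "impersonation to extort money"),
    FraudEntry("unattended corporate systems", "default PABX credentials",
               "weekend PABX takeover", "calls to premium-rate numbers"),
]

def frauds_enabled_by(root_cause):
    """Query the grid: which frauds does a given root cause make possible?"""
    return [e.fraud for e in taxonomy if e.root_cause == root_cause]

print(frauds_enabled_by("unattended corporate systems"))
```

The value of such a grid is that it can be queried in both directions: from a root cause down to the frauds it enables, or from an observed fraud back to the weaknesses that should be fixed.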

 

Nearly 150 years of technological developments make studying flaws in the telephone network a complex matter.

 

Thanks to a better understanding of the threats, the scientists can now consider defense strategies. This is the case for “over-the-top” (OTT) bypass fraud, which involves diverting a call to make a profit. When a user calls someone in a foreign country, their operator delegates the routing of the call to transit operators, which carry the call across continents or oceans over their cables. To optimize rates, these operators can in turn delegate the routing to local operators that can, for example, terminate calls in a given country. “Each operator involved in the routing will recover part of the income from the call. Each party’s goal is to sell their routing service to the operators for which they will route the calls at a higher price than they will pay the next operators which will take care of terminating the calls.” For a single call, over a dozen stakeholders can be involved, without the caller’s operator necessarily knowing it. This leads to gray areas that leave room for little-known stakeholders whose legitimacy and honesty cannot always be ensured. Among them are cheap, malicious operators that, at the end of the chain, redirect the call to a voice chat application on the recipient’s mobile phone. The operator of the person receiving the call therefore receives no income from the call, which is why the fraud is called “over-the-top” (OTT): it passes over the stakeholders who are supposed to be involved.
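The economics of this routing chain can be sketched with a toy model (operator names and margins are invented): each operator keeps a margin and pays the remainder to the next operator, and whatever is left at the end of the chain pays for terminating the call.

```python
def route_call(chain, price_paid):
    """Toy model of international call routing: each transit operator
    keeps a margin and pays the rest to the next operator in the chain.
    Returns each operator's revenue and the income left for termination."""
    revenues = {}
    for operator, margin in chain:
        revenues[operator] = price_paid * margin
        price_paid -= revenues[operator]
    return revenues, price_paid

# Hypothetical operators and margins, for a call billed 10 cents/minute
chain = [("origin_op", 0.30), ("transit_1", 0.20), ("transit_2", 0.20)]
revenues, termination = route_call(chain, price_paid=0.10)

print({op: round(r, 4) for op, r in revenues.items()})
print(round(termination, 4))  # what should pay the recipient's operator

# In an OTT bypass, a malicious operator at the end of the chain diverts
# the call to a chat app: the termination income never reaches the
# recipient's real operator.
```

The model also shows why the fraud pays: the bypassing operator pockets the termination income while bearing almost no cost, since the call is delivered over the internet.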

Because so many stakeholders are involved in this type of fraud, the taxonomy helps to identify the relationships between them. “We had to understand the international routing system, and the billing process between the operators, which works somewhat like a stock market,” Aurélien Francillon explains. After studying the stakeholders and mechanisms, the researchers were able to consider how cases of fraudulent routing could be detected... and they realized that there is currently no technical solution to prevent them. It is virtually impossible to determine whether a voice-over-IP stream arriving at a mobile application, sometimes encrypted and in a proprietary format, comes from a classic telephone call (in which case there has been a bypass) or from a free call made using an application (in which case there has not). OTT bypass fraud is therefore hard to detect. This does not mean that no solution exists, but rather that the solutions to look for are not technical ones. “The most relevant solutions on this type of subject are more economic or legal,” the researcher admits. “However, we believe it is crucial to study these phenomena in a scientific manner in order to make the right decisions, even if they are not purely technical.”

Using legal means if necessary

Adapting legislation to prevent these abuses is another approach, one that has already been used in other cases. To curb spam phone calls, French law established an opt-out list for cold sales calls: since 1 June 2016, any consumer has the right to sign up for the Bloctel list. The service ensures that the lists telemarketers provide are cleaned before each telemarketing campaign. To estimate the impact of such a measure, Aurélien Francillon and his team carried out preliminary work at the European level, signing 800 numbers up for the block lists of eight European countries in order to compare the lists’ effectiveness. In some countries, France in particular, the lists truly reduce the number of unwanted calls received. But they can also have the opposite effect. In England, the block lists are sent to the telemarketing companies so that they themselves remove the numbers of individuals who have opted out. “Obviously, some companies are not playing along,” Aurélien Francillon observes. “We even observed a case in which only the numbers on the block list had been called, suggesting that it had been used to target people.”
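The comparison the researchers carried out comes down to measuring the relative change in unwanted calls after registration. A sketch with invented numbers (not the study’s data) shows how a block list can either help or, when leaked, backfire:

```python
def effectiveness(calls_before, calls_after):
    """Relative reduction in unwanted calls after joining a block list.
    A negative value means registration made things worse."""
    return (calls_before - calls_after) / calls_before

# Hypothetical monthly call counts per country
observations = {
    "country_A": (40, 10),  # list works: calls drop sharply
    "country_B": (40, 55),  # list leaked and used for targeting
}

for country, (before, after) in observations.items():
    print(country, round(effectiveness(before, after), 2))
```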

This is one of the limitations of the legal approach: just like the technical solution, it depends on how it is implemented. If it is well done, it can prove to be a good alternative when technology does not offer a solution to such complex problems. In the Bloctel example, the solution also relies heavily on information from consumers. “It is important to give them a comprehensive vision of the problem so that they can understand what can be hidden behind a sales call or a missed call on their mobile,” the researcher insists. Thanks to the taxonomy that EURECOM has developed, we expect that the scientific community will be able to better describe and study all cases of potential fraud. Before looking for solutions, this will first help effectively inform individuals and companies about the risks on the line.

[box type=”shadow” align=”” class=”” width=””]

A robot fighting spam calls?

To help combat unwanted calls, developers have designed Lenny, a conversational robot intended to make telemarketers waste as much time as possible. Speaking with the voice of an elderly man, he regularly asks for questions to be repeated, or begins telling a story about his daughter that is completely unrelated to the telemarketer’s question. Unlike the Alexa and Google Home assistants, Lenny’s interactions are simply prerecorded and played in a loop over the course of the conversation.

Researchers at EURECOM and Télécom ParisTech have studied what makes this robot work so well — it has made some sellers waste 40 minutes of their time! Based on 200 conversations recorded with Lenny, they classified the different types of calls: requests for funding for political parties, spam, aggressive or polite sellers, and so on. What makes Lenny so effective is that he appears to have been designed based on the results of conversation analysis, a field whose first findings date back to the 1970s. He detects the silences in the conversation, which trigger his prerecorded sequences. His responses are particularly credible because they include changes in intonation and respect the turning points of the conversation.
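Lenny’s mechanics are simple enough to sketch: no language understanding at all, just a fixed loop of prerecorded clips, each played when the caller falls silent. The clip lines and silence threshold below are placeholders, not Lenny’s real script:

```python
import itertools

# Prerecorded clips, looped in order (placeholder lines)
CLIPS = [
    "Hello? Sorry, could you say that again?",
    "You know, my daughter, she always tells me...",
    "Hold on, hold on... yes, go on.",
]

class LennyBot:
    """Toy version of Lenny: a loop of prerecorded replies, each
    triggered when the caller stops talking."""

    def __init__(self, silence_threshold=1.5):
        self.silence_threshold = silence_threshold  # seconds of silence
        self._clips = itertools.cycle(CLIPS)

    def on_audio(self, silence_duration):
        """Reply only when the caller has paused long enough."""
        if silence_duration >= self.silence_threshold:
            return next(self._clips)
        return None  # caller still speaking: stay quiet

bot = LennyBot()
print(bot.on_audio(0.4))  # caller mid-sentence: no reply
print(bot.on_audio(2.0))  # pause detected: play the first clip
print(bot.on_audio(1.8))  # next pause: play the next clip in the loop
```

The credibility comes entirely from the recordings and the timing of the turn-taking, which is exactly what the conversation-analysis findings mentioned above would predict.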

[/box]

ships

Protecting ships against modern-day pirates

Cybersecurity, long viewed as a secondary concern for naval systems, has become increasingly important in recent years. Ships can no longer be seen as isolated objects at sea, naturally protected from cyber-attacks. Yvon Kermarrec, a researcher in computer science at IMT Atlantique, leads a research chair on cybersecurity in partnership with the French Naval School, Thales and Naval Group. He explains the cyber-risks affecting military and civil ships and the approaches used to prevent them.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Are ships just as vulnerable to cyber-attacks as other systems?

Yvon Kermarrec: Ships are no longer isolated, even in the middle of the Atlantic Ocean. Commercial ships frequently communicate with ports, the authorities and weather centers, for example. They must stay informed about currents, weather conditions, the position of icebergs and so on. They must also inform shipowners and ports of arrival when they are running late. For military ships, there is also the communication required for coordinating operations with other naval ships, fighter planes, control centers, etc. This means there are several data streams flowing to and from a given ship. And these streams are just as vulnerable as those connecting an intelligent car to its infrastructure or a computer to its network.

What are the cyber security risks for a boat?

YK: There are many different types of cyber-attack. A military frigate’s combat system software, for example, contains several million lines of code, not to mention the management and control systems for the boat itself, in particular for its engines and steering. An attacker might alter the digital maps used for navigation or the GPS data, causing the captain to misjudge the boat’s position. This could cause the boat to be lost at sea or run aground by steering it toward a shelf or reef. This is a technique that can be used by pirates who want to seize the goods on a container ship. It is also possible to attack the engine controls and perform maneuvers at full speed that would destroy the propulsion system and leave the boat adrift. Finally, we could also imagine someone causing the door of a ferry to open in open water, leading to a major leak and possibly shipwreck.

What do cyber-attacks on boats look like?

YK: They can be generic, meaning they affect the boat without specifically targeting it. This could, for example, involve a phishing email that lures an employee or tourist on board into opening a message containing a virus, which then spreads to the boat’s computer system. There is nothing to prevent ransomware on a boat: it would lock all the computers on board and demand a ransom in exchange for unlocking the system. Attacks can also be targeted. A criminal can find a way to install spyware on board, or convince a systems operator on board to do so. This software can then spy on and control the equipment and transmit sensitive information, such as the boat’s position and actions. The installation of this type of software on a military vessel poses very serious security concerns.

How can all of these attacks be detected?

YK: This is the big challenge we’re facing, and it drives part of our research. The goal is to detect anomalies, events that do not occur in normal operation; we look for early warning signs of attacks. A message ordering an outer door to open while a ferry is cruising is not normal. To anticipate a door opening, we try to detect the orders that must precede the action and analyze the context in which they occur.
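The door example above can be read as a context-aware rule: an order that is legitimate in port becomes an alarm at sea. A minimal sketch, with invented ship states and command names:

```python
# Invented (ship state, command) pairs that are abnormal in context
SUSPICIOUS = {
    ("cruising", "open_outer_door"),  # legitimate in port, not at sea
    ("docked", "full_speed_ahead"),
}

def check_order(ship_state, command):
    """Flag commands that are abnormal in the ship's current context."""
    if (ship_state, command) in SUSPICIOUS:
        return "ALERT"
    return "OK"

print(check_order("docked", "open_outer_door"))    # OK: normal in port
print(check_order("cruising", "open_outer_door"))  # ALERT: warning sign
```

A real system would track sequences of orders and their timing rather than single commands, but the principle is the same: the context decides whether an order is an anomaly.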

Once the attack has been detected, what strategy is taken?

YK: The first step is to accept that the system will be attacked. Then we must measure the impact, assess the situation and prevent everything from shutting down. This part is called cyber-resilience: working to ensure that as many functions as possible are maintained, to limit the consequences of the attack. If the navigation system is affected, such as the GPS information, the captain must be able to shut down that part of the system while maintaining the steering commands. It is certainly inconvenient to operate without GPS, but there is always the possibility of getting out a map while the navigation system is restarted or reinstalled. In the case of an action on the external door of a ship out at sea, the captain will decide to shut down the entire control system for opening the doors; if other doors need to be opened or closed, the crew on board can do so manually. It is a tedious procedure, but a much better alternative than dealing with a leak. Research on detection and response during attacks focuses on finding the means to isolate the various systems and ensure that attacks do not spread from one system to another. At the same time, we are also working on defense systems based on cryptographic mechanisms. Here we are confronted with familiar problems and strategies, similar to those used to protect the Internet of Things.

Ultimately this is all very similar to what is done for communicating cars. But are these problems taken into consideration in the marine setting?

YK: They are beginning to make headway, but for a long time boats were seen as little isolated factories out at sea, invulnerable to computer attacks. One of the major challenges is therefore to raise awareness. Sailors often see cybersecurity as a constraint; what they see are the actions they are no longer able to perform. Yet we all know the limitations of rules that ban things without explanation... especially since the sailors and everyone else involved could be affected, both individually and collectively.

What is being done to raise awareness about these issues in the marine setting?

YK: We are addressing this issue in the context of the Chair of Cyber Defense for Naval Systems, which includes the French Naval School and IMT Atlantique. For the Naval School, we developed a cyber-security curriculum for cadet officers. We presented concrete case studies and practical assignments on platforms developed by the Chair’s PhD students. In our discussions with businesses, we now see that ship-owners are taking cyber-risks very seriously. On a global level, the National Cybersecurity Agency of France (ANSSI) and the International Maritime Organization (IMO) are working to address the cybersecurity of ships and port infrastructures. They are therefore responding to the growing concerns in the civil maritime sector. Interest in the subject has greatly increased due to current events and threats that have materialized. Currently, IT security risks for ships are taken very seriously because they could greatly impact international trade and the environment. After all, this maritime context is where the term “pirate” first emerged. There are still considerable challenges and issues at stake for individuals and nations.

 

cyber-attacks

Using hardware to defend software against cyber-attacks

Software applications are vulnerable to remote attacks via the internet or local networks and are cyber-attackers’ target of choice. While methods combining hardware and software have already been integrated into the most recent processors to prevent cyber-attacks, solutions based solely on hardware, which by definition cannot be remotely attacked, could soon help defend our computer programs. Jean-Luc Danger, a researcher at Télécom ParisTech, works on these devices that are invulnerable to software attacks.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Nothing and no one is infallible, and software is no exception. In practice, it is very difficult to design a computer program without flaws. Software is therefore on the frontline of cyber-attacks. This is especially true since software attacks, unlike hardware attacks locally targeting computer hardware, can be carried out remotely using a USB port or internet network. These cyber-attacks can affect both individuals and companies. “Compared to software attacks, hardware attacks are much more difficult to carry out. They require the attacker to be in the vicinity of the targeted machine, and then to observe and disrupt its operation,” explains Jean-Luc Danger, a researcher in digital and electronic systems at Télécom ParisTech.

So what are the solutions for protecting software? “We intuitively know that if we design antivirus software to protect against software attacks, this protective software can itself be the victim of an attack,” says Jean-Luc Danger. Hardware protection devices, which cannot be remotely attacked, could therefore offer an effective solution for improving the cybersecurity of computer programs from the threat of software attacks.

Hybrid methods for countering attacks

Software, or a computer program, is a series of instructions performed sequentially. A program can therefore be represented as a graph in which each node is an instruction. This graph, called the control flow graph, makes it possible to check that the sequence of nodes is executed correctly.

Each point symbolizes an instruction, and the arrows which connect them represent the sequence for executing these instructions.
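To make the idea concrete, here is a minimal sketch (in Python, with a hypothetical four-instruction program, not one drawn from the article) of how a control flow graph can be used to check that instructions execute in a valid order:

```python
# Edges of a toy control flow graph: each pair (a, b) means that
# instruction b is allowed to follow instruction a.
VALID_EDGES = {
    ("load", "add"),
    ("add", "store"),
    ("store", "ret"),
}

def trace_is_valid(trace):
    """Return True if every consecutive pair of executed
    instructions is an edge of the control flow graph."""
    return all(edge in VALID_EDGES for edge in zip(trace, trace[1:]))

# The expected sequence follows the graph...
print(trace_is_valid(["load", "add", "store", "ret"]))   # True
# ...while a diverted sequence (e.g. after code injection) does not.
print(trace_is_valid(["load", "store", "ret"]))          # False
```

A real integrity checker works on machine instructions and basic blocks rather than named nodes, but the principle is the same: any transition not present in the graph signals a diverted execution.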

During a cyber-attack, the attacker identifies a flaw in order to inject code and integrate invalid instructions, or use the existing code to change the execution sequence of the instructions. The malicious code can for example allow the attacker to access the targeted system’s memory.


Here, a code injection attack, with the execution of an invalid series of instructions.


In the case of a code-reuse attack, the execution flow of the program is altered.

“There are many different methods for protecting software: antiviruses, which detect infected programs, solutions for making the code unreadable, programs that check the integrity of the control flow graph…” Jean-Luc Danger explains. Some hybrid solutions, involving both hardware and software, are already integrated into current processors. For example, memory management units give each program a dedicated memory range protected by virtual addressing, which limits the damage in the event of a cyber-attack. In addition, processors can be equipped with a virtualization device on which a virtual machine can be installed with its own operating system. If the first system suffers a software attack, the virtual machine can take over.

Although these are hardware solutions, since they involve physically altering the processor, they require a minimum amount of configuration and software writing in order to work, which makes them vulnerable to software attacks. However, hardware-only solutions are being developed to protect against cyber-attacks.

Stacks of memory “plates” and digital signatures

There is already a fully hardware-based solution that will soon be integrated into Intel processors: shadow stacks. “These hardware stacks offer an interesting solution for preventing cyber-attacks targeting sub-programs,” Jean-Luc Danger explains.

Within a program, an instruction can call another series of instructions, which forms a sub-program. Once the sub-program has run, execution returns to the point from which it was called, to continue the main program’s chain of instructions: this is the critical moment when a cyber-attack can divert execution to an abnormal instruction.


When a sub-program is run (here, the chain of two nodes to the right of the main program), execution can be returned to the wrong instruction during a cyber-attack.

The shadow stack is intended to stop this type of attack. This “stack” mechanism, in which the memory is physically stored in the form of “stacked plates”, leaves a physical trail of the nodes travelling to and from the sub-program. “The node’s address is ‘stacked’ on the first ‘plate’ when the sub-program runs and is ‘unstacked’ once the operation is complete,” explains Jean-Luc Danger. It is therefore impossible to redirect the sub-program through a software attack, since the starting and ending points have been registered physically.
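The mechanism can be sketched in a few lines of Python (a toy model with made-up addresses; in a real processor the shadow stack lives in protected hardware, not in program memory):

```python
# Toy model of a shadow stack: on each call, the return address is
# "stacked" in storage the program cannot write to; on each return,
# the address the program wants to jump to is compared against it.
class ShadowStack:
    def __init__(self):
        self._stack = []          # hardware-only storage in a real design

    def on_call(self, return_address):
        self._stack.append(return_address)       # stack the address

    def on_return(self, claimed_address):
        expected = self._stack.pop()             # unstack and compare
        if claimed_address != expected:
            raise RuntimeError("control-flow hijack detected")
        return claimed_address

shadow = ShadowStack()
shadow.on_call(0x4010)            # main program calls a sub-program
shadow.on_return(0x4010)          # normal return: addresses match
```

If an attacker overwrites the in-memory return address, the value presented at `on_return` no longer matches the physically recorded one, and the hijack is caught.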

“At Télécom ParisTech we are working with the spin-off Secure-IC to develop HCODE, a fully hardware-based type of protection which complements shadow stacks and is relatively non-invasive with respect to current microprocessor structures,” explains Jean-Luc Danger. HCODE checks the integrity of the jumps between the program and its sub-programs and associates a digital signature (or hash value) with each series of instructions in the program. Together, all the signatures and expected jumps form reference metadata, which is stored on the HCODE hardware module added to the processor. By protecting the integrity of the jumps and series of instructions in this way, HCODE makes it possible to resist both software attacks and physical fault-injection attacks. If an error is injected into a series of instructions, or even a single instruction, an alarm is triggered and the attack is blocked.
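The signature idea can be illustrated with a short, purely hypothetical sketch. The real HCODE module computes and compares signatures in hardware, not in software, and its exact hash function is not specified here; SHA-256 stands in for it:

```python
import hashlib

# Hypothetical illustration of the signature principle: each series of
# instructions gets a hash, stored in advance as reference metadata;
# at run time the executed series is re-hashed and compared against it.
def signature(block):
    return hashlib.sha256(" ".join(block).encode()).hexdigest()

program = {"entry": ["load", "add"], "exit": ["store", "ret"]}
metadata = {name: signature(block) for name, block in program.items()}

def check_block(name, executed_block):
    """Raise an alarm if the executed instructions were tampered with."""
    if signature(executed_block) != metadata[name]:
        raise RuntimeError("instruction-integrity alarm")

check_block("entry", ["load", "add"])       # unmodified block: passes
```

Injecting a fault into any instruction changes the recomputed signature, so the comparison fails and the alarm fires.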

“The idea is that this hardware can be added to any type of processor, without changing the core,” Jean-Luc Danger explains. These hardware security measures are impossible to modify through a software attack and are also faster at detecting intrusions. So what are the next steps for the researcher, his team and Secure-IC? “Validate the concept in several types of processors and refine it, reduce the memory required for storing the signatures, and then develop it on an industrial scale.” Perhaps this will allow our software to rest easy…

Also read on I’MTech: Secure-IC: protecting electronic circuits against physical attacks

 


Hardware attacks, a lingering threat for connected objects

Viruses, malware, spyware and other digital pathologies are not the only way computer systems’ vulnerabilities are exploited. Hardware attacks are not as well-known as these software attacks, but they are just as dangerous. They involve directly exploiting interaction with a system’s electronic components. These sneak attacks are particularly effective against connected objects. Jean-Max Dutertre’s team at Mines Saint-Étienne is committed to developing countermeasures as protection from these attacks.

[divider style=”normal” top=”20″ bottom=”20″]

This article is part of our series on Cybersecurity: new times, new challenges.

[divider style=”normal” top=”20″ bottom=”20″]

 

Enguerrand said: “Ok Google, turn on the light!” and there was light in the living room of his apartment. Voice commands are one of the selling points of Philips Hue connected light bulbs, which also offer the possibility of changing the room’s colors and scheduling lighting settings. Yet along with these benefits comes the setback of heightened vulnerability to cyber-attacks. In May 2017, a team of Israeli researchers presented various flaws in these bulbs at a conference in San Jose, California: a typical case of an attack on connected objects used in everyday life. According to Jean-Max Dutertre, a researcher in the security of electronic systems at Mines Saint-Étienne, “this work clearly illustrates the Internet of Things’ vulnerabilities to hardware attacks.”

Cyber threats are often thought of as being limited to viruses and malware. Yet hardware attacks also pose a significant threat to the security of connected objects. “This type of attack, unlike software attacks, targets the hardware component of electronic systems, like the circuits,” the researcher explains. In the case of the Philips Hue lamps, the scientists carried out an attack by observation while a bulb was being updated. When the lamp’s microcontroller receives the data packets, it must handle a heavy load. “The Israeli team observed the power this part consumes,” Jean-Max Dutertre explains. Yet this consumption is correlated with data processing. By analyzing the microcontroller’s power variations based on the data it receives, they were able to deduce the cryptographic key protecting the update. Once this was obtained, they used it to spread a modified version of the update, protected with the same key, to the other bulbs in the series and successfully took control of them.
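The principle of such an attack by observation can be simulated in a few lines. This is a generic textbook model, not the Israeli team's actual measurement setup: power consumption is approximated by the Hamming weight (number of 1 bits) of the processed value plus noise, and the recovered key byte is the guess that correlates best with the traces:

```python
import random

# Hamming weight lookup table for all byte values.
HW = [bin(v).count("1") for v in range(256)]

random.seed(1)
SECRET_KEY = 0x3C                              # hypothetical key byte
inputs = [random.randrange(256) for _ in range(300)]
# Simulated power traces: leakage proportional to the Hamming weight
# of (input XOR key), plus measurement noise.
traces = [HW[x ^ SECRET_KEY] + random.gauss(0, 0.3) for x in inputs]

def correlation(xs, ys):
    """Pearson correlation between two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# The key guess whose predicted consumption best matches the traces wins.
recovered = max(range(256),
                key=lambda k: correlation([HW[x ^ k] for x in inputs], traces))
```

With a few hundred simulated traces the correct guess stands out clearly, which is why unprotected circuits leak their keys so reliably.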

“In a non-protected electronic circuit, this type of attack works every time,” says the researcher from Mines Saint-Étienne, who is working to counter this type of attack. But hardware security is often forgotten. Either inadvertently or due to ignorance, many companies put all the emphasis on protecting software using cryptography. “The mathematical algorithms are secure in this respect,” Jean-Max Dutertre admits, “but once it has become possible to access the hardware, to observe how it reacts when the system processes information, this security is compromised, because the cryptographic keys can be deduced.”

And this involves many risks, even for seemingly insignificant connected objects. By sending a false update to a connected light bulb, an attacker can make the bulb send back information that includes the user’s personal data. Knowing when a light bulb is lit during the day reveals when a person is home, and consequently when they are absent. This information can then be used to plan a burglary. An attacker can also cause the manufacturer economic losses by making an entire series of connected bulbs unusable. Finally, it is also possible to make many light bulbs send requests to the manufacturer’s sites, thus saturating the servers, which can no longer respond to real requests: this is referred to as a denial-of-service attack. It causes economic problems for the company, but also negatively affects real users, whose requests, sent by their light bulbs, can no longer be processed, degrading the quality of service.

Fighting hardware attacks

What measures should then be taken to prevent these hardware attacks? At Mines Saint-Étienne, Jean-Max Dutertre and his team are first working to master the different types of attacks, developing an in-depth understanding in order to provide better protection. In addition to the attack by observation, which involves watching how the hardware reacts, there is also the attack by disturbance. In this second case, the attacker voluntarily disrupts the hardware while it is processing data. “A quick disturbance of the power supply of the integrated circuit or even the laser illumination of its silicon die will change a bit or byte of the data it computes,” the researcher explains. The hardware’s reaction when it processes the modified data can then be compared to its reaction when it processes unaltered data. This difference again makes it possible to determine the encryption keys and access the sent information.

 

In the laboratories at Mines Saint-Étienne, the researchers use lasers to inject faults into the electronic systems. This provides them with a better understanding of their behavior in the event of a hardware attack by disturbance.

 

There are several countermeasures for these two kinds of attacks. The first and main category involves preventing the statistical processing of the device’s operating data which the attacker uses to deduce the key. “For example, we can desynchronize the calculations that a connected object will make when it receives data,” Jean-Max Dutertre explains. In short, this means running codes and calculation operations in a way that delays or staggers them. This makes it harder to understand which task is linked to a more significant activity performed by the connected object’s processors. Another possibility is masking the data: calculations are performed on hidden data and the result is revealed only once the operation is completed. The attacker therefore cannot obtain any information on the hidden data and cannot gain access to the real data. The second countermeasure category involves changing the hardware directly. “In this case, we directly modify the circuit of the connected object, by adding sensors for example,” explains Jean-Max Dutertre. “These sensors make it possible to identify disturbance attacks by detecting a laser signal or a change in the power supply.”
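The data-hiding (masking) countermeasure can be sketched as follows. This toy example only covers a XOR operation, for which Boolean masking is straightforward; masking real cryptographic computations requires considerably more care:

```python
import secrets

# Minimal sketch of data masking: the device only ever computes on data
# XORed with a random mask, so its power profile no longer depends
# directly on the secret; the mask is removed once the operation is done.
def masked_xor(secret_byte, public_byte):
    mask = secrets.randbits(8)        # fresh random mask for every run
    masked = secret_byte ^ mask       # the raw secret never appears
    masked_result = masked ^ public_byte
    return masked_result ^ mask       # unmasking restores the true result

assert masked_xor(0xA5, 0x0F) == 0xA5 ^ 0x0F
```

Because a fresh random mask is drawn on every run, the intermediate values the device manipulates are statistically independent of the secret, which defeats the correlation step of an observation attack.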

Major component manufacturers and connected object manufacturers are increasingly beginning to consider these countermeasures. For example, the Mines Saint-Étienne team works in partnership with STMicroelectronics to improve the security of circuits. However, smaller companies are not always aware of hardware vulnerabilities and know even less about the solutions. “Many startups do not know about these types of attacks,” the researcher observes. Yet they represent a large community among connected object manufacturers. In Europe, regulations on cybersecurity are changing. Since 9 May, all Member States of the European Union must implement the directive on the security of networks and information systems, which provides in particular for strengthening national cybersecurity capacities and improving cooperation between Member States. In this new context promoting technical advances, consideration of these hardware security threats is likely to improve.

In the meantime, the researchers at Mines Saint-Étienne are continuing to develop new measures to fight against these attacks. “It is quite fun,” says Jean-Max Dutertre with a smile. “We set up methods of defense, and we test them by trying to get past them. When we succeed, we must find new means for protecting us from ourselves.” It’s a little like playing yourself in chess. Yet the researcher recognizes the importance of scientific collaboration in this task. “We need to remain humble: sometimes we think we have found a strong defense, and another team of researchers succeeds in getting round it.” This international teamwork makes it possible to remain on the cutting edge and address the vulnerabilities of even the most powerful technology.

 


GDPR: managing consent with the blockchain?

Blockchain and GDPR: two of the most-discussed keywords in the digital sector in recent months and years. At Télécom SudParis, Maryline Laurent has decided to bring the two together. Her research focuses on using the blockchain to manage consent to personal data processing.

 

The GDPR has come into force at last! Six years have gone by since the European Commission first proposed reviewing the personal data protection rules. The European regulation, adopted in April 2016, was closely studied by companies for over two years in order to ensure compliance by the 25 May 2018 deadline. Of the 99 articles that make up the GDPR, the seventh is especially important for customers and users of digital services. It specifies that any request for consent “must be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language.” Moreover, any company (known as a data controller) responsible for processing customers’ personal data “shall be able to demonstrate that consent was given by the data subject to the processing of their personal data.”

Although these principles seem straightforward, they introduce significant constraints for companies. Fulfilling both of them (transparency and accountability) is no easy task. Maryline Laurent, a researcher at Télécom SudParis specializing in network security, is tackling this problem. As part of her work for IMT’s Personal Data Values and Policies Chair, of which she is a co-founder, she has worked on a blockchain-based solution in a B2C environment1. The approach relies on smart contracts recorded in public blockchains such as Ethereum.

Maryline Laurent describes the beginning of the consent process that she and her team have designed between a customer and a service provider: “The customer contacts the company through an authenticated channel and receives a request from the service provider containing the elements of consent, which must be proportionate to the service provided.” Based on this request, customers can prepare a smart contract specifying the information for which they agree to authorize data processing. “They then create this contract in the blockchain, which notifies the service provider of the arrival of a new consent,” continues the researcher. The company verifies that the contract corresponds to its expectations and signs it. In this way, the fact that the two parties have approved the contract is permanently recorded in a block of the chain. Once customers have made sure that everything has been properly carried out, they may provide their data. All subsequent processing of this data will also be recorded in the blockchain by the service provider.

 

A smart contract approved by the Data Controller and User to record consent in the blockchain

 

Such a solution allows users to understand what they have consented to. Since they write the contract themselves, they have direct control over which uses of their data they accept. The process also ensures multiple levels of security. “We have added a cryptographic dimension specific to the work we are carrying out,” explains Maryline Laurent. When the smart contract is generated, it is accompanied by cryptographic material that makes it appear, publicly, to be independent of any user. This makes it impossible to link the customer of the service to the contract recorded in the blockchain, which protects their interests.

Furthermore, personal data is never entered directly in the blockchain. To prevent the risk of identity theft, “a hash function is applied to the personal data,” says the researcher. This function computes a fingerprint of the data from which the original cannot be recovered. This hashed data is then recorded in the register, allowing customers to monitor the processing of their data without fear of an external attack.
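As a minimal illustration (with a made-up e-mail address), hashing with a standard function such as SHA-256 produces exactly this kind of one-way fingerprint:

```python
import hashlib

# Only a one-way fingerprint of the personal data is written to the
# public register, never the data itself.
def fingerprint(personal_data: str) -> str:
    return hashlib.sha256(personal_data.encode("utf-8")).hexdigest()

record = fingerprint("alice@example.com")     # hypothetical customer data

# The fingerprint is stable, so the customer can recognise register
# entries that concern their data...
assert record == fingerprint("alice@example.com")
# ...but the register entry itself reveals nothing readable.
assert "alice" not in record
```

In practice a keyed or salted hash would be preferable, since a plain hash of guessable data (like an e-mail address) can be tested by brute force; the sketch only shows the one-way property described above.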

 

Facilitating audits

This solution is not only advantageous for customers. Companies also benefit from blockchain-based consent. Due to the transparency of public registers and the unalterable, time-stamped records that define the blockchain, service providers can meet auditing requirements. Article 24 of the GDPR requires the data controller to “implement appropriate technical and organizational measures to ensure and be able to demonstrate that the processing of personal data is performed in compliance with this Regulation.” In short, companies must be able to provide proof of compliance with the consent requirements for their customers.

“There are two types of audits,” explains Maryline Laurent. “A private audit is carried out by a third-party organization that decides to verify a service provider’s compliance with the GDPR.” In this case, the company can provide the organization with all the consent documents recorded in the blockchain, along with the associated operations. A public audit, on the other hand, is carried out to ensure that there is sufficient transparency for anyone to verify that everything appears to be in compliance from the outside. “For security reasons, of course, the public only has a partial view, but that is enough to detect major irregularities,” says the Télécom SudParis researcher. For example, any user may ensure that once he/she has revoked consent, no further processing is performed on the data concerned.

In the solution studied by the researchers, customers are relatively familiar with the use of the blockchain. They are not necessarily experts, but must nevertheless use software that allows them to interface with the public register. The team is already working on blockchain solutions in which customers would be less involved. “Our new work2 was presented at the 2018 IEEE 11th Conference on Cloud Computing, held in San Francisco from 2 to 7 July 2018. It makes the customer peripheral to the process and instead involves two service providers in a B2B relationship,” explains Maryline Laurent. This system fits the case where a data controller outsources data to a data processor, enabling consent to be transferred to that processor. “Customers would no longer have any interaction with the blockchain, and would go through an intermediary that would take care of recording all the consent elements.”

Between applications for customers and those for companies, this work paves the way for using the blockchain for personal data protection. Although the GDPR has come into force, it will take several months for companies to become 100% compliant. Using the blockchain could therefore be a potential solution to consider. At Télécom SudParis, this work has contributed to “thinking about how the blockchain can be used in a new context, for the regulation,” and is backed up by the solution prototypes. Maryline Laurent’s goal is to continue this line of thinking by identifying how software can be used to automate the way GDPR is taken into account by companies.

 

1 N. Kaâniche, M. Laurent, “A blockchain-based data usage auditing architecture with enhanced privacy and availability”, The 16th IEEE International Symposium on Network Computing and Applications, NCA 2017, ISBN: 978-1-5386-1465-5/17, Cambridge, MA, USA, 30 Oct. 2017-1 Nov. 2017.

2 N. Kaâniche, M. Laurent, “BDUA: Blockchain-based Data Usage Auditing”, IEEE 11th Conference on Cloud Computing, San Francisco, CA, USA, 2-7 July 2018.


What is a supercritical fluid?

Water, like any chemical substance, can exist in a gaseous, liquid or solid state… but that’s not all! When sufficiently heated and pressurized, it becomes a supercritical fluid, halfway between a liquid and a gas. Jacques Fages, a researcher in process engineering, biochemistry and biotechnology at IMT Mines Albi, answers our questions on these fluids which, among other things, can be used to replace polluting industrial solvents or dispose of toxic waste. 

 

What is a supercritical fluid?

Jacques Fages: A supercritical fluid is a chemical compound maintained above its critical point, which is defined by a specific temperature and pressure. The critical pressure of water, for example, is the pressure beyond which it can be heated to over 100°C without becoming a gas. Similarly, the critical temperature of CO2 is the temperature beyond which it can be pressurized without liquefying. When the critical temperature and pressure of a substance are exceeded at the same time, it enters the supercritical state. Unable to liquefy completely under the effect of pressure, but also unable to gasify completely under the effect of temperature, the substance is maintained in a physical state between a liquid and a gas: its density will be equivalent to that of a liquid, but its fluidity will be that of a gas.

For CO2, which is the most commonly used fluid in supercritical state, the critical temperature and pressure are relatively low: 31°C and 74 bars, or 73 times atmospheric pressure. Because CO2 is also an inert molecule, inexpensive, natural and non-toxic, it is used in 90% of applications. The critical point of water is much higher: 374°C and 221 bars respectively. Other molecules such as hydrocarbons can also be used, but their applications remain much more marginal due to risks of explosion and pollution.
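The definition boils down to a simple double condition, which can be checked directly against the figures quoted above:

```python
# Critical points quoted in the interview: (temperature in °C, pressure in bar).
CRITICAL_POINT = {
    "CO2": (31.0, 74.0),
    "H2O": (374.0, 221.0),
}

def is_supercritical(substance, temp_c, pressure_bar):
    """A substance is supercritical when both its critical
    temperature and its critical pressure are exceeded."""
    t_crit, p_crit = CRITICAL_POINT[substance]
    return temp_c > t_crit and pressure_bar > p_crit

print(is_supercritical("CO2", 40, 100))    # True: modest process conditions
print(is_supercritical("H2O", 40, 100))    # False: far below water's critical point
```

This is why CO2 dominates industrial use: its supercritical state is reached at temperatures and pressures far easier to sustain than water's.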

What are the properties of supercritical CO2 and the resulting applications?

JF: Supercritical CO2 is a very good solvent because its density is similar to that of a liquid, but it has much greater fluidity – similar to that of a gas – which allows it to penetrate the micropores of a material. The supercritical fluid can selectively extract molecules; it can also be used for particle design.

A device designed for implementing extraction and micronization processes of powders.

 

Supercritical CO2 can be used to clean medical devices, such as prostheses, in addition to the sterilization methods used. It removes all the impurities to obtain a product that is clean enough to be implanted in the human body, making it a very useful complement to current sterilization methods. In pharmacy, it allows us to improve the bioavailability of certain active ingredients by improving their solubility or speed of dissolution. At IMT Mines Albi, we worked on this type of process for the Pierre Fabre laboratories, allowing the company to develop its own research center on supercritical fluids.

Supercritical CO2 has applications in many sectors such as materials, construction, biomedical healthcare, pharmacy and agri-food as well as the industry of flavorings, fragrances and essential oils. It can extract chemical compounds without the use of solvents, guaranteeing a high level of purity.

Can supercritical CO2 be used to replace the use of polluting solvents?

JF: Yes, supercritical CO2 can replace existing and often polluting organic solvents in many fields of application and prevents the release of harmful products into the environment. For example, manufacturers currently use large quantities of water for dyeing textiles, which must be retreated after use because it has been polluted by pigments. Dyeing processes using supercritical CO2 allow textiles to be dyed without the release of chemicals. Rolls of fabric are placed in an autoclave, a sort of large pressure cooker designed to withstand high pressures, which pressurizes and heats the CO2 to its critical state. Once dissolved in the supercritical fluid, the pigment permeates to the core of the rolls of fabric, even those measuring two meters in diameter! The CO2 is then restored to normal atmospheric pressure and the dye is deposited on the fabric while the pure gas returns into the atmosphere or, better still, is recycled for another process.

But, watch out! We are often criticized for releasing CO2 into the atmosphere and thus contributing to global warming. This is not true: we use CO2 that has already been generated by an industry. We therefore don’t actually produce any and don’t increase the amount of CO2 in the atmosphere.

Does supercritical water also have specific characteristics?

JF: Supercritical water can be used for destroying hazardous, toxic or corrosive waste in several industries. Supercritical H2O is a very powerful oxidizing environment in which organic molecules are rapidly degraded. This fluid is also used in biorefinery: it gasifies or liquefies plant residues, sawdust or cereal straw to transform them into liquid biofuel, or methane and hydrogen gases which can be used to generate power. These solutions are still in the research stage, but have potential large-scale applications in the power industry.

Are supercritical fluids used on an industrial scale?

JF: Supercritical CO2 is not an oddity found only in laboratories! It has become an industrial process used in many fields. A French company called Diam Bouchage, for example, uses supercritical CO2 to extract trichloroanisole, the molecule responsible for cork taint in wine. It is a real commercial success!

Nevertheless, this remains a relatively young field of research that only developed in the 1990s. The scope for progress in the area remains vast! The editorial committee of the Journal of Supercritical Fluids, of which I am a member, sees the development of new applications every year.

 


Will the earth stop rotating after August 1st?

By Natacha Gondran, researcher at Mines Saint-Étienne, and Aurélien Boutaud.
The original version of this article (in French) was published in The Conversation.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]t has become an annual summer tradition, much like France’s Music Festival or the Tour de France. Every August, right when French people are focused on enjoying their vacation, an alarming story begins to spread through the news: it’s Earth Overshoot Day!

From this fateful date through to the end of the year, humanity will be living at nature’s expense as it runs up an ecological debt. Just imagine vacationers on the beach or at their campsite discovering, through the magic of mobile networks, the news of this imminent breakdown.

Will the Earth cease to rotate after August 1st? The answer is… no. There is no need to panic (well, not quite yet). Once again this year, the Earth will continue to rotate even after Earth Overshoot Day has come and gone. In the meantime, let’s take a closer look at how this date is calculated and how much it is worth from a scientific standpoint.

Is the ecological footprint serious?

Earth Overshoot Day is calculated based on the results of the “ecological footprint”, an indicator invented in the early 1990s by two researchers from the University of British Columbia in Vancouver. Mathis Wackernagel and William Rees sought to develop a synoptic tool that would measure the toll of human activity on the biosphere. They then thought of estimating the surface area of the land and the sea that would be required to meet humanity’s needs.

More specifically, the ecological footprint measures two things: on the one hand, the biologically productive surface area required to produce certain renewable resources (food, textile fibers and other biomass); on the other, the surface area that should be available to sequester certain pollutants in the biosphere.

In the early 2000s the concept proved extremely successful, with a vast number of research articles published on the subject, which contributed to making the calculation of the ecological footprint more robust and detailed.

Today, based on hundreds of statistical data entries, the NGO Global Footprint Network estimates humanity’s ecological footprint at approximately 2.7 hectares per capita. However, this global average conceals huge disparities: while an American’s ecological footprint exceeds 8 hectares, that of an Afghan is less than 1 hectare.

Overconsumption of resources

It goes without saying: the Earth’s biologically productive surfaces are finite. This is what makes the comparison between humanity’s ecological footprint and the planet’s biocapacity so relevant. This biocapacity represents approximately 12 billion hectares (of forests, cultivated fields, pasture land and fishing areas), or an average of 1.7 hectares per capita in 2012.

The comparison between ecological footprint and biocapacity therefore results in this undeniable fact: each year, humanity consumes more services from the biosphere than it can regenerate. In fact, it would take one and a half planets to sustainably provide for humanity’s needs. In other words, by August, humanity has already consumed the equivalent of a full year of the planet’s biocapacity.

These calculations are what led to the famous Earth Overshoot Day.
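As a rough illustration, the calculation can be reproduced from the rounded per-capita figures quoted in this article (the official date is computed by Global Footprint Network from far more detailed national accounts, so it does not land on exactly the same day):

```python
from datetime import date, timedelta

# Back-of-the-envelope version of the Overshoot Day calculation,
# using the rounded per-capita figures quoted in the article.
biocapacity = 1.7        # hectares available per person
footprint = 2.7          # hectares consumed per person

# The share of the year covered by the planet's annual biocapacity.
days_available = 365 * biocapacity / footprint
overshoot_day = date(2018, 1, 1) + timedelta(days=round(days_available) - 1)
print(overshoot_day)     # lands in August with these rounded inputs
```

With these rounded inputs the date falls in mid-August; the official 2018 date, August 1st, comes from more precise figures, and as the article notes below, a more exhaustive accounting would push the date earlier still.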

Legitimate criticism

Of course, the ecological footprint is not immune to criticism. One criticism is that it focuses its analysis on the living portion of natural capital only and fails to include numerous issues, such as the pressure on mineral resources and chemical and nuclear pollution.

The accounting system for the ecological footprint is also very anthropocentric: biocapacity is estimated based on the principle that natural surfaces are at humanity’s complete disposal, ignoring the threats that human exploitation of ecosystems can pose for biodiversity.

Yet most criticism is aimed at the way the ecological footprint of fossil fuels is calculated. In fact, those who designed the ecological footprint based the concept on the observation that fossil fuels were a sort of “canned” photosynthetic energy, since they resulted from the transformation of organic matter that decomposed millions of years ago. The combustion of this matter therefore amounts to transferring carbon of organic origin into the atmosphere. In theory, this carbon could be sequestered in the biosphere… If only the biological carbon sinks were sufficient.

Therefore, what the ecological footprint measures is in fact a “phantom surface” of the biosphere that would be required to sequester the carbon that is accumulating in the atmosphere and causing the climate change we experience. This methodological convention makes it possible to convert tons of CO₂ into “sequestration surfaces”, which can then be added to the “production surfaces”.

While this is a clever principle, it poses two problems: first, almost the entire deficit observed by the ecological footprint is linked to the combustion of fossil fuels; and second, the choice of the coefficient between tons of CO₂ and sequestration surfaces is questionable, since different hypotheses can produce significantly different results.
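To see how sensitive the result is to the chosen coefficient, the conversion can be sketched as follows. Both parameters here (the forest sequestration rate and the share of emissions absorbed by the oceans) are illustrative assumptions, not the Global Footprint Network’s official coefficients.

```python
def sequestration_surface_ha(co2_tonnes: float,
                             forest_uptake_t_per_ha: float,
                             ocean_fraction: float = 0.3) -> float:
    """Convert fossil CO2 emissions (tonnes/year) into the 'phantom' forest
    surface (hectares) that would be needed to sequester them.
    Both coefficients are illustrative assumptions."""
    land_share = co2_tonnes * (1.0 - ocean_fraction)  # CO2 left after ocean uptake
    return land_share / forest_uptake_t_per_ha

# The same 4.5 t of CO2 per capita under two plausible uptake hypotheses:
print(sequestration_surface_ha(4.5, 2.6))  # ≈ 1.21 ha
print(sequestration_surface_ha(4.5, 5.2))  # ≈ 0.61 ha — half the surface
```

Doubling the assumed sequestration rate halves the computed surface, which is precisely why the choice of coefficient weighs so heavily on the final deficit.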

Is the ecological deficit underestimated?

Most of this criticism was anticipated by the designers of the ecological footprint.

Based on the principle that “everything simple is false, everything complex is unusable” (Paul Valéry), they opted for methodological choices that would produce aggregated results that could be understood by the average citizen. However, it should be noted that, for the most part, these choices were made to ensure that the ecological deficit was not overestimated. Therefore, a more rigorous or more exhaustive calculation would increase the observed deficit… And thus bring an even earlier “celebration” of Earth Overshoot Day.

Finally, it is worth noting that this observation of an ecological overshoot is now widely confirmed by another scientific community which has, for the past ten years, worked in more detail on the “planetary boundaries” concept.

This work revealed nine areas of concern which represent ecological thresholds beyond which the conditions of life on Earth could no longer be guaranteed, since we would be leaving the stable state that has characterized the planet’s ecosystem for 10,000 years.

For three of these issues, it appears the limits have already been exceeded: the species extinction rate and the balance of the biogeochemical cycles of nitrogen and phosphorus. We are also dangerously close to the thresholds for climate change and land use. In addition, we cannot rule out the possibility of new causes for concern arising in the future.

Earth Overshoot Day is therefore worthy of our attention, since it reminds us of this inescapable reality: we are exceeding several of our planet’s ecological limits. Humanity must take this reality more seriously. Otherwise, the Earth might someday continue to rotate… but without us.

 

[divider style="normal" top="20" bottom="20"]

Aurélien Boutaud and Natacha Gondran co-authored “L’empreinte écologique” (Éditions La Découverte, 2018).

aneurysm

A digital twin of the aorta to prevent aneurysm rupture

15,000 Europeans die each year from the rupture of an aneurysm in the aorta. Stéphane Avril and his team at Mines Saint-Étienne are working to improve prevention. To do so, they are developing a digital twin of the artery of a patient with an aneurysm. This 3D model makes it possible to simulate the evolution of an aneurysm over time and better predict the effect of a surgically implanted prosthesis. Stéphane Avril talks to us about this biomechanics research project and reviews the causes of this pathology along with the current state of knowledge on aneurysms.

 

Your research focuses on the pathologies of the aorta and aneurysm rupture in particular. Could you explain how this occurs?   

Stéphane Avril

Stéphane Avril: The aorta is the largest artery in our body. It leaves the heart and distributes blood to the arms and brain, goes back down to supply blood to the intestines and then divides in two to supply blood to the legs. The wall of the aorta is a little bit like our skin. It is composed of practically the same proteins and the tissues are very similar. It therefore becomes looser as we age. This phenomenon may be accelerated by other factors such as tobacco or alcohol. It is an irreversible process that results in an enlarged diameter of the artery. When there is significant dilation, it is called an aneurysm. This is the most common pathology of the aorta. The aneurysm can rupture, which is often lethal for the individual. In Europe, some 15,000 people die each year from a ruptured aneurysm.

Can the appearance of an aneurysm be predicted?

SA: No, it’s very difficult to predict where and when an aneurysm will appear. Certain factors are morphological. For example, some aneurysms result from the malformation of an aortic valve: 1% of the population has only two of the three leaflets that make up this part of the heart. As a result, the blood is pumped irregularly, which leads to a microinjury on the wall of the aorta, making it more prone to damage. One out of two individuals with this malformation develops an aneurysm, usually between the ages of 40 and 60. There are also genetic factors that lead to aneurysms earlier in life, between the ages of 20 and 40. Then there are the effects of ageing, which make populations over 60 more likely to develop this pathology. It is complicated to determine which factors predominate over the others, especially since an individual declared healthy at 30 or 40 may then start smoking, which will affect the evolution of the aorta.

If aneurysms cannot be predicted, can they be treated?

SA: In biology, extensive basic research has been conducted on the aortic system. This has allowed us to understand a lot about what causes aneurysms and how they evolve. Although specialists cannot predict an aneurysm’s appearance, they can say why the pathology appeared in a certain location instead of another, for example. For patients who already have an aneurysm, this also means that we know how to identify the risks related to the evolution of the pathology. However, no medication exists yet. Current solutions rely rather on surgery to implant a prosthesis or an endoprosthesis — a stent covered with fabric — to limit pressure on the damaged wall of the artery. Our work, carried out with the Sainbiose joint research unit [run by INSERM, Mines Saint-Étienne and Université Jean Monnet], has focused on gathering everything that is known so far about the aorta and aneurysms in order to propose digital models.

What is the purpose of these digital models?

SA: The model should be seen as a 3D digital twin of the patient’s aorta. We can perform calculations on it. For example, we study how the artery evolves naturally, whether or not there is a high risk of aneurysm rupture, and if so, where exactly in the aorta. The model can also be used to analyze the effect of a prosthesis on the aneurysm. We can determine whether or not surgery will really be effective and help the surgeon choose the best type of prosthesis. This use of the model to assist with surgery led to the creation of a startup, Predisurge, in May 2017. Practitioners are already using it to predict the effect of an operation and calculate the risks.

Read on IMTech: Biomechanics serving healthcare

How do you go about building this twin of the aorta?  

SA: The first data we use comes from imaging. Patients undergo CAT scans and MRIs. The MRIs give us information about blood flow because we can take 10 to 20 images of the same area over the duration of a cardiac cycle. This provides us with information about how the aorta compresses and expands with each heartbeat. Based on this dynamic, our algorithms can trace the geometry of the aorta. By combining this data with pressure measurements, we can deduce the parameters that control the mechanical behavior of the wall, especially elasticity. We then relate this to the wall’s composition of elastin, collagen and smooth muscle cells. This gives us a very precise idea about all the parts of the patient’s aorta and its behavior.
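To illustrate how elasticity can be deduced from pressure and diameter measurements, one classical indicator in vascular biomechanics is Peterson’s pressure-strain elastic modulus. This is a standard textbook measure, not necessarily the parameter identification the team actually performs; the example pressures and diameters are invented.

```python
def peterson_modulus_kpa(p_sys_mmhg: float, p_dia_mmhg: float,
                         d_sys_mm: float, d_dia_mm: float) -> float:
    """Peterson's pressure-strain elastic modulus:
    Ep = (P_sys - P_dia) / ((D_sys - D_dia) / D_dia), returned in kPa.
    A stiffer wall expands less with each heartbeat, giving a higher Ep."""
    MMHG_TO_KPA = 0.133322
    pulse_pressure_kpa = (p_sys_mmhg - p_dia_mmhg) * MMHG_TO_KPA
    circumferential_strain = (d_sys_mm - d_dia_mm) / d_dia_mm
    return pulse_pressure_kpa / circumferential_strain

# 120/80 mmHg blood pressure; aortic diameter expanding from 30 mm to 31.5 mm:
print(round(peterson_modulus_kpa(120, 80, 31.5, 30), 1))  # ≈ 106.7 kPa
```

The same principle, applied over the full cardiac cycle and across the whole 3D geometry, is what allows the model’s mechanical parameters to be fitted patient by patient.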

Are the digital twins intended for all patients?

SA: That’s one of the biggest challenges. We would like to have a digital twin for each patient as this would allow us to provide personalized medicine on a large scale. This is not yet the case today. For now, we are working with groups of volunteer patients who are monitored every year as part of a clinical study run by the Saint-Étienne University hospital. Our digital models are combined with analyses by doctors, allowing us to validate these models and talk to professionals about what they would like to be able to find using the digital twin of the aorta. We know that as of today, not all patients can benefit from this tool. Analyzing the data collected, building the 3D model, setting the right biological properties for each patient… all this is too time-consuming for wide-scale implementation. At the same time, what we are trying to do is identify the groups of patients who would most benefit from this twin. Is it patients who have aneurysms caused by genetic factors? For which age groups can we have the greatest impact? We also want to move towards automation to make the tool available to more patients.

How can the digital twin tool be used on a large scale?  

SA: The idea would be to include many more patients in our validation phase to collect more data. With a large volume of data, it is easier to move towards artificial intelligence to automate processing. To do so, we have to monitor large cohorts of patients in our studies. This means we would have to shift to a platform incorporating doctors, surgeons and researchers, along with imaging device manufacturers, since this is where the data comes from. This would help create a dialogue between all the various stakeholders and show professionals how modeling the aorta can have a real impact. We already have partnerships with other IMT network schools: Télécom SudParis and Télécom Physique Strasbourg. We are working together to improve the state of the art in image processing techniques. We are now trying to include imaging professionals. In order to scale up the tool, we must also expand the scope of the project. We are striving to do just that.
