
OMNI: transferring social sciences and humanities to the digital society

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.

[divider style=”normal” top=”20″ bottom=”20″]

Technology transfer also exists in social sciences and the humanities! The OMNI platform in Brittany proves this by placing its research activities at the service of organizations. Attached to the Breton scientific interest group called M@rsouin (which IMT Atlantique manages), it brings together researchers and professionals to study the impact of digital technology on society. The relevance of the structure’s approach has earned it a place within the “technology platform” offering of the Télécom & Société Numérique Carnot Institute (see insert at the end of the article). Nicolas Jullien, a researcher in Digital Economy at IMT Atlantique and Manager of OMNI, tells us more about the way in which organizations and researchers collaborate on topics at the interface between digital technology and society.

 

What is the role of the OMNI platform?

Nicolas Jullien: Structurally, OMNI is attached to the scientific interest group M@rsouin, which comprises four universities and graduate schools in Brittany and, more recently, three universities in Pays de la Loire*. For the past 15 years, this network has served the regional objective of having a research and study capability on ICT, the internet and, more generally, what is today referred to as digital technology. OMNI is the research network’s tool for proposing studies on the impact of digital technology on society. The platform brings together practitioners and researchers and analyzes the major questions that public or private organizations may have. It then sets up programs to collect and evaluate information to answer these questions. Depending on the needs, we can carry out questionnaire surveys – quantitative studies – or interview surveys, which are more qualitative. We also guarantee the confidentiality of responses, which is obviously important in the context of the GDPR. Above all, this is a guarantee of neutrality between the party that wishes to collect information and the respondents.

So is OMNI a platform for making connections and structuring research?

NJ: Yes. In fact, OMNI has existed for as long as M@rsouin, and it covers the stage just before the research phase itself. If an organization has questions about digital technology and its impact and wants to work with the researchers at M@rsouin to collect and analyze information to provide answers, it goes through OMNI. We help frame the problem and express, or even identify, the needs. We then investigate whether there is a real research interest in the issue. If there is, we mobilize researchers at M@rsouin to define the questions and the most suitable protocol for collecting the information, and we carry out the collection and analysis.

What scientific skills can you count on?

NJ: M@rsouin has more than 200 researchers in the social sciences and humanities. Topics of study range from e-government to e-education, by way of social inclusion, employment, consumption, economic models, and the way organizations and work operate. The disciplines are highly varied and allow us to take a very comprehensive approach to the impact of digital technology on an organization, population or territory… We have researchers in education sciences, ergonomics, cognitive and social psychology, political science and, of course, economists and sociologists. But we also have disciplines which the general public might expect less, but which are equally important in the study of digital technology and its impacts. These include geography, urban planning, management sciences and legal expertise, which has been closely involved since awareness of the importance of personal data became widespread.

The connection between digital technology and geography may seem surprising. What is a geographer’s contribution, for example, to the question of digital technology?

NJ: One of the questions raised by digital technology is that of access to online resources. Geographers are specifically interested in the relationship between people, their resources and their territory. Incorporating geography allows us to study the link between territory and the consumption of digital resources, or even, more radically, to question the relevance of physical territory in studies of the internet’s influence. It is also a discipline that allows us to examine certain factors favoring innovation. Can we innovate everywhere in France? What influence does an urban or rural territory have on innovation? These are questions asked in particular by chambers of commerce and industry, regional authorities and organizations such as French Tech.

Why do these organizations come to see you? What are they looking for in a partnership with a scientific interest group?

NJ: I would say that these partners are looking for a new perspective. They want new questions or a specialist point of view or expert assessment in complex areas. By working with researchers, they are forced to define their problem clearly and not necessarily seek answers straight away. We are able to give them the breathing space they need. But we can only do so if our researchers can make proposals and be involved in partners’ problems. We propose services, but are not a consultancy agency: our goal remains to offer the added value of research.

Can you give an example of a partnership?

NJ: In 2010 we began a partnership with SystemGIE, a company that acts as an intermediary between large businesses and small suppliers. It manages the insertion of these suppliers into the purchasing or production processes of large clients. It is a fairly tricky positioning: you have to understand the strategies of the suppliers and the large companies, and the tools and processes to put in place… We supported SystemGIE in defining its atypical economic model. This is applied research, because we try to understand where the value lies and the role digital technology plays in structuring these intermediaries. That is an example of a partnership with a company, but our biggest partner remains the Brittany Regional Council. We have just finished a survey with it on craftspeople, asking: how do craftspeople use digital technology? How does their online presence affect their activity?

How does the Carnot label help OMNI?

NJ: First and foremost it is a recognition of our expertise and relevance for organizations. It also provides better visibility at a national institutional level, allowing us to further partnerships with public organizations across France, as well as providing better visibility among private actors. This will allow us to develop new, nationwide partnerships with companies on the subject of the digitization of society and the industry of the future.

* The members of M@rsouin are: Université de Bretagne Occidentale, Université de Rennes 1, Université de Rennes 2, Université de Bretagne Sud, IMT Atlantique, ENSAI, ESPE de Bretagne, Sciences Po Rennes, Université d’Angers, Université du Mans, Université de Nantes.

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]

Guillaume Duc

Télécom Paris | #connectedcar #cybersecurity

Guillaume Duc has been an associate professor at Télécom Paris since 2009, in the Information Processing and Communication Laboratory (LTCI). He holds an engineering degree in telecommunications (2004) and a PhD in computer science (2007) from Télécom Bretagne (now IMT Atlantique). His research focuses on the interactions between the hardware components required to execute applications (processor, memory, devices, etc.) and cybersecurity, from vulnerabilities introduced by hardware to new hardware mechanisms that can be developed to increase the security of systems. He also heads the “Connected Cars and CyberSecurity” research chair.



GDPR: towards values and policies

On May 25, 2018, the GDPR came into effect. This new regulation requires administrations and companies in the EU member states to comply with the law on the protection of personal data. Since its creation in 2013, the IMT Research Chair Values and Policies of Personal Information (CVPIP) has aimed to help businesses, citizens and public authorities in their reflections on the collection, use and sharing of personal information. In this article, Claire Levallois-Barth, coordinator of the Chair, and Ivan Meseguer, co-founder, look back at the geopolitical and economic context in which the GDPR is embedded and the obstacles that remain to its effective implementation.

[divider style=”dotted” top=”20″ bottom=”20″]

The original version of this article was published on the VPIP Chair website. This Chair brings together researchers from Télécom ParisTech, Télécom SudParis and Institut Mines Télécom Business School, and is supported by the Fondation Mines-Télécom.

[divider style=”dotted” top=”20″ bottom=”20″]

We were all expecting it.

The General Data Protection Regulation (GDPR)[1] came into force on May 25, 2018. This milestone gave rise to numerous reactions and events on the part of companies and institutions. As the Belgian Data Protection Authority put it, there is undoubtedly “a new wind coming, not a hurricane!” – which seems to have blown all the way across the Atlantic Ocean, as The Washington Post pointed out the creation of a “de-facto global standard that gives Americans new protections and the nation’s technology companies new headaches”.[2]

Companies are not the only ones having such “headaches”: EU Member States are also facing them, as they are required to implement the regulation. France,[3] Austria, Denmark, Germany, Ireland, Italy, the Netherlands, Poland, the United Kingdom and Sweden have already updated their national general law in order to align it with the GDPR; but to this day, Belgium, the Czech Republic, Finland, Greece, Hungary and Spain are still at the stage of draft implementation acts.

And this is despite the provisions and timeline of the regulation having been officially laid down as early as May 4, 2016.

The same actually goes for French authorities, as some of them have also asked for extra time. Indeed, shortly before the GDPR took effect, local authorities indicated they weren’t ready for the Regulation, even though they had been aware of the deadline since 2016, just like everyone else.

Sixty French senators further threatened to refer the matter to the Constitutional Council, and then actually did, requesting a derogation period.

In schools and universities, GDPR is getting increasingly significant, even critical, to ensure both children’s and teachers’ privacy.

The issue of social uses and practices being conditioned as early as primary school has been studied by the Chair Values and Policies of Personal Information (CVPIP) for many years now, and is well exemplified by major use cases such as the rise of smart toys and the obvious and increasing involvement of U.S. tech giants in the education sector.

As if that wasn’t enough, the geographical and economic context of the GDPR is now also an issue. Indeed, if nothing is done to clarify the situation, GDPR credibility might soon be questioned by two major problems:

  • U.S. non-compliance with the EU-U.S. Privacy Shield agreement, which was especially exposed by the Civil Liberties Committee (LIBE) of the European Parliament;[4]
  • The signing into law on March 23, 2018 – i.e. shortly before GDPR enforcement – of the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) by Donald Trump.

The CLOUD Act unambiguously authorises U.S. authorities to access user data stored outside the United States by U.S. companies. At first glance, this isn’t sending a positive and reassuring message as to the U.S.’s readiness to simply comply with European rules when it comes to personal data.

Besides, we obviously should not forget the Cambridge Analytica scandal, which led to multiple hearings of Mark Zuckerberg by astounded U.S. and EU institutions, despite Facebook having announced its compliance with GDPR through an update of its forms.

None Of Your Business (Noyb), the non-profit organisation founded by Austrian lawyer Max Schrems, filed four complaints against tech giants, including Facebook, over non-compliance with the notion of consent. These complaints reveal how hard it is to protect the EU model in such a global and digital economy.[5]

This truly European model, which involves neither surveillance capitalism nor dictatorial surveillance, is based on compliance with the values shared by our Member States in their common pact. We should refer to Article 2 of the Treaty on European Union as often as needed:

“The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail”.

Furthermore, Article 7 of the Charter of Fundamental Rights of the European Union clearly and explicitly provides that “everyone has the right to respect for his or her private and family life, home and communications”.

Such core values are reflected not only by the GDPR, but also by the whole body of legislation under construction of which it is part, which includes:

  • The Draft ePrivacy Regulation, which aims to extend the scope of current Directive 2002/58/EC to over-the-top (OTT) services such as WhatsApp and Skype as well as to metadata; [6]
  • The draft Regulation on the free flow of non-personal data,[7] which has generated heated debates over the definitions of “non-personal data” and “common data spaces” (personal and non-personal data)[8] and which, according to MEP Anna Maria Corazza Bildt, aims to establish the free flow of data as the fifth freedom in the EU’s single market.[9]

Besides, the framework on cybersecurity is currently being reviewed in order to implement a proper EU policy that respects citizens’ privacy and personal data as well as EU values. 2019 will undoubtedly be the year of the EU Cybersecurity Act.[10]

A proper European model, respectful of EU values, is therefore under construction.

It is already inspiring and giving food for thought to other countries and regions of the world, including the United States, land of the largest tech giants.

In California, the U.S.’s most populous state, no fewer than 629,000 people signed the petition that led California lawmakers to pass the California Consumer Privacy Act on June 28, 2018.[11]

The Act, which takes effect on January 1, 2020, broadens the definition of “personal information” by including tracking data and login details, and contains provisions similar to the GDPR’s on:

  • Individuals’ ability to control their personal information, with new rights regarding transparency, access, portability, objection, deletion and choice of the information collected;
  • The protection of minors, with a prohibition on selling or disclosing the personal information of a consumer under 16 years of age, “unless affirmatively authorised”;
  • Personal data breaches, with the right to bring a civil action against a company in the event of data theft caused by the absence of appropriate security procedures.

California, the nation’s leading state in privacy protection, is setting the scene for major changes in the way companies interact with their customers. The Act, the strictest ever passed in the U.S., has inevitably been criticized by the biggest Silicon Valley tech companies, who are already asking for a relaxation of the legislation.

Let us end on an amusing note by giving the last word to a former American president (and not the least among them), Barack Obama. In a speech addressing the people of Europe, in Hanover, Germany, in 2016, he proclaimed:

“Europeans, like Americans, cherish your privacy. And many are skeptical about governments collecting and sharing information, for good reason. That skepticism is healthy. Germans remember their history of government surveillance – so do Americans, by the way, particularly those who were fighting on behalf of civil rights.

So it’s part of our democracies to want to make sure our governments are accountable.”[12]


Sociology and philosophy combine to offer a better understanding of the digital metamorphosis

Intellectual, professional, political, personal, private: every aspect of our lives is affected by technological developments that are transforming our society in a profound way. These changes raise specific challenges that require a connection between the empirical approaches of sociology and philosophical questioning. Pierre-Antoine Chardel, a philosopher, social science researcher and specialist in ethics at Institut Mines-Telecom Business School, answers our questions about socio-philosophy and the opportunities this field opens up for an analysis of the digital metamorphosis.

 

In what ways do the current issues surrounding the deployment of technology require an analytical approach combining the social sciences and philosophy?

The major philosophical questions about a society’s development, and specifically the technological aspect, must consider the social, economic and cultural contexts in which the technology exists. For example, artificial intelligence technologies do not raise the same issues when used for chatbots as they do when used for personal assistance robots; and they are not perceived the same way in different countries and cultural contexts. The sociological approach, and the field studies it involves, urges us to closely consider the contexts which are constantly redefining human-machine interactions.

Sociology is complemented by a philosophical approach that helps to question the meaning of our political, social and industrial realities in light of the lifestyles they produce (in both the medium and long term). And the deployment of technology raises fundamental philosophical issues: to what extent does it support the development of new horizons of meaning, and how does it enrich (or weaken) community life? More generally, what kind of future do we hope our technological societies will have?

For several years now, I have been exploring socio-philosophical perspectives with other researchers in the context of the LASCO IdeaLab, in collaboration with the Institut Interdisciplinaire d’Anthropologie du Contemporain (a CNRS/EHESS joint research unit) and with the Chair on Values and Policies of Personal Information. These perspectives offer a way to anchor issues related to changes in subjectivation processes, public spheres and social imaginaries in very specific sociological environments. The idea is not to see technology simply as an object in itself, but to endeavor to place it in the social, economic, political and industrial environments in which it is being deployed. Our goal is to remain attentive to what is happening in our rapidly changing world, which continues to create significant tensions and contradictions. At the same time, we see in society a strong demand for reflexivity and a need to distance ourselves from technology.

What form can this demand for meaning take, and how can it be described from a socio-philosophical point of view?

This demand for meaning can be perceived through phenomena which reveal people’s difficulties in finding their way in a world increasingly structured by the constant pressure of time. In this regard, the burn-out phenomenon is very revealing, pointing to a world that creates patterns of constant mobilization. This phenomenon is only intensified by digital interactions when they are not approached critically. We therefore witness situations of mental burn-out. Just like natural resources, human and cognitive resources are not inexhaustible. This observation coincides with a desire to distance ourselves from technology, which leads to the following questions: how can we care for individuals in the midst of the multitude of interactions that are made possible in our modern-day societies? How can we support more reasonable and peaceful technological practices?

You say “digital metamorphosis” when referring to the deployment of technology in our societies. What exactly does this idea of metamorphosis mean?

We are currently witnessing processes of metamorphosis at work in our societies. From a philosophical point of view, the term metamorphosis refers to the idea that, individually and collectively, we are working to build who we are, accepting to be constantly reinvented and taking a creative approach to developing our identities. Today, we no longer develop our subjectivity based on stable criteria, but based on opportunities, which have multiplied with digitization. This has increased the different ways we exist in the world and present ourselves, for example through online networks. Think of all the different identities we can have on social networks and the subjectivation processes they create. On the other hand, the hyper-memory that digital technology makes possible tends to freeze the representation we have of a person. The question is: can people be reduced to the data they produce? Are we reducible to our digital traces? What do they really say about us?

What other major phenomena accompany this digital metamorphosis?

Another phenomenon produced by the digital metamorphosis is that of transparency. As Michel Foucault said, the modern man has become a “confessing animal”. We can easily pursue this same reflection today in considering how we now live in societies where all of our activities could potentially be tracked. This transparency of every moment raises very significant questions from a socio-philosophical and ethical point of view related to the right to be forgotten and the need for secrecy.

But does secrecy still mean anything today? Here we realize that certain categories of thought must be called into question in order to change the established conceptual coordinates: what does the right to privacy and to secrecy really mean when all of our technology makes most of our activities transparent, with our voluntary participation? This area involves a whole host of socio-philosophical questions. Another important issue is the need to emphasize that we are still very ethno-centric in our understanding of our technological environments. We must therefore accept the challenge of coming into close contact with different cultural contexts in order to open up our perspectives of interpretation, with the goal of decentering our perception of the problems as much as possible.

How can the viewpoints of different cultures enrich the way we see the “digital metamorphosis”?

The contexts in which different cultures have taken ownership of technology vary greatly depending on each country’s history. For example, in former East Germany, privacy issues are addressed very differently than they are in North America, which has not suffered under totalitarian regimes. On a completely different note, the perception of robotics in Western culture is very different from the prevalent perception in Japan, where these concepts are not foreign to Buddhist and Shinto traditions, which hold that objects can have a soul. The way people relate to innovations in the field of robotics is therefore very different depending on the cultural context, and it involves distinct ethical issues.

In this sense, a major principle in our seminar devoted to “Present day socio-philosophy” is to emphasize that the complexities of today’s world push us to question the way we can philosophically understand them while resisting the temptation to establish them within a system. Finally, most of the crises we face (whether economic, political or ecological) force us to think about the epistemological and methodological issues surrounding our theoretical practices in order to question the foundations of these processes and their presuppositions. We therefore wish to emphasize that, more than ever, philosophy must be conducted in close proximity to human affairs, by creating a wealth of exchanges with the social sciences (socio-anthropology, socio-history and socio-economy in particular). This theoretical and practical initiative corresponds to a strong demand from students and young researchers as well as many in the corporate world.

 


Putting sound and images into words

Can videos be turned into text? MeMAD, an H2020 European project launched in January 2018 and set to last three years, aims to do precisely that. While such an initiative may seem out of step with today’s world, given the increasingly important role video content plays in our lives, in reality, it addresses a pressing issue of our time. MeMAD strives to develop technology that would make it possible to fully describe all aspects of a video, from how people move, to background music, dialogues or how objects move in the background etc. The goal: create a multitude of metadata for a video file so that it is easier to search for in databases. Benoît Huet, an artificial intelligence technologies researcher at EURECOM — one of the project partners — talks to us in greater detail about MeMAD’s objectives and the scientific challenges facing the project.

 

Software that automatically describes or subtitles videos already exists. Why devote a Europe-wide project such as MeMAD to this topic?

Benoît Huet: It’s true that existing applications already address some aspects of what we are trying to do. But they are limited in terms of usefulness and effectiveness. When it comes to creating a written transcript of the dialogue in videos, for example, automatic software can make mistakes. If you want correct subtitles you have to rely on human labor which has a high cost. A lot of audiovisual documents aren’t subtitled because it’s too expensive to have them made. Our aim with MeMAD is, first of all, to go beyond the current state of the art for automatically transcribing dialogue and, furthermore, to create comprehensive technology that can also automatically describe scenes, atmospheres, sounds, and name actors, different types of shots etc. Our goal is to describe all audiovisual content in a precise way.

And why is such a high degree of accuracy important?

BH: First of all, in its current form audiovisual content is difficult to access for certain communities, such as the blind or visually impaired and individuals who are deaf or hard of hearing. By providing a written description of a scene’s atmosphere and different sounds, we could enhance the experience for individuals with hearing problems as they watch a film or documentary. For the visually impaired, the written descriptions could be read aloud. There is also tremendous potential for applications for creators of multimedia content or journalists, since fully describing videos and podcasts in writing would make it easier to search for them in document archives. Descriptions may also be of interest for anyone who wants to know a little bit about a film before watching it.

The National Audiovisual Institute (INA), one of the project’s partners, possesses extensive documentary and film archives. Can you explain exactly how you are working with this data?

BH: At EURECOM we have two teams involved in the MeMAD project who are working on these documents. The first team focuses on extracting information. It uses technology based on deep neural networks to recognize emotions and analyze how objects and people move, the soundtrack, etc. – in short, everything that creates the overall atmosphere. The scientific work focuses especially on developing deep neural network architectures to extract the relevant metadata from the information contained in the scene. The INA also provides us with concrete situations and the experience of its archivists to help us understand which metadata are valuable for searching within the documents. At the same time, the second team focuses on knowledge engineering. This means they are working on creating well-structured descriptions, indexes and everything required to make it easier for the final user to retrieve the information.
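To give a concrete, if simplified, picture of what frame-level information extraction can look like, here is a minimal sketch — not the MeMAD pipeline itself — that samples frames from a video with OpenCV and tags them with a pretrained torchvision image classifier. The sampling rate, the choice of ResNet-50 and the output format are illustrative assumptions.

```python
# Illustrative sketch only: tag sampled video frames with a pretrained image
# classifier and emit time-coded metadata. Model choice, sampling rate and
# record format are assumptions, not the MeMAD architecture.
import cv2                                   # pip install opencv-python
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()            # resizing/normalization expected by the model
labels = weights.meta["categories"]          # ImageNet class names

def describe_video(path, every_n_seconds=5, top_k=3):
    """Return a list of {time_s, tags} records for frames sampled from the video."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    records, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))
            with torch.no_grad():
                probs = model(tensor.unsqueeze(0)).softmax(dim=1)[0]
            top = probs.topk(top_k)
            records.append({
                "time_s": frame_idx / fps,
                "tags": [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)],
            })
        frame_idx += 1
    cap.release()
    return records

# Hypothetical usage: metadata = describe_video("archive_clip.mp4")
```

In a real pipeline such tags would be only one layer among many (speech transcripts, emotions, shot types, named entities), all merged into the structured descriptions and indexes the second team works on.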

What makes the project challenging from a scientific perspective?

BH: What’s difficult is proposing something comprehensive and generic at the same time. Today our approaches are complete in terms of quality and relevance of descriptions. But we always use a certain type of data. For example, we know how to train the technology to recognize all existing car models, regardless of the angle of the image, lighting used in the scene etc. But, if a new car model comes out tomorrow, we won’t be able to recognize it, even if it is right in front of us. The same problem exists for political figures or celebrities. Our aim is to create technology that works not only based on documentaries and films of the past, but that will also able to understand and recognize prominent figures in documentaries of the future. This ability to progressively increase knowledge represents a major challenge.
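One standard way to approach this kind of open-set problem — sketched below purely as an illustration, not as the method chosen by the project — is to match embeddings against a reference gallery that can be extended at any time: recognizing a brand-new car model or public figure then only requires adding reference vectors, with no retraining. The embed() function is assumed, standing in for any image or face encoder.

```python
# Illustrative sketch of open-set recognition by embedding matching:
# new entities can be added to the gallery without retraining anything.
# The embed() function (e.g. an image or face encoder) is assumed, not real.
import numpy as np

class EmbeddingGallery:
    def __init__(self, threshold=0.7):
        self.threshold = threshold          # minimum cosine similarity to accept a match
        self.names, self.vectors = [], []

    def add(self, name, vector):
        """Register a reference embedding for a (possibly brand-new) entity."""
        v = np.asarray(vector, dtype=float)
        self.names.append(name)
        self.vectors.append(v / np.linalg.norm(v))

    def identify(self, vector):
        """Return the best-matching known entity, or None if nothing is close enough."""
        if not self.vectors:
            return None
        q = np.asarray(vector, dtype=float)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.vectors) @ q   # cosine similarities against the gallery
        best = int(np.argmax(sims))
        return self.names[best] if sims[best] >= self.threshold else None

# Hypothetical usage: gallery.add("new car model", embed(reference_image))
# then gallery.identify(embed(frame_crop)) recognizes it in future footage.
```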

What research have you drawn on to help meet this scientific challenge?

BH: We have over 20 years of experience in research on audiovisual content to draw on. This justifies our participation in the MeMAD project. For example, we have already worked on creating automatic summaries of videos. I recently worked with IBM Watson to automatically create a trailer for a Hollywood film. I am also involved in the NexGenTV project along with Raphaël Troncy, another contributor to the MeMAD project. With NexGenTV, we’ve demonstrated how to automatically recognize the individuals on screen at a given moment. All of this has provided us with potential answers and approaches to meet the objectives of MeMAD.


Cybersecurity: high costs for companies

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

The world of cybersecurity has changed drastically over the past 20 years. In the 1980s, information systems security was a rather niche field, with a focus on technical excellence. The notion of financial gain was more or less absent from attackers’ motivations. It was in the early 2000s that the first security products started to be marketed: firewalls, identity or event management systems, detection sensors, etc. At the time these products were clearly identified, as was their cost, which was high at times. Almost twenty years later, things have changed: attacks are now a source of financial gain for attackers.

What is the cost of an attack?

Today, financial motivations are usually behind attacks. An attacker’s goal is to obtain money from victims, either directly or indirectly, whether through ransom demands (ransomware) or denial of service. Spam was one of the first ways to earn money, by selling illegal or counterfeit products. Since then, attacks on digital currencies such as bitcoin have become quite popular. Attacks on telephone systems are also extremely lucrative in an age where smartphones and computer technology are ubiquitous.

It is extremely difficult to assess the cost of cyber-attacks due to the wide range of approaches used. Information from two different sources can, however, provide insight to estimate the losses incurred: that of service providers and that of the scientific community.

On the service provider side, a report by the American service provider Verizon, the “Data Breach Investigations Report 2017”, measures the number of records compromised by an attacker during an attack but does not convert this information into monetary value. Meanwhile, IBM and Ponemon indicate an average cost of $141 US per record compromised, while specifying that this cost is subject to significant variations depending on the country, industrial sector, etc. And a report published by Accenture during the same period puts the average annual cost of cybersecurity incidents at approximately $11 million US (for the 254 companies surveyed).
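As a rough, purely illustrative back-of-the-envelope calculation, the per-record average quoted above can be turned into an order of magnitude for a single breach (the breach size below is an arbitrary assumption):

```python
# Back-of-the-envelope estimate using the average cost per compromised record
# quoted above; the breach size is an arbitrary illustrative figure.
COST_PER_RECORD_USD = 141          # IBM/Ponemon average cited in the text
records_compromised = 100_000      # hypothetical breach size

estimated_cost = COST_PER_RECORD_USD * records_compromised
print(f"Estimated direct cost: ${estimated_cost:,.0f}")   # $14,100,000
```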

How much money do the attackers earn?

In 2008, American researchers tried to assess the earnings of a spam network operator. The goal was to determine the extent to which an unsolicited email could lead to a purchase. By analyzing half a billion spam messages sent by two networks of infected machines (botnets), the authors estimated that the hackers who managed the networks earned $3 million US. However, the net profit is very low. Additional studies have shown the impact of cyber-attacks on the share price of corporate victims. The economics of cybersecurity has also developed as a topic in its own right, notably through the Workshop on the Economics of Information Security.

The figures may appear high but, as is traditionally the case for internet services, attackers benefit from a network effect in which the cost of adding a victim is low, while the cost of creating and installing the attack is very high. In the case studied in 2008, the emails were sent using the Zeus botnet. Since this network steals computing resources from the compromised machines, the initial cost of the attack was also very low.

In short, the cost of cyberattacks has been a topic of study for many years now. Both academic and commercial studies exist. Nevertheless, it remains difficult to determine the exact cost of cyber-attacks. It is also worth noting that it has historically been greatly overestimated.

The high costs of defending against attacks

Unfortunately, defending against attacks is also very expensive. While an attacker only has to find and exploit one vulnerability, those in charge of defending against attacks have to manage all possible vulnerabilities. Furthermore, an ever-growing number of vulnerabilities is discovered every year in information systems. Additional vulnerabilities are regularly introduced by the implementation of new services and products, sometimes unbeknownst to the administrators responsible for a company network. One such case is the “bring your own device” (BYOD) model. By authorizing employees to work on their own equipment (smartphones, personal computers), this model destroys the perimeter defense that existed a few years ago. Far from saving companies money, it introduces an additional dose of vulnerability.

The cost of security tools remains high as well. Firewalls or detection sensors can cost as much as €100,000, and a monitoring platform to manage all this security equipment can cost up to ten times as much. Furthermore, monitoring must be carried out by professionals, and these skills are in short supply on the labor market. Overall, the deployment of protection and detection solutions amounts to millions of euros every year.

Moreover, it is also difficult to determine the effectiveness of detection centers intended to prevent attacks because we do not know the precise number of failed attacks. A number of initiatives, such as Information Security Indicators, are however attempting to answer this question. One thing is certain: every day information systems can be compromised or made unavailable, given the number of attacks that are continually carried out on networks. The spread of the malicious code Wannacry proved how brutal certain attacks can be and how hard it can be to predict their development.

Unfortunately, the only effective defense is often updating vulnerable systems once flaws have been discovered. This has few consequences for a workstation, but is more difficult on servers, and can be extremely difficult in high-constraint environments (critical servers, industrial protocols, etc.). These maintenance operations always have a hidden cost, linked to the unavailability of the hardware being updated. And there are also limitations to this strategy. Certain updates are impossible to implement, as is the case with Skype, which requires a major software update and leads to uncertainty in its status. Other updates can be extremely expensive, such as those used to correct the Spectre and Meltdown vulnerabilities that affect the microprocessors of most computers. Intel has now stopped patching the vulnerability in older processors.

A delicate decision

The problem of security comes down to a rather traditional risk analysis, in which an organization must decide which risks to protect itself against, how exposed it is to each risk, and which risks it should insure itself against.
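A classic way to frame such decisions, sketched here with purely illustrative figures, is to compare the annualized loss expectancy (expected loss per incident multiplied by expected incidents per year) with and without a countermeasure, against the yearly cost of that countermeasure or of an insurance premium:

```python
# Toy risk-analysis comparison (all figures are illustrative assumptions).
def annualized_loss_expectancy(loss_per_incident, incidents_per_year):
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return loss_per_incident * incidents_per_year

ale_without_control = annualized_loss_expectancy(500_000, 0.3)    # €150,000/year
ale_with_control    = annualized_loss_expectancy(500_000, 0.05)   # €25,000/year
control_cost        = 80_000                                      # e.g. monitoring platform and staff

net_benefit = (ale_without_control - ale_with_control) - control_cost
print(f"Risk reduction: €{ale_without_control - ale_with_control:,.0f}/year, "
      f"net benefit: €{net_benefit:,.0f}/year")   # reduction €125,000, net €45,000
```

The hard part in practice is not the arithmetic but estimating the incident rates and losses, which is exactly what the following paragraphs show to be difficult.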

In terms of protection, it is clear that certain filtering tools such as firewalls are imperative in order to preserve what is left of the perimeter. Other subjects are more controversial, such as Netflix’s decision to abandon antivirus software and rely instead on massive data analysis to detect cyber-attacks.

It is very difficult to assess how exposed a company is to risk, since this exposure is often the result of technological advances in vulnerabilities and attacks rather than of a conscious decision made by the company. Denial-of-service attacks, like the one carried out in 2016 using the Mirai malware, for example, are increasingly powerful and therefore difficult to counter.

The insurance strategy for cyber-risk is even more complicated, since premiums are extremely difficult to calculate. Cyber-risk is often systemic, since a single flaw can affect a large number of clients. Unlike the risk of natural catastrophe, which is limited to a region, allowing insurance companies to spread the risk over their various clients and calculate future risk based on risk history, computer vulnerabilities are often widespread, as recent examples such as the Meltdown, Spectre and Krack flaws show. Almost all processors and Wi-Fi access points are vulnerable.

Another aspect that makes it difficult to estimate risks is that vulnerabilities are often latent, meaning that only a small community is aware of them. The flaw used by the Wannacry malware had already been identified by the NSA, the American National Security Agency (under the name EternalBlue). The attackers who used the flaw learned of its existence from documents leaked from the American government agency itself.

How can security be improved? The basics are still fragile

Faced with a growing number of vulnerabilities and problems to solve, it seems essential to reconsider the way internet services are built, developed and operated. In other industrial sectors the answer has been to develop standards and certify products in relation to these standards. This means guaranteeing smooth operations, often in a statistical manner. The aeronautics industry, for example, certifies its aircraft and pilots and has very strong results in terms of safety. In a more closely-related sector, telephone operators in the 1970s guaranteed excellent network reliability with a risk of service disruption lower than 0.0001 %.

This approach also exists in the internet sector, with certifications based on the Common Criteria. These certifications often stem from military or defense needs. They are therefore expensive and take a long time to obtain, which is often incompatible with the time-to-market required for internet services. Furthermore, the standards that could be used for these certifications are often insufficient or poorly suited to civil settings. Solutions have been proposed to address this problem, such as the CSPN certification defined by ANSSI (the French National Information Systems Security Agency). However, the scope of the CSPN remains limited.

It is also worth noting that computer languages have consistently been positioned in favor of quick, easy production of code. From the 1970s onward, languages that favored ease of use over rigor came into favor. These languages can be the source of significant vulnerabilities. The recent case of PHP is one example: used by millions of websites, it has been one of the major causes of SQL injection vulnerabilities.

The cost of cybersecurity, a question no longer asked

In strictly financial terms, cybersecurity is a cost center that directly impacts a company or administration’s operations. It is important to note that choosing not to protect an organization against attacks amounts to attracting attacks since it makes the organization an easy target. As is often the case, it is therefore worthwhile to provide a reminder about the rules of computer hygiene.

The cost of computer flaws is likely to increase significantly in the years ahead, and the cost of repairing these flaws will rise even more dramatically. We know that the point at which an error is identified in computer code greatly affects how expensive it is to repair: the earlier it is detected, the less it costs to fix. It is therefore imperative to improve development processes in order to prevent programming errors from turning into remotely exploitable vulnerabilities.

IT tools are also being improved. More robust languages are being developed. These include new languages like Rust and Go, as well as older languages that have come back into fashion, such as Scheme. They represent more robust alternatives to the languages currently taught, without going back to languages as complicated as Ada, for example. It is essential that teaching practices evolve in order to take these new languages into account.

Wasted time, stolen or lost data… We have been slow to recognize the loss of productivity caused by cyber-attacks. It must be acknowledged that cybersecurity now contributes to a business’s performance. Investing in effective IT tools has become an absolute necessity.

 

Hervé Debar, Head of the Networks and Telecommunication Services Department, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation.



Emergency logistics for field hospitals

European field hospitals, or temporary medical care stations, are standing by and ready to be deployed throughout the world in the event of a major disaster. The HOPICAMP project, of which IMT Mines Alès is one of the partners, works to improve the logistics of these temporary medical centers and develop telemedicine tools and training for health care workers. Their objective is to ensure the emergency medical response is as efficient as possible. The logistical tools developed in the context of this project were successfully tested in Algeria on 14-18 April during the European exercise EU AL SEISMEEX.

 


Earthquakes, fires, epidemics… Whether disasters are natural or man-made, European member states stand ready to send resources to Africa, Asia or Oceania to help the affected populations. Field hospitals, temporary and mobile stations where the wounded can receive care, are a key element in responding to emergencies.

“After careful analysis, we realized that the field hospitals could be improved, particularly in terms of research and development,” explains Gilles Dusserre, a researcher at IMT Mines Alès who works in the area of risk science and emergency logistics. This multidisciplinary field, at the crossroads between information technology, communications, health and computer science, aims to improve our understanding of the consequences of natural disasters on humans and the environment. “In the context of the HOPICAMP project, funded by the Single Interministerial Fund (FUI) and conducted in partnership with the University of Nîmes, the SDIS30 and the companies CRISE, BEWEIS, H4D and UTILIS, we are working to improve field hospitals, particularly in terms of logistics,” the researcher explains.

Traceability sensors, virtual reality and telemedicine to the rescue of field hospitals

When a field hospital is not deployed, all the tents and medical equipment are stored in crates, which makes it difficult to ensure the traceability of critical equipment. For example, an electrosurgical unit must never be separated from its specific power cable, at the risk of not being able to perform surgical operations correctly in the field. “The logistics operational staff are all working on improving how these items are referenced, identified and updated, whether the hospital is deployed or on standby,” Gilles Dusserre explains. The consortium worked with BEWEIS to develop an IT tool for identification and updates, as well as for pairing RFID tags, the sensors that help ensure the traceability of the equipment.
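To illustrate the kind of pairing logic such a tool relies on (a simplified sketch, not the actual BEWEIS application; the tag identifiers and the pairing table are invented), each critical item can be mapped to the accessory tags that must travel with it, so that a crate scan immediately flags missing pairings:

```python
# Simplified sketch of equipment-pairing checks for field-hospital crates.
# Tag identifiers and the pairing table are illustrative, not the real system.
REQUIRED_PAIRINGS = {
    "ESU-001": {"ESU-001-CABLE", "ESU-001-PEDAL"},   # electrosurgical unit + its specific accessories
    "VENT-014": {"VENT-014-CIRCUIT"},                # ventilator + breathing circuit
}

def check_crate(scanned_tags):
    """Return, for each critical item found in the crate, the accessory tags it is missing."""
    scanned = set(scanned_tags)
    missing = {}
    for item, accessories in REQUIRED_PAIRINGS.items():
        if item in scanned:
            absent = accessories - scanned
            if absent:
                missing[item] = sorted(absent)
    return missing

# Example: a scan where the electrosurgical unit's power cable was left behind.
alerts = check_crate(["ESU-001", "ESU-001-PEDAL", "VENT-014", "VENT-014-CIRCUIT"])
print(alerts)   # {'ESU-001': ['ESU-001-CABLE']}
```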

In addition, once the hospital is deployed, pharmacists, doctors, engineers and logisticians must work in perfect coordination in emergency situations. But how can they be trained in these specific conditions when their workplace has not yet been deployed? “At IMT Mines Alès, we decided to design a serious game and use virtual reality to help train these individuals from very different professions for emergency medicine,” explains Gilles Dusserre. Thanks to virtual reality, the staff can learn to adapt to this unique workplace, in which operating theaters and treatment rooms are right next to living quarters and rest areas in tents spanning several hundred square meters. The serious game, which is being developed, is complementary to the virtual reality technology. It allows each participant to identify the different processes involved in all the professions to ensure optimal coordination during a crisis situation.

Finally, how can the follow-up of patients be ensured when the field hospitals are only present in the affected countries for a limited period? “During the Ebola epidemic, only a few laboratories in the world were able to identify the disease and offer certain treatments. Telemedicine is therefore essential here,” Gilles Dusserre explains. In addition to proposing specific treatments to certain laboratories, telemedicine also allows a doctor to follow up with patients remotely, even after the doctor has left the affected area. “Thanks to the company H4D, we were able to develop a kind of autonomous portable booth that allows us to observe around fifteen physiological parameters using sensors and cameras.” These devices remain on site, providing the local population with access to dermatological, ophthalmological and cardiological examinations through local clinics.

Field-tested solutions

“We work with the fire brigade association of the Gard region, the French Army and Doctors Without Borders. We believe that all of the work we have done on feedback from the field, logistics, telemedicine and training has been greatly appreciated,” says Gilles Dusserre.

In addition to being accepted by end users, certain tools have been successfully deployed during simulations. “Our traceability solutions for equipment developed in the framework of the HOPICAMP project were tested, right down to the resuscitation room, during the EU AL SEISMEEX Europe-Algeria earthquake deployment exercise,” the researcher explains. The exercise, which took place from April 14 to 18 in the context of a European project funded by DG ECHO, the Directorate-General for European Civil Protection and Humanitarian Aid Operations, simulated the provision of care for victims of a fictional earthquake in Bouira, Algeria. Some 1,000 people from the 7 participating countries were deployed: Algeria, Tunisia, Italy, Portugal, Spain, Poland and France. The field hospitals from the participating countries were brought together to cooperate and ensure the interoperability of the systems implemented, which can only be tested during actual deployment.


The EU-AL SEISMEEX team gathered in front of the facilities for the exercise.

 

Gilles Dusserre is truly proud to have led a project that contributed to the success of this European simulation exercise. “As a researcher, it is nice to be able to imagine and design a project, see an initial prototype and follow it as it is tested and then deployed in an exercise in a foreign country. I am very proud to see what we designed becoming a reality.”


Social and solidarity economy in light of corporate reform

Mélissa Boudes, Institut Mines-Telecom Business School
This article was co-authored by Quentin Renoul, entrepreneur.

[divider style=”normal” top=”20″ bottom=”20″]

The draft PACTE law (Action Plan for Business Growth and Transformation), presented by President Emmanuel Macron’s government, seeks to change the way corporate stakeholders (entrepreneurs, funders, managers, workers, customers, etc.) see the company’s role in society. The goal is to rethink what companies could be—or should be—in light of the societal, economic, political and ecological changes that have been happening in recent years.

More importantly, this corporate reform seems to announce the end of the dichotomy between “traditional” companies and social and solidarity economy (SSE) companies.  Four years after the law on social and solidarity economy, what can we learn by comparing the two laws? What do they show us about new business models? What kind of picture do they paint of the firms of the future?

Social and Solidarity Economy Companies

Behind the SSE label, there lies a wide variety of companies with old and diverse historical foundations.  They were united in 1980 under France’s National Liaison Committee for Mutual, Cooperative and Associative Activities, which was governed by a charter with seven principles:

  • One person, one voice (in the General Assembly), regardless of the individual’s level of participation in the company’s capital;
  • Voluntary membership and responsibility;
  • Dual capacity: the company belongs to those who produce its value;
  • Equality and freedom;
  • Limited profit: surpluses are reintegrated into the project;
  • Empowerment: research and experimentation;
  • People-oriented economic activities.

These principles are supposed to be guaranteed by the articles of incorporation adopted by these organizations–associations, cooperatives, mutual societies and their foundations–but also by the practices and tools they employ. However, under the effect of competition and the decline in public support, SSE companies have been pushed to adopt the same practices and management tools as for-profit companies: management by objectives, financial reporting, etc.

These tools and practices run counter to SSE practices, so much so that adopting them makes it more difficult for all stakeholders (employees of these organizations, consumers of the products and services that they produce, etc.) to distinguish between SSE companies and traditional companies. In fact, though the public is seldom aware of this, some major brands are actually SSE companies: Maif (mutual fund), Intersport (retail cooperative), Crédit Mutuel (cooperative bank), Magasins U (retail cooperative), the Up–Chèque déjeuner group (workers’ cooperative).

SSE Law of 2014: expanded to include commercial companies

In the context of this complex landscape, new companies have emerged in recent years called “social” companies. These companies pursue a social purpose, but with the same legal form as “traditional” commercial companies (public limited company, simplified joint-stock company, etc.). For example, DreamAct (a platform that promotes “responsible” consumption with its city guide and e-shop) or La Ruche qui dit oui (English: The Food Assembly, which facilitates direct sales between producers and consumers) have opted for the simplified joint-stock company status.

Despite some protests, the SSE Law, passed by the French Parliament on 31 July 2014, allowed these companies, which are both commercial and social, to be included in the SSE family. Still, certain conditions apply: they must comply with SSE principles, in other words pursue an aim other than sharing profits, have democratic governance, create indivisible reserves, and use the majority of their profits to maintain or develop the company.

Originally designed to bring wider recognition to the SSE, the law has in fact exacerbated tensions between social companies and SSE companies, further blurring the lines between company classifications.

2018 Corporate Reform: integrating CSR criteria

The current draft corporate reform is based on criticism of the financialization of companies, which was brought to light in particular by the 2008 financial crisis. It seeks to respond to new social and environmental expectations placed on companies, at a time when public authorities and civil society appear powerless when it comes to financial issues. The reform encourages companies to adopt corporate social responsibility (CSR) criteria.

The recommendations of the Notat-Sénard report on “the collective interest company”, which were added to the draft PACTE law on 9 March 2018, in some ways answer company stakeholders’ growing search for meaning. Measures such as taking social and environmental issues into account, reflecting on companies’ “raison d’être”, and increasing the representation of employees on the board of directors should help to heal the rift between companies and society as a whole by aligning their objectives with common goals that benefit everyone.

The end of dichotomies

These two reforms offer senior managers and entrepreneurs a wider range of options.  The old dichotomies that emerged with the industrial revolution (decision-making bosses/decision-implementing employees, lucrative company/charitable association) are slowly fading away to make way for more diverse, hybrid organizations.

Theoretically, we could map the entrepreneurial landscape along two dimensions. This offers a clearer understanding of the logic behind the legislation from 2014 and 2018 and its potential impacts on companies. The first dimension is governance, or the distribution of decision-making power.

The second dimension is the business model, or the way resources are mobilized and used and the way the value the company creates is distributed.

The governance dimension can be subdivided into two categories. On the one hand, there are traditional commercial companies, with shareholder governance in which decision-making power is by and large concentrated in the hands of investors. On the other hand, there are SSE companies, as defined in their 1980 charter, with democratic governance in which decision-making power belongs to the participating members (employees, volunteers, consumers, producers). The business-model dimension can also be subdivided into two models: the for-profit model, focused on maximizing profit, and the limited-profit model, in which surpluses are reinvested in the project or distributed among participating members.

While the historical distinction between “traditional” companies and SSE companies set shareholder governance and a for-profit business model against democratic governance and a limited-profit business model, this rift has narrowed considerably in recent years due to changes in practices (CSR, social companies) which have now been recognized and even encouraged by the SSE law and the draft PACTE law.

In 2014, the SSE law brought commercial companies regarded as social companies into the SSE category, encouraging them to adopt democratic governance and to limit their profit-making by reinvesting surpluses in the project and creating indivisible reserves. The draft PACTE law seeks to encourage companies to open up their governance and include employees in decision-making. It also calls for business models to be expanded to integrate social and environmental issues.

A move toward hybrid companies, “creators of meaning”?

These two reforms invite us to change the way we see the company. They aim to support and encourage changes in entrepreneurial behavior by providing tools to create new governance and business models for companies. They encourage the creation and development of what could be considered “hybrid” companies in comparison with the historical models.

Many companies have already combined the two traditional views of the company to accomplish their goals. One prime example is companies for integration through economic activity. These organizations strive to reintegrate the individuals furthest removed from employment through training and appropriate jobs, often combining different legal statuses to bring together a profitable business activity and a social purpose. Although the wages of the individuals being reintegrated are co-financed by the public authorities, integration companies must find opportunities and generate enough income to create jobs and to develop and sustain the business project. Réseau Cocagne, for example, offers organic food baskets produced by individuals re-entering the workforce. It is an association under the French law of 1901 that created a private company limited by shares, supported by an endowment fund, to improve its access to funding. In 2016, the group had 4,320 employees in integration programs, 815 permanent employees, 1,800 volunteers and administrators, and 20,500 consumer members.

Could we be witnessing the development of what François Rousseau called “creators of meaning”: companies “on a quest to regain the meaning behind their actions and the social calling motivating their project”?

Having reestablished their roots in society, these companies need specific management tools, “meaning-management tools”, to be developed, enabling them to reach the economic, social and environmental goals they have set. The SSE law and the draft PACTE law pave the way for developing such tools and practices. Now it is up to entrepreneurs, managers, workers and funders to create and develop them, and to make them meaningful.

Mélissa Boudes, Associate Professor of Management, Institut Mines-Telecom Business School

The original version of this article (in French) was published on The Conversation.

Diatabase: France’s first diabetes database

The M4P consortium has received the go-ahead from Bpifrance to implement its project to build a clinical diabetes database called Diatabase. The consortium, headed by Altran, also includes French stakeholders in diabetes care, the companies OpenHealth and Ant’inno, as well as IMT and CEA List. It aims to improve care, study and research for this disease, which affects 3.7 million people in France. The M4P project was approved by the Directorate General for Enterprise of the French Ministry for Economy and Finance as part of the Investissements d’Avenir (Investments for the Future) program, under the National Fund for the Digital Society.

 

In France, 3.7 million people are being treated for type 1 or type 2 diabetes, representing 5% of the population. The prevalence of these diseases continues to increase and their complications are a major concern for public health and economic sustainability. Modern health systems produce huge quantities of health data about the disease, both through community practices and hospitals, and additional data is generated outside these systems, by patients themselves or through connected objects. The potential for using this massive, multi-source data is far-reaching, especially for advancing knowledge of diabetes, promoting the health and well-being of diabetes sufferers and improving care (identifying risk factors, supporting diagnosis, monitoring the efficacy of treatment, etc.).

Supported by a consortium of multidisciplinary experts, and backed by the Directorate General for Enterprise of the French Ministry for Economy and Finance, the M4P project aims to build and make commercially available a multi-source diabetes database, “Diatabase,” comprising data from hospitals, community practices, research centers and connected objects, cross-referenced with the medical-economic databases of the SNDS (National Health Data System).

The project seeks to “improve the lives and care of diabetes sufferers through improved knowledge and sharing of information between various hospital healthcare providers as well as between expert centers and community practices,” says Dr Charpentier, President of CERITD (Center for Study and Research for Improvement of the Treatment of Diabetes), one of the initiators of the project. “With M4P and Diatabase, we aim to promote consistency in providing assistance for an individual within a context of monitoring by interdisciplinary teams and to increase professionals’ knowledge to provide better care,” adds Professor Brigitte Delemer, Head of the Diabetology department at the University Hospital of Reims and Vice-President of the CARéDIAB network, which is also involved in the M4P project.

“The analysis of massive volumes of ‘real-life’ data involves overcoming technical hurdles, in particular in terms of interoperability, and will further the understanding of the disease while providing health authorities and manufacturers with tools to monitor the drugs and medical devices used,” says Dr Jean-Yves Robin, Managing Director of OpenHealth, a company specializing in health data analysis which is part of the M4P project.

The M4P project is supported by an expert consortium bringing together associations of healthcare professionals active in diabetes, such as CERITD, the CARéDIAB network and the Nutritoring company, alongside private and public organizations specializing in digital technologies: Altran; OpenHealth and ANT’inno for data analysis, working with CEA List on the semantic analysis and use of unstructured data; and Institut Mines-Telecom, which is providing its Teralab platform, a secure accelerator for research projects in AI and data, together with techniques for preventing data leaks, misuse and falsification based on data watermarking, and natural language processing methods to reveal new correlations and facilitate prediction and prevention.

“Thanks to the complementary nature of the expertise brought together through this project, its business-focused approach and consideration of professional practices, M4P is the first example in France of structuring health data and making it available commercially in the interest of the public good,” says Fabrice Mariaud, Director of Programs, Research and Expertise Centers in France for Altran.

The consortium has given itself three years to build Diatabase and make it available for use by healthcare professionals and patients.


Superpixels for enhanced detection of breast cancer

Deep learning methods are increasingly used to aid medical diagnosis. At IMT Atlantique, Pierre-Henri Conze is taking part in this drive to use artificial intelligence algorithms for healthcare by focusing on breast cancer. His work combines superpixels defined on mammograms with deep neural networks to achieve better detection rates for tumor areas while limiting false positives.

 

In France, one out of eight women will develop breast cancer in her lifetime. Every year 50,000 new cases are recorded in the country, a figure which has been on the rise for several years. At the same time, the survival rate has also continued to rise. The five-year survival rate after being diagnosed with breast cancer increased from 80% in 1993 to 87% in 2010. These results can be correlated with a rise in awareness campaigns and screening for breast tumors. Nevertheless, large-scale screening programs still have room for improvement. One of the major limitations of this sort of screening is that it results in far too many false positives, meaning patients must come back for additional testing. This sometimes leads to needless treatment with serious consequences: mastectomy, radiotherapy, chemotherapy, etc. “Out of 1,000 participants in a screening, 100 are called back, while on average only 5 are actually affected by breast cancer,” explains Pierre-Henri Conze, a researcher in image processing. The work he carries out at IMT Atlantique in collaboration with Mines ParisTech strives to reduce this number of false positives by using new analysis algorithms for breast X-rays.

The principle is becoming better known: artificial intelligence tools are used to automatically identify tumors. Computer-aided detection helps radiologists and doctors by identifying masses, one of the main clinical signs of breast cancer. This improves diagnosis and saves time, since multiple readings do not then have to be carried out systematically. But it all comes down to the details: how exactly can the software tools be made effective enough to help doctors? Pierre-Henri Conze sums up the issue: “For each pixel of a mammogram, we have to be able to tell the doctor if it belongs to a healthy area or a pathological area, and with what degree of certainty.”

But there is a problem: algorithmic processing of each pixel is time-consuming. Pixels are also subject to interference during capture: this is “noise,” like when a picture is taken at night and certain pixels are whited out. This makes it difficult to determine whether an altered pixel is located in a pathological zone or not. The researcher therefore relies on “superpixels.” These are homogeneous areas of the image obtained by grouping together neighboring pixels. “By using superpixels, we limit errors related to the noise in the image, while keeping the areas small enough to limit any possible overlapping between healthy and tumor areas,” explains the researcher.
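To make the idea concrete, here is a minimal sketch of how superpixels can be computed, using the SLIC algorithm from scikit-image. The article does not specify which segmentation method or parameters the researchers actually use, so the file name, number of segments and compactness value below are illustrative assumptions only.

```python
# Minimal sketch: grouping the pixels of a mammogram into superpixels.
# The segmentation method (SLIC) and all parameter values are assumptions
# for illustration; they are not taken from the study described above.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import slic

# Load a grayscale mammogram (hypothetical file name).
image = img_as_float(io.imread("mammogram.png", as_gray=True))

# Group neighboring pixels into roughly homogeneous regions ("superpixels").
# Keeping the regions small limits overlap between healthy and tumor areas,
# while averaging within each region damps pixel-level noise.
segments = slic(image, n_segments=2000, compactness=0.1, channel_axis=None)

n_superpixels = len(np.unique(segments))
print(f"{n_superpixels} superpixels, "
      f"average size: {image.size / n_superpixels:.0f} pixels per region")
```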

In order to successfully classify the superpixels, the scientists rely on descriptors: information associated with each superpixel to describe it. “The easiest descriptor to imagine is light intensity,” says Pierre-Henri Conze. To generate this information, he uses a certain type of deep neural network, called a “convolutional” neural network. What is their advantage compared to other neural networks? Trained on public mammography databases, they determine by themselves which descriptors are most relevant for classifying superpixels. Combining superpixels with convolutional neural networks produces especially good results. “For forms as irregular as tumor masses, this combination strives to identify the boundaries of tumors more effectively than traditional techniques based on machine learning,” says the researcher.
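As a rough illustration of how such a classification step might look, the sketch below defines a small convolutional network in PyTorch that takes grayscale patches centered on superpixels and outputs a probability of belonging to a healthy or pathological area. The architecture, patch size and class labels are assumptions made for this example; the article does not describe the network actually used.

```python
# Illustrative sketch only: classifying superpixel-centered patches as
# healthy vs. pathological with a small convolutional network (PyTorch).
# Architecture, patch size and labels are assumptions, not the model
# described in the article.
import torch
import torch.nn as nn

class SuperpixelCNN(nn.Module):
    def __init__(self, patch_size: int = 32):
        super().__init__()
        # The convolutional layers learn their own descriptors from the
        # patch, instead of relying only on hand-crafted features such as
        # mean light intensity.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 2),  # healthy vs. pathological
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, 1, patch_size, patch_size) grayscale crops,
        # each centered on one superpixel of the mammogram.
        logits = self.classifier(self.features(patches))
        # The softmax output is the "degree of certainty" reported per area.
        return torch.softmax(logits, dim=1)

model = SuperpixelCNN()
dummy_patches = torch.rand(8, 1, 32, 32)  # 8 superpixel-centered crops
probabilities = model(dummy_patches)      # shape (8, 2)
print(probabilities)
```

In practice such a network would be trained on labeled patches extracted from public mammography databases, and the per-superpixel probabilities would then be mapped back onto the image to delineate suspicious areas.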

This research is in line with work by the SePEMeD joint laboratory between IMT Atlantique, the LaTIM laboratory, and the Medecom company, whose focus areas include improving medical data mining. It builds directly on research carried out on recognizing tumors in the liver. “With breast tumors, it was a bit more complicated though, because there are two X-rays per breast, taken at different angles and the body is distorted in each view,” points out Pierre-Henri Conze. One of the challenges was to correlate the two images while accounting for distortions related to the exam. Now the researcher plans to continue his research by adding a new level of complexity: variation over time. His goal is to be able to identify the appearance of masses by comparing different exams performed on the same patient several months apart. The challenge is still the same: to detect malignant tumors as early as possible in order to further improve survival rates for breast cancer patients.

Also read on I’MTech