
PAPAYA: a European project for a confidential data analysis platform

EURECOM is coordinating the three-year European project PAPAYA, launched on May 1st. Its mission: enable cloud services to process encrypted or anonymized data without ever having to access the unencrypted data. Melek Önen, a researcher specialized in applied cryptography, is leading this project. In this interview she provides more details on the objectives of this H2020 project.

 

What is the objective of the H2020 Papaya project?

Melek Önen: Small and medium-sized companies do not always have the means to process large amounts of data internally, and that data is often personal or confidential. They therefore use cloud services to simplify the task, but in so doing they lose control over their data. Our mission with the PAPAYA project (which stands for PlAtform for PrivAcY-preserving data Analytics) is to succeed in using data processing and classification methods while keeping the data encrypted and/or anonymized. This would offer companies greater security and confidentiality when they use third-party cloud services, since these services could no longer access the unencrypted data. This has become a major issue since the European General Data Protection Regulation (GDPR) came into effect.

What is your main challenge in this project?

MÖ: Today, when we encrypt data the traditional way, it is protected in a randomized manner: the ciphertext reveals nothing about the underlying data. It is impossible to carry out operations on data in this state. In 2009, cryptography researcher Craig Gentry proposed a method called fully homomorphic encryption, which makes it possible to carry out arbitrary operations on encrypted data. The problem is that processing data this way is not very efficient in terms of memory usage and computation time. The majority of our work will involve designing variants of data processing algorithms that are compatible with data protected by homomorphic encryption.
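
To make the idea of computing on encrypted data concrete, here is a minimal sketch using the textbook Paillier scheme, which is only additively homomorphic (fully homomorphic encryption, as proposed by Gentry, goes much further and supports arbitrary computations). The parameters are toy values chosen purely for illustration and have nothing to do with PAPAYA's actual design.

```python
# Toy (textbook) Paillier cryptosystem: anyone can add two numbers while they
# stay encrypted; only the key holder can read the result.
# Tiny, insecure parameters -- for illustration only, not the PAPAYA design.
import math
import random

def keygen(p=293, q=433):                                # small demo primes
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)
    g = n + 1                                            # standard simplification
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)                           # fresh randomness
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
c_sum = (c1 * c2) % (pk[0] ** 2)     # multiplying ciphertexts adds the plaintexts
print(decrypt(sk, c_sum))            # -> 42, computed without ever decrypting 17 or 25
```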

Can you explain how you design variants of data processing algorithms?

MÖ: For example, a neural network contains both linear operations, which are easily managed with appropriate encryption methods, and non-linear operations. We do not know how to process encrypted data with non-linear operations. Yet the network’s accuracy depends on these non-linear operations, so we cannot do without them. What we must do in this situation is approximate these operations, which are actually functions, with other, encryption-friendly functions that behave similarly. The better this approximation, the more accurate the neural network remains, and we can then process the encrypted data.
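
As an illustration of this approximation idea (a common approach in encryption-friendly neural networks, not necessarily PAPAYA's exact method), the sketch below fits a low-degree polynomial, which uses only additions and multiplications, to a sigmoid activation and measures how far it strays from it:

```python
# Replace a non-linear activation (sigmoid) by a degree-3 polynomial fitted on
# the input range of interest; a polynomial uses only additions and
# multiplications, which homomorphic encryption can evaluate.
import numpy as np

x = np.linspace(-4, 4, 200)
sigmoid = 1 / (1 + np.exp(-x))

coeffs = np.polyfit(x, sigmoid, deg=3)       # least-squares polynomial fit
approx = np.polyval(coeffs, x)

print("max approximation error:", float(np.max(np.abs(approx - sigmoid))))
# The smaller this error, the less accuracy the encrypted network loses.
```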

What use cases do you plan to work on?

MÖ: We have two different use cases. The first is medical data encryption. This situation affects many hospitals that hold patients’ data but are not large enough to have their own internal data processing services. They therefore use cloud services. The second case involves web analytics and could be useful for the tourism sector, which analyzes the way tourists move from one place of interest to another; data collected from smartphone users could be very valuable here. For both cases, we imagine several progressive scenarios. First, a single data owner holds all the users’ unencrypted data, encrypts it with a single key and transfers it to the cloud. Next, several owners with several keys. Finally, we consider data that comes directly from the users themselves.

Who else is working on this project with you?

MÖ: PAPAYA brings together six partners, including EURECOM, which is coordinating the action. The companies involved in assisting us with the use cases and in designing this new platform are Atos, IBM Haifa Research Lab, Orange Labs, and MediaClinics, an SME that makes sensors for monitoring patients in hospitals. In terms of academic partners, we are working with Karlstad University in Sweden. We will work together for the entire three-year project.


What is mechatronics?

Intelligent products can perceive their environment, communicate, process information and act accordingly… Is this science fiction?  No, it’s mechatronics! Every day, we come in contact with mechatronic systems, from reflex cameras to our cars’ braking systems. Beyond the technical characteristics of these devices, the term mechatronics also refers to the systemic and comprehensive nature of their design. Pierre Couturier, a researcher at IMT Mines Alès, answers our questions about the development of these complex multi-technology systems.

 

What is mechatronics?

Mechatronics is an interdisciplinary and collaborative approach for designing and producing multi-technology products. To design a mechatronic product, several different professions must work together to simultaneously solve electronic, IT and mechanical problems.

In addition, designing a mechatronic product means adopting a systemic approach and taking into account stakeholders’ needs over the product’s entire lifecycle, from design and production through use to dismantling. The issues of recycling and disposing of the materials are also considered during the earliest stages of the design phase. Mechatronics brings very different professions together, and this systemic vision creates a consensus among all the specialists involved.

 

What are the characteristics of a mechatronic product?

A mechatronic product can perceive its environment using sensors, process the information received and then communicate and react accordingly in or on this environment. Developing such capacities requires integrating several technologies in synergy: mechanics, electronics, IT and automation. Ideally, a product is designed to self-run and self-correct based on how it is used. With this goal in mind, we use artificial intelligence technologies and different types of learning: supervised, unsupervised or reinforcement learning.

 

What types of applications are mechatronic products used for?

Mechatronic products are used in many different fields: transport, mobility, robotics, industrial equipment, machine tools, as well as in applications for the general public. Reflex cameras, which combine mechanical aspects with moving parts, are one example of mechatronic products.

In the area of transport, we also encounter mechatronics on a daily basis, for example with the ABS braking assistance system that is integrated into most cars. This system detects when the wheels are slipping and momentarily releases braking pressure, overriding the driver’s braking request, to restore the wheels’ grip on the road.
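
As a highly simplified sketch of that principle (the slip threshold and the pressure-reduction factor below are invented for illustration; a real ABS controller is far more elaborate), one control cycle could look like this:

```python
# Simplified ABS idea: monitor wheel slip and release brake pressure when the
# wheel is locking up; otherwise apply the driver's braking request.
def abs_step(vehicle_speed, wheel_speed, brake_request):
    """Return the brake pressure actually applied for one control cycle."""
    if vehicle_speed <= 0:
        return brake_request
    slip = (vehicle_speed - wheel_speed) / vehicle_speed   # 0 = rolling, 1 = locked
    if slip > 0.2:                    # wheel is slipping: release pressure
        return 0.3 * brake_request
    return brake_request              # grip is fine: honor the driver's request

print(abs_step(vehicle_speed=20.0, wheel_speed=10.0, brake_request=100.0))  # slipping -> reduced
print(abs_step(vehicle_speed=20.0, wheel_speed=19.0, brake_request=100.0))  # gripping -> full
```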

At IMT Mines Alès, we are also conducting several mechatronic projects on health and disability, including a motorized wheel for an all-terrain wheelchair. The principle is to provide the wheelchair with electrical assistance proportional to how the individual pushes on the handrail.

 

What other types of health projects are you leading at IMT Mines Alès?

In the health sector, we have developed a device for measuring the pressure a shoe exerts on the foot for an orthopedic company from Lozère. This product is intended for individuals with diabetes who have a loss of sensation in their feet: they can sometimes injure themselves by wearing inappropriate shoes without feeling any pain. Using socks equipped with sensors placed at specific places, areas with excessive pressure can be identified. The data is then sent to a remote station which transfers the different pressure points to a 3D model. We can therefore infer what corrections need to be made to the shoe to ensure the individual’s comfort.

We have also developed a scooter for people with disabilities, featuring a retractable kickstand that is activated when the vehicle runs at a low speed, to prevent the rider from falling. Still in the area of disability, we have worked on a control system for electric wheelchairs that combines a touchpad with two pressure areas to move forward and backward and touch sensors activated by the head to move left or right.

 

What difficulties are sometimes encountered when developing complex mechatronic products?

The first difficulty is to get all the different professions to work together to design a product.  There are real human aspects to manage! The second technical difficulty is caused by the physical interactions between the product’s different components, which are not always predictable. At IMT Mines Alès, for example, we designed a machine for testing the resistance of a foam mattress. A roller moved across the entire length of the mattress to wear it out. However, the interaction between the foam and roller produced electrostatic phenomena that led to electric shocks. We had underestimated their significance… We therefore had to change the roller material to resolve this problem. Due to the complexity of these systems, we discovered physical interactions we had not expected during the design phase!

To avoid this type of problem, we conduct research in systems engineering to assess, verify and validate the principles behind the solution as soon as possible in the design phase, even before physically making any of the product’s components. The ideal solution would be to design a product using digital modeling and simulation, and then produce it without the prototype phase… But that’s not yet possible! In reality, due to the increasing complexity of mechatronic products, it is still necessary to develop a prototype to detect properties or behaviors that are difficult to assess through simulation.

 


Medcam: a high-quality image for laparoscopy

300,000 laparoscopies are performed every year in France. At ten euros per minute spent in the operating room for procedures that last from one to ten hours, any time that can be saved is significant. Medcam helps save precious minutes by reducing the time required to clean the camera used. This makes it possible to schedule one additional patient per operating day. 

 

It was over a family dinner, which would conclude with making initial sketches, that the idea for Medcam first took shape. That day, Clément, an engineer in fluid mechanics, his sister, an expert in the medical sector, and his brother-in-law Yann, who teaches mechanical engineering, were talking about laparoscopy. This minimally invasive surgical procedure, widely performed in digestive, urological and gynecological surgery, is based on inserting a camera into the abdomen. The problem is that condensation, accumulated smoke, and projections of blood and visceral fat are deposited on the lens and constantly degrade the image. Every ten to fifteen minutes, the surgeon must interrupt the procedure to extract the camera and clean it, which leads to a loss of concentration and wastes time while the camera is removed and reinserted.

One device, three benefits

The solution invented by the brand new SMICES company (Smart Medical Devices), in close collaboration with Dr. Joël Da Broi, a surgeon specialized in visceral and digestive surgery, makes it possible to automatically clean the camera during the procedure. The device, which can be adjusted to fit all camera models, does not in any way interfere with surgeons’ use of the camera or practices in the operating room. But it single-handedly delivers three benefits: it allows surgeons to work more comfortably and stay focused on their work, it saves time, and it makes it possible to add a patient to the operating schedule.

The start of clinical evaluations after promising tests

Developed in collaboration with the mechatronics platform at IMT Mines Alès, the operational prototype only uses components that already exist in the operating room. Medcam has been successfully tested in real conditions on a cadaver at a university hospital center and will begin preclinical evaluations for CE marking in June 2018 so that it can be marketed in 2019. SMICES will be responsible for manufacturing and distributing Medcam and has set a clear objective: to become the market leader for healthcare institutions in France (300,000 laparoscopies per year) and Italy in its first year, in Europe (over 1 million laparoscopies per year) in four years’ time, and ultimately conquer the world (10 million laparoscopies per year).


Data-mobility or the art of modeling travel patterns

The major rail workers’ strike in France in the spring of 2018 transformed French travel habits, especially in the Ile-de-France region. Vincent Gauthier, a researcher at Télécom SudParis, is working to understand travel patterns in this region and around the world using mobile data.


The original version of this article (in French) was published on the Télécom SudParis website.


The French have a saying that reflects the daily routine of millions of Parisians: “métro-boulot-dodo” (metro-work-sleep). While this seems to be the universal experience for Ile-de-France residents, individual variations exist. Some individuals only use public transport via one of the two major networks, RATP or SNCF, but others prefer driving. There are also those who change from the metro to the RER train, or leave their car part way and take a train. All of this information can be found through mobile data analysis. Vincent Gauthier, associate research professor at Télécom SudParis, has become a specialist in the area.

Using mobile networks to understand mobility

Determining someone’s travel itinerary based on the mobile data provided by their operator is not an easy task. “A telephone only transmits its GPS position to applications that request it, such as Waze,” Vincent Gauthier explains. “The only knowledge an operator can use to establish a person’s geographic location is which mobile base stations they were connected to during their travels.”

The French telephone network, which is shared between different operators including Orange, SFR and Bouygues, forms an irregular grid pattern (see Fig. 3). The different relay or base stations provide a network connection over clearly defined zones. When a person leaves a zone, they automatically enter another one, and their telephone connects to the new corresponding base station. The size of these zones varies from one region to another: in the Ile-de-France region, base stations are densely clustered in Paris, but there are far fewer in the Seine-et-Marne area.
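
The sketch below illustrates this kind of coarse localisation with hypothetical antenna coordinates: all the operator knows is which base station's zone the phone is attached to, approximated here as the nearest antenna.

```python
# Coarse localisation from base stations: a phone is only known to be
# somewhere in the zone of the antenna it is attached to.
# Station names and coordinates are hypothetical (x, y in km).
import math

base_stations = {
    "Paris-01": (0.0, 0.0),
    "Paris-12": (2.5, 1.0),
    "Melun":    (40.0, -8.0),     # sparser coverage outside Paris
}

def serving_station(position):
    """Return the station whose zone covers this position (nearest antenna)."""
    return min(base_stations, key=lambda s: math.dist(position, base_stations[s]))

# A trajectory is only observed as the sequence of stations it crosses.
trajectory = [(0.2, 0.1), (1.8, 0.9), (35.0, -6.0)]
print([serving_station(p) for p in trajectory])   # ['Paris-01', 'Paris-12', 'Melun']
```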

Fig. 1: Method used to aggregate the transport networks to closely analyze the route taken.

Fig. 2: Origin-destination matrix for a day in the Ile-de-France region.

Fig. 3: Grid pattern for the mobile network base stations.

 

The information produced from these connections only makes it possible to establish origin-destination matrices of varying levels of detail. As an expert in the graphical representation of large volumes of data (Fig. 2), Vincent Gauthier wants to take this analysis a step further: “How does a person travel? Why? Where does the person live? How many other people take the same route? Answering these questions could help us optimize mobility options.”
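
For readers unfamiliar with the term, an origin-destination matrix simply counts trips between pairs of zones; the toy sketch below builds one from invented trip records.

```python
# Toy origin-destination (OD) matrix: for each pair of zones, count how many
# observed trips start in one and end in the other. Trip records are invented.
from collections import Counter

trips = [("Paris", "Versailles"), ("Paris", "Melun"),
         ("Versailles", "Paris"), ("Paris", "Versailles")]

od_matrix = Counter(trips)
for (origin, destination), count in od_matrix.items():
    print(f"{origin:10s} -> {destination:10s}: {count} trip(s)")
```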

To reproduce the exact route an individual takes based on this non-specific information, he has worked on a new method with another researcher from Télécom SudParis, Mounim El Yacoubi (ARMEDIA team–EPH department).

From optimizing transportation to geodemographics

“Mounim and I have patented a method for automatically processing routes, which allows us to determine what types of transport a person has taken during their journey,” Vincent Gauthier explains. Thanks to their “method for route estimation using mobile data”, the two researchers can superimpose the different transport networks over the information the operators receive from the base stations (Fig. 1). “To identify the most likely road or rail journey the users have taken based on their route, we must use a huge database including the locations of the base stations, train stations and the maps of the different transport networks.” They are currently working with Bouygues to develop route estimations in “near real time”.
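
The sketch below gives a very rough feel for the principle (the line names and the scoring rule are illustrative stand-ins, not the patented algorithm): each candidate transport line is scored by how well its stations match the sequence of base-station zones observed along the journey.

```python
# Score candidate transport lines against the sequence of zones a phone was
# seen in, and keep the best match. Names and the scoring rule are illustrative.
candidate_lines = {
    "RER A":   ["La Défense", "Châtelet", "Gare de Lyon", "Vincennes"],
    "Metro 1": ["La Défense", "Concorde", "Châtelet", "Bastille"],
    "Road A4": ["Porte de Bercy", "Joinville", "Noisy"],
}

def score(line_stations, observed_cells):
    """Fraction of observed cells that coincide with a station of the line."""
    return sum(cell in line_stations for cell in observed_cells) / len(observed_cells)

observed = ["La Défense", "Châtelet", "Bastille"]     # cells seen along the trip
best = max(candidate_lines, key=lambda l: score(candidate_lines[l], observed))
print(best)   # 'Metro 1' matches all three observed cells
```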

In their work, the two researchers are drawing on previous socio-demographic studies they conducted in Milan and in Africa. “We participated in estimating population density in the Ivory Coast and Senegal,” explains Vincent Gauthier. “The goal was to provide socio-demographic data that was lacking in these countries, so that the United Nations could establish more reliable statistics.”

Vincent Gauthier’s work goes beyond simply modeling big data; his expertise leads us to rethink the geography of our regions: “By analyzing individuals’ routes and optimizing transport options accordingly, we could possibly divide the Ile-de-France region into more relevant sub-areas.”

 


BioMICA platform: at the cutting edge of medical imaging

Among the research platforms at Télécom SudParis, BioMICA has developed bio-imaging applications that have already been approved by the medical field. AirWays, its 3D representation software, received funding from the Télécom & Société Numérique Carnot Institute.


The original version of this article (in French) was published on the Télécom SudParis website.


 

One of the recommendations included in the March 2017 France AI Strategy report was to put artificial intelligence to work to improve medical diagnosis. The BioMICA research platform (which stands for Bio-Medical Imaging & Clinical Applications) has made this goal its mission.

“We aim to develop tools that can be used in the clinical setting,” says Catalin Fetita, professor at Télécom SudParis and director of the bio-medical imaging platform. “Our applied research focuses on computer-aided diagnosis involving medical and biological imaging,” he explains. As a specialist in image analysis and processing, Catalin Fetita offers the platform true expertise in the area of medical imaging, particularly in lung imaging.

AirWays, or another way of seeing lungs

AirWays is “image marker” software (like biomarkers in biology). Based on a sequence of lung images taken by a scanner, it extracts as much information as possible for clinicians to assist them in their diagnosis by offering a range of different visualization and classification options. “The quantitative aspect is very important; we do not only want to offer better visual quality,” Catalin Fetita explains. “We offer the possibility of obtaining improved measurements of morphological differences in several areas of the respiratory system at different moments in time. This helps clinicians decide which treatment to choose.” In terms of quantified results, the software can detect 95% of cases of stenosis, the narrowing of the bronchial tubes that affects respiratory capacity.
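
As a purely illustrative sketch of what quantifying a stenosis can mean (this is not the AirWays algorithm, and the diameter values are invented), one could flag points where the airway diameter drops sharply compared with the segments just upstream:

```python
# Flag a possible stenosis when the local airway diameter falls well below the
# mean of the two upstream measurements. Values and threshold are invented.
def find_stenoses(diameters_mm, drop_ratio=0.5):
    """Return indices where the diameter drops below drop_ratio x the upstream mean."""
    flagged = []
    for i in range(2, len(diameters_mm)):
        upstream_mean = (diameters_mm[i - 1] + diameters_mm[i - 2]) / 2
        if diameters_mm[i] < drop_ratio * upstream_mean:
            flagged.append(i)
    return flagged

airway = [12.0, 11.5, 11.0, 4.8, 10.5, 10.0]   # sudden narrowing at index 3
print(find_stenoses(airway))                    # [3]
```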

After analyzing the clinical images, AirWays represents the surfaces of the bronchial tubes as a graphic mesh and generates 3D images to view them both “inside and outside” (above, a view of the local bronchial diameter using color coding). This technique allows doctors to plan more effectively for endoscopies and operations that were previously performed by sight.

“For now, we have limited ourselves to the diagnosis-analysis aspect, but I would also like to develop a predictive aspect,” says the researcher. This perspective is what motivated Carnot TSN to help finance AirWays in December 2017. “This new budget will help us improve and optimize the software’s interface and increase its computing power to make it a true black box for automatic and synthetic processing,” explains Catalin Fetita, who also hopes to work towards commercializing the software.

A platform for medicine of the future

In addition to its many computer workstations for developing its medical software, the BioMICA platform features two laboratories for biological experimentation. One of the laboratories has a containment level of L1 (any biological agent that is non-pathogenic for humans) and the other is L2 (possible pathogen with low risk). Both will help advance the clinical studies in cellular bio-imaging.

In addition, Catalin Fetita and his team are preparing a virtual reality viewing station to provide a different perspective of the lung tissue analyzed by AirWays. “Our platform works thanks to research partnerships and technological transfers,” he explains, “but we can also use it to provide services for clinical studies.”

 


OMNI: transferring social sciences and humanities to the digital society

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.


Technology transfer also exists in social sciences and the humanities! The OMNI platform in Brittany proves this by placing its research activities at the service of organizations. Attached to the Breton scientific interest group called M@rsouin (which IMT Atlantique manages), it brings together researchers and professionals to study the impact of digital technology on society. The relevance of the structure’s approach has earned it a place within the “technology platform” offering of the Télécom & Société Numérique Carnot Institute (see insert at the end of the article). Nicolas Jullien, a researcher in Digital Economy at IMT Atlantique and Manager of OMNI, tells us more about the way in which organizations and researchers collaborate on topics at the interface between digital technology and society.

 

What is the role of the OMNI platform?

Nicolas Jullien: Structurally, OMNI is attached to the scientific interest group called M@rsouin, comprised of four universities and graduate schools in Brittany and, recently, three universities in Pays de la Loire*. For the past 15 years, this network has served the regional objective of having a research and study system on ICT, the internet and, more generally, what is today referred to as digital technology. OMNI is the research network’s tool for proposing studies on the impact of digital technology on society. The platform brings together practitioners and researchers and analyzes major questions that public or private organizations may have. It then sets up programs to collect and evaluate information to answer these questions. According to the needs, we can carry out questionnaire surveys – quantitative studies – or interview surveys – which are more qualitative. We also guarantee the confidentiality of responses, which is obviously important in the context of the GDPR. It is first and foremost a guarantee of neutrality between the player who wishes to collect information and the responding actors.

So is OMNI a platform for making connections and structuring research?

NJ: Yes. In fact, OMNI has existed for as long as M@rsouin, and corresponds to the part just before the research phase itself. If an organization has questions about digital technology and its impact and wants to work with the researchers at M@rsouin to collect and analyze information to provide answers, it goes through OMNI. We help establish the problem and express or even identify needs. We then investigate whether there is a real interest for research on the issue. If this is the case, we mobilize researchers at M@rsouin to define the questions and the most suitable protocol for the collection of information, and we carry out the collection and analysis.

What scientific skills can you count on?

NJ: M@rsouin has more than 200 researchers in social sciences and humanities. Topics of study range from e-government to e-education, by way of social inclusion, employment, consumption, economic models, and the operation of organizations and work. The disciplines are highly varied and allow us to take a very comprehensive approach to the impact of digital technology on an organization, population or territory. We have researchers in education sciences, ergonomics, cognitive and social psychology, and political science and, of course, economists and sociologists. But we also have disciplines which would perhaps be less expected by the general public, but which are equally important in the study of digital technology and its impacts. These include geography, urban planning, management sciences and legal expertise, which has been closely involved since the development of wide-scale awareness of the importance of personal data.

The connection between digital technology and geography may seem surprising. What is a geographer’s contribution, for example, to the question of digital technology?

NJ: One of the questions raised by digital technology is that of access to online resources. Geographers are specifically interested in the relationship between people and their resources and territory. Incorporating geography allows us to study the link between territory and the consumption of digital resources, or even to more radically question the pertinence of physical territory in studies on internet influence. It is also a discipline that allows us to examine certain factors favoring innovation. Can we innovate everywhere in France? What influence does an urban or rural territory have on innovation? These are questions asked in particular by chambers of commerce and industry, regional authorities or organizations such as FrenchTech.

Why do these organizations come to see you? What are they looking for in a partnership with a scientific interest group?

NJ: I would say that these partners are looking for a new perspective. They want new questions or a specialist point of view or expert assessment in complex areas. By working with researchers, they are forced to define their problem clearly and not necessarily seek answers straight away. We are able to give them the breathing space they need. But we can only do so if our researchers can make proposals and be involved in partners’ problems. We propose services, but are not a consultancy agency: our goal remains to offer the added value of research.

Can you give an example of a partnership?

NJ: In 2010 we began a partnership with SystemGIE, a company which acts as an intermediary between large businesses and small suppliers. It manages the insertion of these suppliers in the purchasing or production procedures of large clients. It is a fairly tricky positioning: it is necessary to understand the strategy of suppliers and large companies and the tools and processes to put in place… We supported SystemGIE in the definition of its atypical economic model. It is a matter of applied research because we try to understand where the value lies and the role of digital technology in structuring these operators. This is an example of a partnership with a company. But our biggest partner remains the Brittany Regional Council. We have just finished a survey with it on craftspeople. The following questions were asked: how do craftspeople use digital technology? How does their online presence affect their activity?

How does the Carnot label help OMNI?

NJ: First and foremost it is a recognition of our expertise and relevance for organizations. It also provides better visibility at a national institutional level, allowing us to further partnerships with public organizations across France, as well as providing better visibility among private actors. This will allow us to develop new, nationwide partnerships with companies on the subject of the digitization of society and the industry of the future.

* The members of M@rsouin are: Université de Bretagne Occidentale, Université de Rennes 1, Université de Rennes 2, Université de Bretagne Sud, IMT Atlantique, ENSAI, ESPE de Bretagne, Sciences Po Rennes, Université d’Angers, Université du Mans, Université de Nantes.

 


A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.



GDPR: towards values and policies

On May 25th, the GDPR came into effect. This new regulation requires administrations and companies in all EU member states to comply with the law on the protection of personal data. Since its creation in 2013, the IMT Research Chair Values and Policies of Personal Information (CVPIP) has aimed to help businesses, citizens and public authorities in their reflections on the collection, use and sharing of personal information. In this article, Claire Levallois-Barth, coordinator of the Chair, and Ivan Meseguer, co-founder, look back at the geopolitical and economic context surrounding the GDPR and the obstacles that remain to its effective implementation.


The original version of this article was published on the VPIP Chair website. This Chair brings together researchers from Télécom ParisTech, Télécom SudParis and Institut Mines Télécom Business School, and is supported by the Fondation Mines-Télécom.


We were all expecting it.

The General Data Protection Regulation (GDPR)[1] came into force on May 25, 2018. This milestone gave rise to numerous reactions and events on the part of companies and institutions. As the Belgian Data Protection Authority put it, there is undoubtedly “a new wind coming, not a hurricane!” – one that seems to have blown all the way across the Atlantic Ocean, as The Washington Post pointed to the creation of a “de-facto global standard that gives Americans new protections and the nation’s technology companies new headaches”.[2]

Not only companies are having such “headaches”; EU Member States are also facing them as they are required to implement the regulation. France,[3] Austria, Denmark, Germany, Ireland, Italy, the Netherlands, Poland, the United Kingdom and Sweden have already updated their national general law in order to align it with the GDPR; but to this day, Belgium, the Czech Republic, Finland, Greece, Hungary and Spain are still submitting draft implementation acts.

And this is despite the provisions and timeline of the regulation having been officially laid down as early as May 4, 2016.

The same actually goes for French authorities, as some of them have also asked for extra time. Indeed, shortly before the GDPR took effect, local authorities indicated they weren’t ready for the Regulation, even though they had been aware of the deadline since 2016, just like everyone else.

Sixty French senators further threatened to refer the matter to the Constitutional Council, and then actually did, requesting a derogation period.

In schools and universities, the GDPR is becoming increasingly significant, even critical, for ensuring both children’s and teachers’ privacy.

The issue of social uses and practices being conditioned as early as in primary school has been studied by the Chair Values and Policies of Personal Information (CVPIP) for many years now, and is well exemplified by major use cases such as the rise of smart toys and the obvious and increasing involvement of U.S. tech giants in the education sector.

As if that wasn’t enough, the geographical and economic context of the GDPR is now also an issue. Indeed, if nothing is done to clarify the situation, GDPR credibility might soon be questioned by two major problems:

  • U.S. non-compliance with the EU-U.S. Privacy Shield agreement, which was especially exposed by the Civil Liberties Committee (LIBE) of the European Parliament;[4]
  • The signing into law on March 23, 2018 – i.e. just before GDPR enforcement – of the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) by Donald Trump.

The CLOUD Act unambiguously authorises U.S. authorities to access user data stored outside the United States by U.S. companies. At first glance, this isn’t sending a positive and reassuring message as to the U.S.’s readiness to simply comply with European rules when it comes to personal data.

Besides, we obviously should not forget the Cambridge Analytica scandal, which led to multiple hearings of Mark Zuckerberg by astounded U.S. and EU institutions, despite Facebook having announced its compliance with GDPR through an update of its forms.

None Of Your Business (Noyb), the non-profit organisation founded by Austrian lawyer Max Schrems, filed four complaints against tech giants, including Facebook, over non-compliance with the notion of consent. These complaints reveal how hard it is to protect the EU model in such a global and digital economy.[5]

This truly European model, which involves neither surveillance capitalism nor dictatorial surveillance, is based on compliance with the values shared by our Member States in their common pact. We should refer to Article 2 of the Treaty on European Union for as long as needed:

“The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail”.

Furthermore, Article 7 of the Charter of Fundamental Rights of the European Union clearly and explicitly provides that “everyone has the right to respect for his or her private and family life, home and communications”.

Such core values are reflected not only in the GDPR, but in the whole body of legislation under construction of which it is part, which includes:

  • The Draft ePrivacy Regulation, which aims to extend the scope of current Directive 2002/58/EC to over-the-top (OTT) services such as WhatsApp and Skype as well as to metadata; [6]
  • The draft Regulation on the free flow of non-personal data,[7] which has generated heated debates over the definitions of “non-personal data” and “common data spaces” (personal and non-personal data)[8] and which, according to European MP Anna Maria Corazza Bildt, aims to establish the free flow of data as the fifth freedom in the EU’s single market.[9]

Besides, the framework on cybersecurity is currently being reviewed in order to implement a proper EU policy that respects citizens’ privacy and personal data as well as EU values. 2019 will undoubtedly be the year of the Cybersecurity Act.[10]

A proper European model, respectful of EU values, is therefore under construction.

It is already inspiring and giving food for thought to other countries and regions of the world, including the United States, land of the largest tech giants.

In California, the U.S.’s most populous state, no less than 629,000 people signed the petition that led Californian lawmakers to pass the California Consumer Privacy Act on June 28, 2018.[11]

The Act, which takes effect on January 1, 2020, broadens the definition of “personal information” by including tracking data and login details, and contains provisions similar to the GDPR’s on:

  • Individuals’ ability to control their personal information, with new rights regarding transparency, access, portability, objection, deletion and choice of the collected information;
  • The protection of minors, with the prohibition from selling or disclosing the personal information of a consumer under 16 years of age, “unless affirmatively authorised”;
  • The violation of personal data, with the right to institute a civil action against a company in the event of a data theft caused by the absence of appropriate security procedures.

California, the nation’s leading state in privacy protection, is setting the scene for major changes in the way companies interact with their customers. The Act, the strictest ever passed in the U.S., has inevitably been criticized by the biggest Silicon Valley tech companies, who are already asking for a relaxation of the legislation.

Let us end with an amusing twist by giving the last word to the former American president (yet not the least among them), Barack Obama. In a speech addressing the people of Europe, in Hanover, Germany, in 2016, he proclaimed:

“Europeans, like Americans, cherish your privacy. And many are skeptical about governments collecting and sharing information, for good reason. That skepticism is healthy. Germans remember their history of government surveillance – so do Americans, by the way, particularly those who were fighting on behalf of civil rights.

So it’s part of our democracies to want to make sure our governments are accountable”[12]



Sociology and philosophy combine to offer a better understanding of the digital metamorphosis

Intellectual, professional, political, personal, private: every aspect of our lives is affected by technological developments that are transforming our society in a profound way. These changes raise specific challenges that require a connection between the empirical approaches of sociology and philosophical questioning. Pierre-Antoine Chardel, a philosopher, social science researcher and specialist in ethics at Institut Mines-Telecom Business School, answers our questions about socio-philosophy and the opportunities this field opens up for an analysis of the digital metamorphosis.

 

In what ways do the current issues surrounding the deployment of technology require an analytical approach combining the social sciences and philosophy?

The major philosophical questions about a society’s development, and specifically the technological aspect, must consider the social, economic and cultural contexts in which the technology exists. For example, artificial intelligence technologies do not raise the same issues when used for chatbots as they do when used for personal assistance robots; and they are not perceived the same way in different countries and cultural contexts. The sociological approach, and the field studies it involves, urges us to closely consider the contexts which are constantly redefining human-machine interactions.

Sociology is complemented by a philosophical approach that helps to challenge the meaning of our political, social and industrial realities in light of the lifestyles they produce (in both the medium and long term). And the deployment of technology raises fundamental philosophical issues: to what extent does it support the development of new horizons of meaning, and how does it enrich (or weaken) community life? More generally, what kind of future do we hope our technological societies will have?

For several years now, I have been exploring socio-philosophical perspectives with other researchers in the context of the LASCO IdeaLab, in collaboration with the Institut Interdisciplinaire d’Anthropologie du Contemporain, a CNRS/EHESS joint research unit, or with the Chair on Values and Policies of Personal Information. These perspectives offer a way to anchor issues related to changes in subjectivation processes, public spheres and social imaginaries in very specific sociological environments. The idea is not to see technology simply as an object in itself, but to endeavor to place it in the social, economic, political and industrial environments where it is being deployed. Our goal is to remain attentive to what is happening in our rapidly changing world, which continues to create significant tensions and contradictions. At the same time, we see in society a high demand for reflexivity and a need to distance ourselves from technology.

What form can this demand for meaning take, and how can it be described from a socio-philosophical point of view?

This demand for meaning can be perceived through phenomena which reveal people’s difficulties in finding their way in a world increasingly structured by constant time pressure. In this regard, the burn-out phenomenon is very revealing, pointing to a world that creates patterns of constant mobilization. This phenomenon is only intensified by digital interactions when they are not approached critically. We therefore witness situations of mental burn-out. Just like natural resources, human and cognitive resources are not inexhaustible. This observation coincides with a desire to distance ourselves from technology, which leads to the following questions: How can we care for individuals in the midst of the multitude of interactions that are made possible in our modern-day societies? How can we support more reasonable and peaceful technological practices?

You say “digital metamorphosis” when referring to the deployment of technology in our societies. What exactly does this idea of metamorphosis mean?

We are currently witnessing processes of metamorphosis at work in our societies. From a philosophical point of view, the term metamorphosis refers to the idea that, individually and collectively, we are working to build who we are, as we accept to be constantly reinvented, taking a creative approach to developing our identities. Today, we no longer develop our subjectivity based on stable criteria, but based on opportunities, which have increased with digitization. This has multiplied the different ways we exist in the world and present ourselves, by using online networks, for example. Think of all the different identities that we can have on social networks and the subjectivation processes they create. On the other hand, the hyper-memory that digital technology makes possible tends to freeze the representation we have of a person. The question is, can people be reduced to the data they produce? Are we reducible to our digital traces? What do they really say about us?

What other major phenomena accompany this digital metamorphosis?

Another phenomenon produced by the digital metamorphosis is that of transparency. As Michel Foucault said, the modern man has become a “confessing animal”. We can easily pursue this same reflection today in considering how we now live in societies where all of our activities could potentially be tracked. This transparency of every moment raises very significant questions from a socio-philosophical and ethical point of view related to the right to be forgotten and the need for secrecy.

But does secrecy still mean anything today? Here we realize that certain categories of thought must be called into question in order to change the established conceptual coordinates: what does the right to privacy and to secrecy really mean when all of our technology makes most of our activities transparent, with our voluntary participation? This area involves a whole host of socio-philosophical questions. Another important issue is the need to emphasize that we are still very ethno-centric in our understanding of our technological environments. We must therefore accept the challenge of coming into close contact with different cultural contexts in order to open up our perspectives of interpretation, with the goal of decentering our perception of the problems as much as possible.

How can the viewpoints of different cultures enrich the way we see the “digital metamorphosis”?

The contexts in which different cultures have taken ownership of technology vary greatly based on each country’s history. For example, in former East Germany, privacy issues are addressed very differently than they are in North America, which has not suffered under totalitarian regimes. On a completely different note, the perception we have of robotics in Western culture is very different from the prevalent perception in Japan. These concepts are not foreign to Buddhist and Shinto traditions, which hold that objects can have a soul. The way people relate to innovations in the field of robotics is therefore very different depending on the cultural context, and it involves unique ethical issues.

In this sense, a major principle in our seminar devoted to “Present day socio-philosophy” is to emphasize that the complexities of today’s world push us to question the way we can philosophically understand them while resisting the temptation to establish them within a system. Finally, most of the crises we face (whether economic, political or ecological) force us to think about the epistemological and methodological issues surrounding our theoretical practices in order to question the foundations of these processes and their presuppositions. We therefore wish to emphasize that, more than ever, philosophy must be conducted in close proximity to human affairs, by creating a wealth of exchanges with the social sciences (socio-anthropology, socio-history and socio-economy in particular). This theoretical and practical initiative corresponds to a strong demand from students and young researchers as well as many in the corporate world.

 


Putting sound and images into words

Can videos be turned into text? MeMAD, an H2020 European project launched in January 2018 and set to last three years, aims to do precisely that. While such an initiative may seem out of step with today’s world, given the increasingly important role video content plays in our lives, in reality it addresses a pressing issue of our time. MeMAD strives to develop technology that would make it possible to fully describe all aspects of a video, from how people move, to background music, dialogues or how objects move in the background. The goal: create a multitude of metadata for a video file so that it is easier to search for in databases. Benoît Huet, an artificial intelligence technologies researcher at EURECOM, one of the project partners, talks to us in greater detail about MeMAD’s objectives and the scientific challenges facing the project.

 

Software that automatically describes or subtitles videos already exists. Why devote a Europe-wide project such as MeMAD to this topic?

Benoît Huet: It’s true that existing applications already address some aspects of what we are trying to do. But they are limited in terms of usefulness and effectiveness. When it comes to creating a written transcript of the dialogue in videos, for example, automatic software can make mistakes. If you want correct subtitles you have to rely on human labor which has a high cost. A lot of audiovisual documents aren’t subtitled because it’s too expensive to have them made. Our aim with MeMAD is, first of all, to go beyond the current state of the art for automatically transcribing dialogue and, furthermore, to create comprehensive technology that can also automatically describe scenes, atmospheres, sounds, and name actors, different types of shots etc. Our goal is to describe all audiovisual content in a precise way.

And why is such a high degree of accuracy important?

BH: First of all, in its current form audiovisual content is difficult to access for certain communities, such as the blind or visually impaired and individuals who are deaf or hard of hearing. By providing a written description of a scene’s atmosphere and different sounds, we could enhance the experience for individuals with hearing problems as they watch a film or documentary. For the visually impaired, the written descriptions could be read aloud. There is also tremendous potential for applications for creators of multimedia content or journalists, since fully describing videos and podcasts in writing would make it easier to search for them in document archives. Descriptions may also be of interest for anyone who wants to know a little bit about a film before watching it.

The National Audiovisual Institute (INA), one of the project’s partners, possesses extensive documentary and film archives. Can you explain exactly how you are working with this data?

BH: At EURECOM we have two teams involved in the MeMAD project who are working on these documents. The first team focuses on extracting information. It uses technology based on deep neural networks to recognize emotions, analyze how objects and people move, the soundtrack etc.  In short, everything that creates the overall atmosphere. The scientific work focuses especially on developing deep neural network architectures to extract the relevant metadata from the information contained in the scene. The INA also provides us with concrete situations and the experience of their archivists to help us understand which metadata is of value in order to search within the documents. And at the same time, the second team focuses on knowledge engineering. This means that they are working on creating well-structured descriptions, indexes and everything required to make it easier for the final user to retrieve the information.
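
On the knowledge-engineering side, the retrieval idea can be sketched very simply; in the example below the per-scene descriptors are hand-written stand-ins for what the extraction team's neural networks would produce.

```python
# Build an inverted index from per-scene metadata so that scenes can be
# retrieved by keyword. The descriptors here are invented placeholders for
# automatically extracted metadata (emotions, sounds, people, shot types...).
from collections import defaultdict

scene_metadata = {
    "scene_001": ["interview", "calm music", "two speakers"],
    "scene_002": ["street", "traffic noise", "crowd"],
    "scene_003": ["interview", "applause"],
}

index = defaultdict(set)
for scene, tags in scene_metadata.items():
    for tag in tags:
        index[tag].add(scene)

print(sorted(index["interview"]))   # ['scene_001', 'scene_003']
```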

What makes the project challenging from a scientific perspective?

BH: What’s difficult is proposing something comprehensive and generic at the same time. Today our approaches are complete in terms of quality and relevance of descriptions. But we always use a certain type of data. For example, we know how to train the technology to recognize all existing car models, regardless of the angle of the image, lighting used in the scene etc. But, if a new car model comes out tomorrow, we won’t be able to recognize it, even if it is right in front of us. The same problem exists for political figures or celebrities. Our aim is to create technology that works not only based on documentaries and films of the past, but that will also able to understand and recognize prominent figures in documentaries of the future. This ability to progressively increase knowledge represents a major challenge.

What research have you drawn on to help meet this scientific challenge?

BH: We have over 20 years of experience in research on audiovisual content to draw on. This justifies our participation in the MeMAD project. For example, we have already worked on creating automatic summaries of videos. I recently worked with IBM Watson to automatically create a trailer for a Hollywood film. I am also involved in the NexGenTV project along with Raphaël Troncy, another contributor to the MeMAD project. With NexGenTV, we’ve demonstrated how to automatically recognize the individuals on screen at a given moment. All of this has provided us with potential answers and approaches to meet the objectives of MeMAD.



Cybersecurity: high costs for companies

Hervé Debar, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay


The world of cybersecurity has changed drastically over the past 20 years. In the 1980s, information systems security was a rather confidential field with a focus on technical excellence. The notion of financial gain was more or less absent from attackers’ motivations. It was in the early 2000s that the first security products started to be marketed: firewalls, identity or event management systems, detection sensors etc. At the time these products were clearly identified, as was their cost, which was high at times. Almost twenty years later, things have changed: attacks are now a source of financial gain for attackers.

What is the cost of an attack?

Today, financial motivations are usually behind attacks. An attacker’s goal is to obtain money from victims, either directly or indirectly, whether through requests for ransom (ransomware) or denial of service. Spam was one of the first ways to earn money, by selling illegal or counterfeit products. Since then, attacks on digital currencies such as bitcoin have become quite popular. Attacks on telephone systems are also extremely lucrative in an age where smartphones and computer technology are ubiquitous.

It is extremely difficult to assess the cost of cyber-attacks due to the wide range of approaches used. Information from two different sources can however provide insight for estimating the losses incurred: that of service providers and that of the scientific community.

On the service provider side, a report by American service provider Verizon entitled “Data Breach Investigations Report 2017” measures the number of records compromised by an attacker during an attack but does not convert this information into monetary value. Meanwhile, IBM and Ponemon indicate an average cost of $141 US per record compromised, while specifying that this cost is subject to significant variations depending on country, industrial sector etc. And a report published by Accenture during the same period assesses the average annual cost of cybersecurity incidents at approximately $11 million US (for 254 companies).
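
As a back-of-the-envelope illustration of how such per-record figures translate into an incident cost (the incident size below is hypothetical):

```python
# Rough order-of-magnitude estimate of a breach cost using the $141 per
# compromised record figure quoted above; the incident size is invented.
cost_per_record_usd = 141
records_compromised = 50_000
print(f"Estimated cost: ${cost_per_record_usd * records_compromised:,}")  # $7,050,000
```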

How much money do the attackers earn?

In 2008, American researchers tried to assess the earnings of a spam network operator. The goal was to determine the extent to which an unsolicited email could lead to a purchase. By analyzing half a billion spam messages sent by two networks of infected machines (botnet), the authors estimated that the hackers who managed the networks earned $3 million US. However, the net profit is very low. Additional studies have shown the impact of cyber attacks on the cost of shares of corporate victims. This cybersecurity economics topic has also been developed as part of a Workshop on the Economics of Information Security.

The figures may appear to be high, but as is traditionally the case for Internet services, attackers benefit from a network effect in which the cost of adding a victim is low, while the cost of creating and installing the attack is very high. In the case studied in 2008, the emails were sent using the Zeus botnet. Since this network steals computing resources from the compromised machines, the initial cost of the attack was also very low.

In short, the cost of cyberattacks has been a topic of study for many years now. Both academic and commercial studies exist. Nevertheless, it remains difficult to determine the exact cost of cyber-attacks. It is also worth noting that it has historically been greatly overestimated.

The high costs of defending against attacks

Unfortunately, defending against attacks is also very expensive. While an attacker only has to find and exploit one vulnerability, those in charge of defending against attacks have to manage all possible vulnerabilities. Furthermore, there is an ever-growing number of vulnerabilities discovered every year in information systems. Additional vulnerabilities are regularly introduced by the implementation of new services and products, sometimes unbeknownst to the administrators responsible for a company network. One such case is the “bring your own device” (BYOD) model. By authorizing employees to work on their own equipment (smartphones, personal computers) this model destroys the  perimeter defense that existed a few years ago. Far from saving companies money, it introduces an additional dose of vulnerability.

The cost of security tools remains high as well. Firewalls or detection sensors can cost as much as €100,000 and the cost of a monitoring platform to manage all this security equipment can cost up to ten times as much. Furthermore, monitoring must be carried out by professionals and there is a shortage of these skills in the labor market. Overall, the deployment of protection and detection solutions amounts to millions of euros every year.

Moreover, it is also difficult to determine the effectiveness of detection centers intended to prevent attacks because we do not know the precise number of failed attacks. A number of initiatives, such as Information Security Indicators, are however attempting to answer this question. One thing is certain: every day information systems can be compromised or made unavailable, given the number of attacks that are continually carried out on networks. The spread of the malicious code Wannacry proved how brutal certain attacks can be and how hard it can be to predict their development.

Unfortunately, the only effective defense is often updating vulnerable systems once flaws have been discovered. This has few consequences for a workstation, but is more difficult on servers, and can be extremely difficult in high-constraint environments (critical servers, industrial protocols etc.). These maintenance operations always have a hidden cost, linked to the unavailability of the hardware that must be updated. And there are also limitations to this strategy. Certain updates are impossible to implement, as is the case with Skype, which requires a major software update and leads to uncertainty in its status. Other updates can be extremely expensive, such as those used to correct the Spectre and Meltdown vulnerabilities that affect the microprocessors of most computers. Intel has now stopped patching the vulnerability in older processors.

A delicate decision

The problem of security comes down to a rather traditional risk analysis, in which an organization must decide which risks to protect itself against, how exposed it is to them, and which ones it should insure itself against.

In terms of protection, it is clear that certain filtering tools such as firewalls are imperative in order to preserve what is left of the perimeter. Other subjects are more controversial, such as Netflix’s decision to abandon anti-virus software and rely instead on massive data analysis to detect cyber-attacks.

It is very difficult to assess how exposed a company is to risks, since the risks are often the result of technological advances in vulnerabilities and attacks rather than a conscious decision made by the company. Attacks through denial of service, like the one carried out in 2016 using the Mirai malware, for example, are increasingly powerful and therefore difficult to counter.

The insurance strategy for cyber-risk is even more complicated, since premiums are extremely difficult to calculate. Cyber-risk is often systemic, since a flaw can affect a large number of clients. Unlike the risk of natural catastrophe, which is limited to a region, allowing insurance companies to spread the risk out over their various clients and calculate a future risk based on risk history, computer vulnerabilities are often widespread, as can be seen in recent examples such as the Meltdown, Spectre and Krack flaws. Almost all processors and WiFi terminals are vulnerable.

Another aspect that makes it difficult to estimate risks is that vulnerabilities are often latent, which means that only a small community is aware of them. The flaw used by the Wannacry malware had already been identified by the NSA, the American National Security Agency (under the name EternalBlue). The attackers who used the flaw learned about its existence from documents leaked from the American government agency itself.

How can security be improved? The basics are still fragile

Faced with a growing number of vulnerabilities and problems to solve, it seems essential to reconsider the way internet services are built, developed and operated. In other industrial sectors the answer has been to develop standards and certify products in relation to these standards. This means guaranteeing smooth operations, often in a statistical manner. The aeronautics industry, for example, certifies its aircraft and pilots and has very strong results in terms of safety. In a more closely-related sector, telephone operators in the 1970s guaranteed excellent network reliability with a risk of service disruption lower than 0.0001 %.

This approach also exists in the internet sector with certifications based on common criteria. These certifications often result from military or defense needs. They are therefore expensive and take a long time to obtain, which is often incompatible with the speed-to-market required for internet services. Furthermore, standards that could be used for these certifications are often insufficient or poorly suited for civil settings. Solutions have been proposed to address this problem, such as the CSPN certification defined by the ANSSI (French National Information Systems Security Agency). However, the scope of the CSPN remains limited.

It is also worth noting the consistent positioning of computer languages in favor of quick, easy production of computer code. In the 1970s languages that chose facility over rigor came into favor. These languages may be the source of significant vulnerabilities. The recent PHP case is one such example. Used by millions of websites, it was one of the major causes of SQL injection vulnerabilities.

The cost of cybersecurity, a question no longer asked

In strictly financial terms, cybersecurity is a cost center that directly impacts a company or administration’s operations. It is important to note that choosing not to protect an organization against attacks amounts to attracting attacks since it makes the organization an easy target. As is often the case, it is therefore worthwhile to provide a reminder about the rules of computer hygiene.

The cost of computer flaws is likely to increase significantly in the years ahead. And more generally, the cost of repairing these flaws will rise even more dramatically. We know that the point at which an error is identified in a computer code greatly affects how expensive it is to repair it: the earlier it is detected, the less damage is done. It is therefore imperative to improve development processes in order to prevent programming errors from quickly becoming remote vulnerabilities.

IT tools are also being improved. Stronger languages are being developed. These include new languages like Rust and Go, and older languages that have come back into fashion, such as Scheme. They represent stronger alternatives to the languages currently taught, without going back to languages as complicated as Ada, for example. It is essential that teaching practices progress in order to factor in these new languages.

Wasted time, stolen or lost data… We have been slow to recognize the loss of productivity caused by cyber-attacks. It must be acknowledged that cybersecurity now contributes to a business’s performance. Investing in effective IT tools has become an absolute necessity.

 

Hervé Debar, Head of Department Networks and Telecommunications services, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation.
