GDPR

GDPR comes into effect. Now it’s time to think about certification seals!

The European Union’s new General Data Protection Regulation (GDPR) comes into effect on May 25. Out of the 99 articles contained in the regulation, two are specifically devoted to the question of certification. While establishing seals to demonstrate compliance with the regulation seems like a good idea in order to reassure citizens and economic stakeholders, a number of obstacles stand in the way.

 

Certification marks are ubiquitous these days since they are now used for all types of products and services. As consumers, we have become accustomed to seeing them everywhere: from the organic farming label for products on supermarket shelves to Energy certification for appliances. They can either be a sign of compliance with legislation, as is the case for CE marking, or a sign of credibility displayed by a company to highlight its good practices. While it can sometimes be difficult to make sense of the overwhelming number of seals and marks that exist today, some of them represent real value. AOC appellations, for example, are well-known and sought out by many consumers. So, why not create seals or marks to display responsible personal data management?

While this may seem like an odd question to citizens who see these seals as nothing more than red labels on free-range chicken packaging, the European Union has taken it into consideration. So much so, that Articles 42 and 43 of the new General Data Protection Regulation (GDPR) are devoted to this idea. The text encourages the creation of seals and marks to enable companies established in the EU that process citizens’ data responsibly to demonstrate their compliance with the regulation. On paper, everything points to the establishment of clear signs of trust in relation to personal data protection.

However, a number of institutional and economic obstacles stand in the way. In fact, the question of seals is so complicated that IMT’s Personal Data Values and Policies Chair* (VPIP) has made it a separate research topic, especially in terms of how the GDPR affects the issue. This research, carried out between the adoption of the European text on April 14, 2016 and the date it is set to come into force, May 25, 2018, has led to the creation of a work of more than 230 pages entitled Signes de confiance : l’impact des labels sur la gestion des données personnelles (Signs of Trust — the impact of seals on personal data management).

For Claire Levallois-Barth, a researcher in Law at Télécom ParisTech and coordinator of the publication, the complexity stems in part from the number and heterogeneity of personal data protection marks. In Europe alone, there are at least 75 different marks, with a highly uneven geographic distribution. “Germany alone has more than 41 different seals,” says the researcher. “In France, we have nine, four of which are granted by the CNIL (National Commission for Computer Files and Individual Liberties).” Meanwhile, the United Kingdom has only two and Belgium only one. Each country has its own approach, largely for cultural reasons. It is therefore difficult to make sense of such a disparate assortment of marks with very different meanings.

Seals for what?

One of the key questions is: what should the seal describe? Services? Products? Processes within companies? “It all depends on the situation and the aim,” says Claire Levallois-Barth. Until recently, the CNIL granted the “digital safe box” seal to certify that a service respected “the confidentiality and integrity of data that is stored there” according to its own criteria. At the same time, the Commission also has a “Training” seal that certifies the quality of training programs on European or national legislative texts. Though both were awarded by the same organization, they do not have the same meaning. So saying that a company has been granted “a CNIL seal” provides little information. One must delve deeper into the complexity of these marks to understand what they mean, which seems contradictory to the very principle of simplification they are intended to represent.

One possible solution could be to create general seals to encompass services, internal processes and training for all individuals responsible for data processing at an organization. However, this would be difficult from an economic standpoint. For companies it could be expensive — or even very expensive — to have their best practices certified in order to receive a seal. And the more services and teams there are to be certified, the more time and money companies would have to spend to obtain this certification.

On March 31, 2018, the CNIL officially transitioned from a labeling activity to a certification activity.

The CNIL has announced that it will stop awarding seals for free. “The Commission has decided that once the GDPR comes into effect it will concentrate instead on developing or approving certification standards. The seals themselves will be awarded by accredited certification organizations,” explains Claire Levallois-Barth. Afnor Certification or Bureau Veritas, for example, could offer certifications for which companies would have to pay. This would allow them to cover the time spent assessing internal processes and services, analyzing files, auditing information systems, etc.

And for all the parties involved, the economic profitability of certification seems to be the crux of the issue. In general, companies do not want to spend tens of thousands, or even hundreds of thousands, of euros on certification just to receive a little-known seal. Certification organizations must therefore find the right formula: comprehensive enough to make the seal valuable, but without representing too much of an investment for most companies.

While it seems unlikely that a general seal will be created, some stakeholders are examining the possibility of creating sector-specific seals based on standards recognized by the GDPR, for cloud computing for example. This could occur if criteria were approved, either at the national level by a competent supervisory authority within a country (the CNIL in France), or at the European Union level by the European Data Protection Board (EDPB). A critical number of seals would then have to be granted. GDPR sets out two options for this as well.

According to Article 43 of the GDPR, certification may either be awarded by the supervisory authorities of each country, or by private certification organizations. In France, the supervisory authority is the CNIL, and certification organizations include Afnor and Bureau Veritas. These organizations are themselves monitored. They must be accredited either by the supervisory authority, or by the national accreditation body, which is the COFRAC in France.

This naturally leads to the question: if the supervisory authorities develop their own sets of standards, will they not tend to favor the accreditation of organizations that use these standards? Eric Lachaud, a PhD student in Law and Technology at Tilburg and guest at the presentation of the work by the Personal Data Values and Policies Chair on March 8, says, “this clearly raises questions about competition between the sets of standards developed by the public and private sectors.” Sophie Nerbonne, Director of Compliance at the CNIL, who was interviewed at the same event, says that the goal of the national commission is “not to foreclose the market but to draw on [its] expertise in very precise areas of certification, by acting as a data protection officer.”

A certain vision of data protection

It should be acknowledged, however, that the area of expertise of a supervisory authority such as the CNIL, a pioneer in personal data protection in Europe, is quite vast. Beyond serving as a data protection officer and being responsible for ensuring compliance with the GDPR within an organization that has appointed it, the CNIL, as an independent authority, is in charge of regulating issues involving personal data processing, governance and protection, as indicated by the seals it has granted until now. It is therefore hard to imagine that the supervisory authorities would not emphasize their large area of expertise.

And even more so since not all the supervisory authorities are as advanced as the CNIL when it comes to certification in relation to personal data. “So competition between the supervisory authorities of different countries is an issue,” says Eric Lachaud. Can we hope for a dialogue between the 28 Member States of the European Union in order to limit this competition? “This leads to the question of having mutual recognition between countries, which has still not been solved,” says the Law PhD student. As Claire Levallois-Barth is quick to point out, “there is a significant risk of ‘a race to the bottom’.” However, there would be clear benefits. By recognizing the standards of each country, the countries of the European Union have the opportunity to give certification a truly transnational dimension, which would make the seals and marks valuable throughout Europe, thereby making them shared benchmarks for the citizens and companies of all 28 countries.

The high stakes of harmonization extend beyond the borders of the European Union. While the CE standard is criticized at times for how easy it is to obtain in comparison to stricter national standards, it has successfully imposed certain European standards around the world. Any manufacturer that hopes to reach the 500 million-person market that the European Union represents must meet this standard. For Éric Lachaud, this provides an example of what convergence between the European Member States can lead to: “We can hope that Europe will reproduce what it has done with CE marking: that it will strive to make the voices of the 28 states heard around the world and to promote a certain vision of data protection.”

The uncertainties surrounding the market for seals must be weighed against the aims of the GDPR. The philosophy of this regulation is to establish strong legislation for technological changes with a long-term focus. In one way, Articles 42 and 43 of the GDPR can be seen as a foundation for initiating and regulating a market for certification. The current questions being raised then represent the first steps toward structuring this market. The first months after the GDPR comes into effect will define what the 28 Member States intend to build.

 

*The Personal Data Values and Policies Chair brings together the Télécom ParisTech, Télécom SudParis graduate schools, and Institut Mines-Télécom Business School. It is supported by Fondation Mines-Télécom.

[box type="info" align="" class="" width=""]

Personal data certification seals – what is the point?

For companies, having a personal data protection seal allows them to meet the accountability requirement imposed by Article 24 of the GDPR: all organizations responsible for processing data must be able to demonstrate compliance with the regulation. This requirement also applies to personal data subcontractors.

This is what leads many experts to think that the primary application for seals will be business-to-business relationships rather than business-to-consumer relationships. SME economic stakeholders could seek certification in order to meet growing demand amongst their customers, especially major firms, for compliance in their subcontracting operations.

Nevertheless, the GDPR is a European regulation. This means that compliance is assumed: all companies are supposed to abide by the regulation as soon as it comes into effect. A compliance seal cannot therefore be used as a marketing tool. It is, however, likely that the organizations responsible for establishing certification standards will choose to encourage seals that go beyond the requirements of the GDPR. In this case, stricter control over personal data processing than what is called for by the legislation could be a valuable way to set a company apart from its competitors. [/box]

Q4Health

Q4Health: a network slice for emergency medicine

How can emergency response services be improved? The H2020 Q4Health project raised this question. The European consortium, which includes EURECOM, the University of Malaga and RedZinc, has demonstrated the possibility of relaying video between first responders at an emergency scene and doctors located remotely. To do so, the researchers had to develop innovative tools for 4G network slicing. This work has paved the way for applications in other types of services and lays the groundwork for 5G.

 

Doctors are rarely the first to intervene in emergency situations. In the event of traffic accidents, strokes or everyday accidents and injuries, victims first receive care from nearby witnesses. The response chain is such that citizens then usually hand the situation over to a team of trained first responders — which does not necessarily include a doctor — who then bring the victim to the hospital. But before the patient reaches the doctor for a diagnosis, time is precious. Patients’ lives depend on medical action being taken as early as possible in this chain. The European H2020 Q4Health project studied a video streaming solution to provide doctors with real-time images of victims at the emergency scene.

The Q4Health project, which started in January 2016 and was completed in December 2017, had to face the challenge of ensuring that the video stream was of high enough quality to make a diagnosis. To this end, the project consortium, which includes EURECOM, the University of Malaga in Spain and the project leader, the SME RedZinc, proved the feasibility of programming a mobile 4G network that can be virtually sliced. The network “slice” created in this way includes all the functions of the regular network, from its structural portion (antennas) to its control software. It is isolated from the rest of the network and reserved for communication between emergency response services and nearby doctors.

Navid Nikaein, a communication systems researcher at EURECOM, states: “The traditional method of creating a network slice consists of establishing a contract with an operator who guarantees the quality of service for the slice.” But there is a problem with this sort of system: emergency response services do not have complete control over the network; they remain dependent on the operator. “What we have done with Q4Health is to give emergency response services real control over inbound and outbound data traffic,” adds the researcher.

Controlling the network

In order to carry out this demonstration, the researchers developed application programming interfaces (APIs) for the infrastructure network (the central portion of the internet, which interconnects all the other access points) and for the mobile network that connects 4G devices, such as telephones, to an access point (this is referred to as an access network). These programming interfaces allow emergency response services to define priority levels for their members. The service can use the SIM card associated with a firefighter or paramedic’s professional mobile phone to identify the user’s network connection. Via the API, the service can then grant the paramedic privileged access to the network, enabling dynamic use of the slice reserved for emergency services.
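The priority mechanism described above can be illustrated with a minimal sketch. This is not the project’s actual API, which is not public: the function names, priority values and the IMSI-based lookup are all assumptions made for illustration only.

```python
# Illustrative sketch only: Q4Health's real APIs are not published, so the
# names, priority classes and IMSI values below are hypothetical.

# Priority classes a slice manager might expose to an emergency service.
PRIORITY_DEFAULT = 0
PRIORITY_EMERGENCY = 9

# Professional SIMs (identified by their IMSI) registered by the service.
REGISTERED_RESPONDERS = {"208011234567890", "208019876543210"}

def assign_priority(imsi: str) -> int:
    """Return the scheduling priority for a device attaching to the network.

    A device whose SIM is registered to the emergency service is mapped
    onto the reserved slice with top priority; every other device stays
    on the default best-effort class.
    """
    if imsi in REGISTERED_RESPONDERS:
        return PRIORITY_EMERGENCY
    return PRIORITY_DEFAULT
```

The point of the design is that the emergency service itself, not the operator, decides which SIMs get privileged access to the slice.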

In the Q4Health project, this privileged access for first responders allows them to stream video independently of data traffic in the area, which is a great advantage in crowded areas. Without such privileged access, in a packed stadium, for example, it would be impossible to transmit high-quality video over a 4G network. To ensure the quality of the video stream, a system analyzes the radio rate between the antenna and the first responders’ device — for the Q4Health project, this is not necessarily a smartphone but glasses equipped with a camera to facilitate emergency care. The video rate is then adjusted depending on the radio rate. “If there is a lower radio rate, video processing is optimized to prevent deterioration of image quality,” explains Navid Nikaein.
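The rate-adaptation step can be sketched as a simple mapping from the measured radio rate to a target encoder bitrate. The project’s actual adaptation logic is not published; the headroom fraction and the bitrate bounds below are assumptions chosen only to make the idea concrete.

```python
# Hypothetical sketch: the real Q4Health adaptation algorithm is not public.
# We assume the encoder can be driven by a target bitrate derived from the
# measured radio rate, leaving headroom for transport overhead.

def target_video_bitrate(radio_rate_kbps: float,
                         headroom: float = 0.8,
                         min_kbps: float = 300.0,
                         max_kbps: float = 4000.0) -> float:
    """Map the measured radio rate to a video encoder bitrate.

    Only a fraction (headroom) of the radio rate is given to video, the
    rest covering protocol overhead and retransmissions; the result is
    clamped so the encoder never drops below a diagnostically usable
    floor or exceeds the camera's maximum output.
    """
    usable = radio_rate_kbps * headroom
    return max(min_kbps, min(max_kbps, usable))
```

With this shape of controller, a degraded radio link lowers the encoded bitrate smoothly instead of letting the stream stall or the image quality collapse.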

Through this system, first responders are able to give doctors a real-time view of the situation. These may be doctors at the hospital to which the patient will be transported, or volunteer doctors nearby who are available to provide emergency assistance. They obtain not only visual information about the victim’s condition, which facilitates diagnosis, but also gain a better understanding of the circumstances of the accident by observing the scene. They can therefore guide non-physician responders through delicate actions, or even allow them to perform treatment which could not be carried out without consent from a doctor.

Beyond its medical application, Q4Health has above all proved the feasibility of network slicing through a control protocol in which the service provider, rather than the operator, has control. This demonstration is of particular interest for the development of the 5G network, which will require network slicing. “As far as I know, the tool we have developed to achieve this result is one of the first of its kind in the world,” notes Navid Nikaein. Highlighting these successful results, achieved in part thanks to EURECOM’s OpenAirInterface and Mosaic5G platforms, the researcher adds: “Week after week, we are increasingly contacted about using these tools.” This has opened up a wide range of prospects for use cases, representing opportunities to accelerate 5G prototyping. In addition to emergency response services, many other sectors could be interested in this sort of network slicing, starting with security services or transport systems.

 

TeraLab, a European data sanctuary

Over the course of three months, TeraLab was involved in two European H2020 projects on the industry of the future: MIDIH and BOOST 4.0. This confirms the role played by TeraLab, IMT’s big data and artificial intelligence platform, as a trusted third party and facilitator of experimentation. TeraLab created a safe place for these projects, far from competitive markets, where industry stakeholders could agree to share their data.

 

“Data sharing is the key to opening up research in Europe,” says Anne-Sophie Taillandier. According to the director of TeraLab—IMT’s big data and AI platform—a major challenge exists in the sharing of data between industrial players and academics. For SMEs and research institutions, having access to industrial data means working on real economic and professional problems. This is an excellent opportunity for accelerating prototypes and proofs of concept and removing scientific barriers. Yet for industrial stakeholders, the owners of the data, this sharing must not compromise security. “They want guarantees,” says the Director, who was ranked last February among the top 20 individuals driving AI in France by French business magazine L’Usine Nouvelle.

It is in this perspective of offering guarantees that the TeraLab platform joined the consortia of two European projects from the H2020 program: BOOST 4.0 (January 2018) and MIDIH (October 2017). The first project brought together 50 industrial and academic partners, including 13 pilot plants in Europe. The project is intended to create a replicable model of a smart industry in which data would form the basis for reflections on operational efficiency, user experience and even the creation of a business model. This level of ambition requires significant work on interoperability, security and data sharing. “But it is clear that Volvo and Volkswagen, both members of the Boost 4.0 consortium, will not provide access to their data without first experiencing a certain level of trust,” explains Anne-Sophie Taillandier. A platform like TeraLab allows companies to benefit from technological and legal advantages that make it a safe workspace.

The MIDIH project, on the other hand, seeks to provide companies with the technological, financial and material resources required for developing innovative solutions for the industry of the future through sub-grants. “In practical terms, the H2020 project will finance calls for projects on logistics, predictive maintenance and steel cutting and will offer support to successful applicants,” the TeraLab director explains. The companies selected through these calls for projects will be able to develop proofs of concept for solving industrial problems experienced by SMEs. They use platforms like TeraLab to accomplish this, since they “provide the assurance of the sovereignty and cybersecurity of the data the prototypes will produce.” For these companies, the ability to use an independent platform of this magnitude is truly beneficial in accelerating their projects.

A platform recognized at European level

TeraLab’s involvement in these projects is also due to the recognition it has earned at European level. In 2016, the Big Data Value Association (BDVA) granted TeraLab its Silver i-Space Label. This recognition is far from trivial, since the BDVA leads the European public-private partnership on big data. BOOST 4.0 is the result of reflection carried out by this same partnership, which works to advance the major issues that industrial stakeholders have presented to the European Commission. “The context of the European Commission is incredible because many different stakeholders gravitate there, but within a given theme, everyone knows each other,” Anne-Sophie Taillandier admits. “The Silver i-Space Label awarded in 2016 provided both recognition from big data stakeholders and strategic positioning within this environment.”

In Europe, few platforms like TeraLab exist. Only ten hold the Silver i-Space Label, the highest level, and TeraLab is the only French platform to have received this recognition. It therefore represents a valuable gateway to involvement in European projects. “It legitimizes our responses to calls for bids such as these two projects on industry 4.0,” says the Director of the platform. The industry of the future is a topic TeraLab had already worked on before joining the MIDIH and BOOST 4.0 projects. “One of our strengths, which was recognized by both consortia, was our ability to develop a community of researchers and innovators on this subject,” says Anne-Sophie Taillandier. She also reminds us that industry is not the only theme TeraLab has explored in the context of in-depth projects. This offers good prospects for TeraLab to be involved in other European projects on other specialized areas, such as healthcare.

 

Soft Landing

Soft Landing: A partnership between European incubators for developing international innovation

How can European startups be encouraged to reach beyond their countries’ borders to develop internationally? How can they come together to form new collaborations? The Soft Landing project, in which business incubator IMT Starter is participating, allows growing startups and SMEs to discover the ecosystems of different European incubators. The goal is to offer them support in developing their business internationally.

 

“Europe certainly acknowledges the importance of each country developing its own ecosystem of startups and SMEs, yet each ecosystem is developing independently,” explains Augustin Radu, business manager at IMT Starter. The Soft Landing project, which receives funding from the European Union’s Horizon 2020 program, seeks to find a solution to this problem. “The objective is, on the one hand, to promote exchanges between the different startup and SME ecosystems, and on the other hand, to provide these companies with a more global vision of the European market beyond their borders,” he explains.

Soft Landing resulted from collaboration between five European incubators: Startup Division in Lithuania, Crosspring Lab in the Netherlands, GTEC in Germany, F6S Network in the UK, and IMT Starter, the incubator run by Télécom SudParis and Télécom École de Management in Évry, France. As part of the project, each of these stakeholders must first discover the startup and SME ecosystems developing in their partners’ countries. Next, interested startups that see a need for this support will be able to join an incubator abroad for a limited period.

 

Discovering each country’s unique characteristics

Over the course of the two-year project, representatives from each country will visit partner incubators to discover and learn about the startup ecosystem that is developing there. The representatives are also seeking to identify specific characteristics, skills, and potential markets in each country that could interest startups in their own country. “Each country has its specific areas of interest: the Germans work a lot on the theme of industry, whereas in the Netherlands and Lithuania, the projects are more focused on FinTech,” Augustin Radu adds. “At IMT Starter, we are more focused on information technologies.”

Once they have completed these discovery missions, the representatives will return to their countries’ startups to present the potential opportunities. “At IMT Starter, we have planned a mission in Germany in March, another in the Netherlands in April, in May we will host a foreign representative, and in June we will go to Lithuania,” Augustin Radu explains. “There may be other missions outside the European Union as well, in Silicon Valley and in India.”

 

Hosting foreign startups in the incubators

Once each incubator’s specific characteristics and possibilities have been defined, the startups can request to be hosted by a partner ecosystem for a limited period. “As an incubator, we will host startups that will benefit from our customized support,” says Augustin Radu. “They will be able to move into our offices, take advantage of our network of industrial partners, and work with our researchers and laboratories. The goal is to help them find talent to help grow their businesses.”

“Of course, there is a selection process for startups that want to join an incubator,” the business manager adds. “What are their specific needs? Does this match the host country’s areas of specialization?” In addition, the startup or SME should ideally have an advanced level of maturity, be well rooted in its country of origin and have a product that is already finalized. According to Augustin Radu, these are the prerequisites for a company to benefit from this opportunity to continue its development abroad.

 

Removing the barriers that separate startups and research

“While all four of the partner structures are radically different, they are all very well-rooted in their respective countries,” the business manager explains. IMT Starter is in fact the only incubator participating in this project that is connected to a higher education and research institution, IMT, a factor that Augustin Radu believes will greatly enhance the French incubator’s visibility.

In addition to fostering the development of startups abroad, the Soft Landing project also removes barriers between companies and the research community by proposing that researchers at schools associated with IMT Starter form partnerships with the young foreign companies. “Before this initiative, it was difficult to imagine a French researcher working with a German startup! Whereas today, if a young European startup joins our incubator because it needs our expertise, it can easily work with our laboratories.”

The project therefore represents a means of accelerating the development of innovation, both by building bridges between the research community and the startup ecosystem, as well as by pushing young European companies to seek an international presence. “For those of us in the field of information technology, if we don’t think globally we won’t get anywhere!” Augustin Radu exclaims. “When I see that in San Francisco, companies immediately think about exporting outside the USA, I know our French and European startups need to do the same thing!” This is a need the Soft Landing project seeks to fulfill by broadening the spectrum of possibilities for European startups. This could allow innovations produced in the Old World to receive the international attention they deserve.


A third ERC grant in 3 years at EURECOM

Getting a grant from the European Research Council is not an easy task but this is what Davide Balzarotti, Professor in the Security Department, has just accomplished. He is the third EURECOM professor to obtain an ERC grant in the past 3 years.

 


Davide, you just got an ERC Consolidator grant, one of the most prestigious research grants in Europe. What is your feeling today?

Everybody knows it is one of the most selective grants in Europe, so I’m obviously very proud of that. It is definitely a major step in my career. It is an important recognition for the efforts I have made to get this grant and for the relevance of the project I presented. Plus, I was told there are only 329 researchers across Europe – and 38 researchers in France – who got this grant this year, so I am particularly honoured to be one of them. I am also very happy for EURECOM since it has been awarded one ERC grant every year for the past 3 years… Considering there are only 24 professors, it is a real success!

 

Will this grant change your day-to-day life as a researcher at EURECOM?

I am sure it will! In different ways even. First, I won’t have to worry about getting money for the next few years. The Consolidator grant is a five-year grant that represents €2 million. This grant is not only generous, it also offers recognition and visibility. In fact, the two other ERC grantees at EURECOM – David Gesbert & Petros Elia – explained to me that I will certainly be solicited more by the research community. It will also give me a lot of independence and creative freedom to conduct the project for which I got this grant: BITCRUMBS – Towards a Reliable and Automated Analysis of Compromised Systems. I will dedicate 70% of my time to the project, but I can manage it the way I want depending on the people I will work with. I actually need to hire a team of seven researchers – five Ph.D. students and two post-docs – and one engineer. On top of that, I will be involved in the EURECOM ERC committee that helps scientists benefit from the experience of those who have already received such grants. This committee actually helped me a lot in writing my proposal, so I look forward to helping my colleagues in return.

 

BITCRUMBS seems to be a ground-breaking project in the computer security area. Could you explain its main objective?

BITCRUMBS is actually a brand new way of addressing computer security issues. And this ERC grant will help me pursue very ambitious research objectives with this project, which covers a wide range of digital security areas. I hope our results will change the way digital security will be managed in the future. The main objective of BITCRUMBS is to rethink what we call the “incident response” (IR). It is clear that research on prevention and detection helps make devices more secure, but since a 100% secure system does not exist, improving IR can be very useful too. Incident response addresses the aftermath of a digital security breach that, if not handled properly, can lead to a data breach or a system collapse. We all know the risk of security breaches is now higher than ever. Attackers frequently break into corporate networks, government services and even critical infrastructures. Almost half of computers worldwide are infected by malware. A voting machine can be altered to rig the results of an election, a connected car can be hacked to drive off a cliff or a security camera can be controlled over the Internet to spy on our houses and our families. The problem is that we do not have the tools to analyze these attacks and understand their causes! This has to change.

With BITCRUMBS, I want to give investigators the possibility to quickly verify the state of compromised systems and help citizens trust the result of computer forensic investigations. In the future, I believe we should design digital systems the way we design airplanes – secure against crashes but also equipped with black boxes to collect all the data required to support an incident investigation.

 

What is your strategy to reach this objective?

I want to propose a more scientific and comprehensive methodology for analyzing compromised systems. This will be done in three steps. The first part of the project will focus on measuring the effectiveness and accuracy of the techniques currently used to analyze compromised systems, and on assessing the reliability of their data sources. This will strengthen the theoretical and scientific foundations of IR techniques. The second part of the project will focus on the design and implementation of new automated analysis techniques able to cope with advanced threats and the analysis of IoT devices. These techniques will have to be robust, scalable and generic – capable of working on different classes of devices. Of course, the results given by these new techniques will need to be reliable and based on a solid theoretical foundation. The last step will introduce a new “forensic analysis by design” methodology. My goal is to provide a set of guidelines for the design of future systems and software – to help developers provide the information required to support the analysis of compromised systems.

 

What about the scientific and technological impacts?

I hope research conducted in BITCRUMBS will have a long-lasting impact – not only scientific – on the area of incident response and on the way we analyze compromised systems. First, BITCRUMBS will bring a scientific foundation to IR, based on repeatable experiments and precise measurements of the reliability of data and techniques used in current investigations. It will also have a practical impact since it will produce open source tools and improve existing software that are regularly used by companies and law enforcement to deal with computer attacks. Last but not least, BITCRUMBS will have an impact on our society. Improving the IR process will increase the trust that citizens have in the result of digital investigations. In order to clearly show the impact of BITCRUMBS in different fields and scenarios, we will address our objectives using real case studies borrowed from traditional computer software and embedded systems.

 

What are the main challenges you will be facing in BITCRUMBS?

Like any very broad project, BITCRUMBS’ success depends on many factors. From a scientific point of view, it mainly depends on combining very different research skills, including memory forensics, embedded systems security, malware and binary analysis, distributed systems, and operating system design and defenses. I have considerable experience in each of these research areas, but in order to reduce the risks, I have already secured key collaborations with leading universities and security companies, so that I can find research partners from different areas to work with. The other potential risk is the possible failure to develop some of the techniques I have envisioned. This is actually a very common risk in research projects that introduce novel solutions. For this reason, for each disruptive approach I would like to develop, I have also thought of less risky techniques with which I have experience, and I have already conducted preliminary investigations to evaluate the feasibility of a few ideas. But above all, one of the main challenges will be to find motivated postdocs in digital security willing to work in Europe. Most Ph.D. students go to the US for their postdoc or are hired by security companies offering good conditions and interesting opportunities. I hope the challenges and potential results of BITCRUMBS can attract some of them.

[divider style=”dotted” top=”15″ bottom=”15″]

The original version of this article was published on the EURECOM website

[divider style=”dotted” top=”15″ bottom=”15″]

Also read on I’MTech:


Will 5G turn the telecommunications market upside-down?

The European Commission is anticipating the arrival of the fifth generation of mobile technology (5G) in 2020. It is expected to significantly increase data speeds and offer additional uses. However, the extent of the repercussions on the telecommunications market and on services is still difficult to evaluate, even for the experts. Some believe that 5G will be no more than a technological step up from 4G, in the same way that 4G progressed from 3G. In which case, it should not create radical change in the positions of the current economic stakeholders. Others believe that 5G has the potential to cause a complete reshuffle, stimulating the creation of new industries which will disrupt the organization among longstanding operators. Marc Bourreau sheds light on these two possibilities. He is an economist at Télécom ParisTech, and in March he co-authored a report for the Centre on Regulation in Europe (Cerre) entitled “Towards the successful deployment of 5G in Europe: What are the necessary policy and regulatory conditions?”.

 

Can we predict what 5G will really be like?

Marc Bourreau: 5G is still taking shape. It is a broad term which encompasses the current technical developments in mobile technologies. The first of these will not reach commercial maturity until 2020, and they will continue to develop afterwards, just as 4G is still developing today. At present, 5G could go in a number of directions. But we can already imagine the likely scenarios from the positioning of economic actors and regulators.

Is seeing 5G as a simple progression from 4G one of those scenarios?  

MB: 5G partly involves improving 4G, using new frequency bands, increasing antenna density, and improving the efficiency of wireless technology to allow greater data speeds. One way of seeing 5G is indeed as an improved 4G. However, this is probably the smallest progression that can be envisaged. Under this hypothesis, the structure of the market would be fairly similar, with mobile operators keeping an economic model based on sales of 5G subscriptions.

Doesn’t this scenario worry the economic stakeholders?

MB: Not really. In this case, the regulations would not change a great deal, which would mean there would be no need for a major adaptation by the longstanding stakeholders. There may be questions over investment for the operators, for example in antennae for which the density is set to rise. They would have to find a way of financing the new infrastructure. There would perhaps also be questions surrounding installation. A large density of 5G antennae would mean that development would primarily take place in urban areas, where installing antennae poses fewer problems.

Which scenario could change the way the current mobile telecommunications market is structured?

MB: In contrast to the scenario of a simple progression stands that of a total revolution. In this case, 5G would provide network solutions for particular industries. Economically speaking, we use the term industry “verticals”. Connected cars are a vertical, as are health and connected objects. These sectors could develop new services with 5G. It would be a true revolution, as these verticals require access to the network and infrastructure. If a carmaker creates an autonomous vehicle, it must be able to receive and send data on a dedicated network. This means that antennae and bandwidth will need to be shared with mobile phone operators.

To what extent would this “revolution” scenario affect the market?

MB: Newcomers will not have their own infrastructure. They will therefore be virtual operators, as opposed to classical operators. They will probably have to rent access. This means the longstanding operators will have to change their economic model to incorporate this new role. In this scenario, the current operators would become network actors rather than service actors. Sharing the network like this could require regulations to help the different actors negotiate with each other. As each virtual operator will have different needs, the quality of service will not be identical for each vertical. The question of preserving or adapting net neutrality for 5G will inevitably arise.

Isn’t the scenario of a revolution, along with new services, more advantageous?

MB: It certainly promises to use the full potential of technology to achieve many things. But it also involves risk. It could disrupt operators’ economic models, and who knows if they will be able to adapt? Will the longstanding operators be capable of investing in infrastructure which will then be open to all? An optimistic view would be to say that by opening the networks, the many services created will generate value which will, in part, come back to the operators, allowing them to finance the development of the network. But we should not forget the slightly more pessimistic view that the value might come back to the newcomers only. If this were to happen, the longstanding operators would no longer be able to invest, infrastructure would not be rolled out on a large scale, and the scenario of a revolution would not be possible.

Of the two scenarios, a “progression” or a “revolution”, is one more likely than the other?

MB: In reality, we have to see the two scenarios as an evolution over time, rather than a choice. Once 5G is launched in 2020, there will be room for development. The technology will evolve under the umbrella term “5G”, which brings together the underlying building blocks. After all, each mobile generation brings about changes which consumers do not necessarily notice. When the technology is launched commercially, it will probably be more of a progression from 4G. The question is, will it then develop into the more ambitious scenario of a revolution?

What will influence the deepening role of 5G?

MB: The choice of scenario now depends on choices of standardization. Setting the technical standards can either facilitate or limit transformations in the market. Standardization is carried out across large economic areas. There are hopes of partnerships, for example between Europe and Korea, to unify standards and produce a homogeneous 5G. But we must not forget that the different economic areas may also have their own interest in sticking with a progression or opting for a revolution.

How do the interests of each economic area come into play?

MB: This technology is interesting both from an industrial point of view and a social one. Choices may be made on each of these aspects, depending on the policy preferred by an economic area. From an industrial point of view, a conservative approach will favor standards that protect the current actors. Conversely, other choices may be made to allow new actors to emerge, which would correspond more to the “revolution” scenario. From a social point of view, we need to look at what the consumer wants, whether the new services created risk disrupting those currently on offer, and so on.

What roles do the various stakeholders play in the decision-making process? 

MB: The choice may be decentralized to the stakeholders. Operators are in discussion and negotiation with the vertical stakeholders. I think it is worth letting this process play out, allowing it to generate experimentation. The situation is similar to the early days of the mobile web, when no one knew what the right application was, or the right economic model, and so on. For 5G, no one knows what the relationship between mobile operators and carmakers, for example, might be. They must be left to find their own common ground. Behind this, the role of public policy is to support experimentation and correct market failures, but only if these do occur. The European Commission is there to coordinate the stakeholders and support them in their transformation and experimentation. The H2020 program is a typical example: research projects bringing together scientists and industrial actors to come up with solutions.

This article is part of our dossier 5G: the new generation of mobile is already a reality

Silense, Marius Preda, Télécom SudParis

Will we soon be able to control machines with simple gestures?

The “Silense” European project launched in May 2017 is aimed at redefining the way we interact with machines. By using ultrasound technology similar to sonar, the researchers and industrialists participating in this collaboration have chosen to focus on 3D motion sensing technology. This technology could allow us to control our smartphone or house with simple gestures, without any physical contact with a tactile surface.

 

Lower the volume on your TV from your couch just by lowering your hand. Close the blinds in your bedroom by simply squeezing your fingers together. Show your car’s GPS the right direction to take by lifting your thumb. It may sound like scenes from a science fiction movie. Yet these scenarios are part of the real-life objectives of the European H2020 project “Silense”, which stands for (Ultra)Sound Interfaces and Low Energy iNtegrated SEnsors. The project brings together 42 academic and industrial partners from eight countries across the continent. This consortium, which is particularly large even for an H2020 project, will work from 2017 to 2020 to develop new human-machine interfaces based on ultrasound.

“What we want to do is replace tactile commands with commands the users can make from a distance, by moving their hands, arms or body,” explains Marius Preda, a researcher with Télécom SudParis, one of the project’s partners. To accomplish this, scientists will develop technology that is similar to sonar. An audio source will emit an inaudible sound that fills the air. When the sound wave hits an obstacle, it bounces back and returns to the source. Receivers placed at the same level as the transmitter record the wave travel times and determine the distance between the source and the obstacle. A 3D map of the environment can then be built. “It’s the same principle as an ultrasound,” the researcher explains.
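The sonar principle described here boils down to a simple time-of-flight calculation. As a rough, hypothetical sketch (the speed of sound and the timing value are illustrative, not figures from the project):

```python
# Estimate the distance to an obstacle from an ultrasonic echo.
# The emitted wave travels to the obstacle and back, so the one-way
# distance is half the round trip covered at the speed of sound.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def distance_from_echo(round_trip_time_s: float) -> float:
    """Distance in metres to the obstacle that produced the echo."""
    return SPEED_OF_SOUND * round_trip_time_s / 2

# An echo received 5 ms after emission corresponds to an obstacle
# about 0.86 m from the source.
print(round(distance_from_echo(0.005), 2))
```

Repeating this measurement across an array of receivers, each recording a slightly different travel time, is what lets the system locate obstacles and assemble a 3D map of the environment.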

In the case of the Silense project, the source will be made up of several transmitters, and there will be many more receivers than in a sonar. The goal is to improve the perception of the obstacles, thus improving the resolution of the 3D image that is produced. This should make it possible to detect smaller variations in shape, and therefore gestures that are more complex than those that are currently detectable. “Today we can see if a hand is open or closed, but we cannot distinguish one finger that is up from two fingers that are up and squeezed together,” Marius Preda explains.

Télécom SudParis is leading the project’s software aspect. Its researchers’ mission is to develop image processing algorithms to recognize the gestures users make. Using deep learning with neural networks, the scientists want to create a dictionary of distinctly different gestures. These will need to be recognizable by the ultrasound sensors regardless of the hand or arm’s position in relation to the sensor.

This is no easy task: the first step is to identify distinguishing gestures, ones the algorithms cannot confuse. The next steps involve reducing noise to improve the detection of shapes, sometimes in ways specific to the type of use: a sensor in the wall of a house will not have the same shortcomings as one in a car door. Finally, the researchers will also have to take the uniqueness of each user into account. Two different people will not make a given sign in the same way or at the same speed.

“Our primary challenge is to develop software that can detect the beginning and end of a movement for any user,” explains Marius Preda, while emphasizing how difficult this task is, given the fluid nature of human gestures: “We do not announce when we are going to start or end a gesture. We must therefore succeed in perfectly segmenting the user’s actions into a chain of gestures.”
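To make the segmentation problem concrete, here is a deliberately simplified sketch: it assumes we already have a per-frame “motion energy” signal and marks a gesture wherever that signal stays above a threshold. The real system works on far richer 3D ultrasound data, so this only illustrates the start/end-detection problem, not the project’s actual method:

```python
# Toy gesture segmentation: given per-frame motion energy (how much
# the sensed 3D image changed between frames), return the start and
# end indices of each contiguous high-energy run, i.e. each "gesture".

def segment_gestures(energy, threshold=0.5):
    """Return (start, end) frame index pairs for each detected gesture."""
    segments, start = [], None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i                    # movement begins
        elif e < threshold and start is not None:
            segments.append((start, i))  # movement ends
            start = None
    if start is not None:                # stream ends mid-gesture
        segments.append((start, len(energy)))
    return segments

stream = [0.1, 0.2, 0.8, 0.9, 0.7, 0.1, 0.1, 0.6, 0.8, 0.2]
print(segment_gestures(stream))  # → [(2, 5), (7, 9)]
```

Real gestures are fluid, so a fixed threshold would misfire constantly; that is precisely why detecting the beginning and end of a movement for any user is described as the primary challenge.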

 

Moving towards the human-machine interaction of tomorrow

To meet this challenge, researchers at Télécom SudParis are working very closely with the partners in charge of the hardware aspect. Over the course of the project’s three-year period, the consortium hopes to develop new, smaller generations of sensors. This would make it possible to increase the number of transmitters and receivers on a given surface area, therefore improving the image resolution. This innovation, combined with new image processing algorithms, should significantly increase the catalogue of shapes recognized by ultrasound.

The Silense project is being followed very closely by car and connected object manufacturers. A human-machine interface that uses ultrasound offers several advantages. Compared to the current standard interface, touch, it improves vehicle safety by reducing the attention required to press a button or touch screen. In the case of smartphones or smart homes, it will mean greater convenience for consumers.

The ultrasound interface that is proposed here must also be compared with its main competitor: interaction through visual recognition—Kinect cameras, for example. According to Marius Preda, the use of ultrasound removes the lighting problems encountered with video in situations of overexposure (bright light in a car, for example) or underexposure (inside a house at night). In addition, the shape segmentation, for example for hands, is easier using 3D acoustic imaging. “If your hand is the same color as the wall behind you, it will be difficult for the camera to recognize your gesture,” the researcher explains.

Silense therefore has high hopes of creating a new way to interact with machines in our daily lives. By the end of the project, the consortium hopes to establish three demonstrators: one for a smart house, one integrated into a car, and one in a screen like that of a smartphone. If these first proof-of-concept studies prove conclusive, don’t be surprised to see drivers making big gestures in their cars someday!

 


Three IMT projects receive Celtic-Plus Awards

Three projects involving IMT schools were featured among the winners at the 2017 Celtic-Plus Awards. The Celtic-Plus program is committed to promoting innovation and research in the areas of telecommunications and information technology. The program is overseen by the European initiative Eureka, which seeks to strengthen the competitiveness of industries as a whole.

 

[box type=”shadow” align=”” class=”” width=””]

SASER (Safe and Secure European Routing):
Celtic-Plus Innovation Award

The SASER research program brings together operators, equipment manufacturers and research institutes from France, Germany and Finland. The goal of this program is to develop new concepts for strengthening the security of data transport networks in Europe. To achieve this goal, the SASER project is working on new architectures, specifically by imagining networks that integrate, or are distributed via, the latest technological advances in cloud computing and virtualization. Télécom ParisTech, IMT Atlantique and Télécom SudParis are partners in this project led by the Nokia group.

[/box]

[box type=”shadow” align=”” class=”” width=””]

NOTTS (Next Generation Over-The-Top Multimedia Services):
Excellence Award for Services and Applications

NOTTS seeks to resolve the new problems created by over-the-top multimedia services. These services, such as Netflix and Spotify, are not controlled by the operators, yet they put a strain on internet networks. The project proposes to study the technical problems facing the operators and to seek solutions for creating new business models that would be acceptable to all parties involved. It brings together public and private partners from six countries: Spain, Portugal, Finland, Sweden, Poland, and France, where Télécom SudParis is based.

[/box]

[box type=”shadow” align=”” class=”” width=””]

H2B2VS (HEVC Hybrid Broadcast Broadband Video Services):
Excellence Award for Multimedia

New video formats such as ultra-HD and 3D are testing the limits of broadcasting networks and high-speed networks, both of which have limited bandwidth. The H2B2VS project aims to resolve this bandwidth problem by combining the two networks: the broadcasting network would transmit the main information, while the high-speed network would transmit additional information. H2B2VS includes industrial partners and public research institutes in France, Spain, Turkey, Finland and Switzerland. Télécom ParisTech is part of this consortium.

[/box]

Two IMT projects also received awards at the 2016 Celtic-Plus Awards.


With AutoMat, Europe hopes to adopt a marketplace for data from connected vehicles

Data collected by communicating vehicles represent a goldmine for providers of new services. But in order to buy and sell this data, economic players need a dedicated marketplace. Since 2015, the AutoMat H2020 project has been developing such an exchange platform. To achieve this mission by 2018, the end date for the project, a viable business model will have to be defined. Researchers at Télécom ParisTech, a partner in the project, are currently tackling this task.

 

Four wheels, an engine, probably a battery, and most of all, an enormous quantity of data generated and transmitted every second. There is little doubt that in the future, which is closer than we may think, cars will be intelligent and communicating. And beyond recording driving parameters to facilitate maintenance, or transmitting information to improve road safety, the data acquired by our vehicles will represent a market opportunity for third-party services.

But in order to create these new services, a secure platform for selling and buying data must still be developed, with sufficient volume to be attractive. This is the objective of the AutoMat project, launched in April 2015 and funded by the H2020 European research programme, which is developing a marketplace prototype.

The list of project members includes two service providers: MeteoGroup and Here, companies which specialize in weather forecasts and mapping respectively. For these two stakeholders, data from cars will only be valuable if it comes from many different manufacturers. For MeteoGroup, the purpose of using vehicles as weather sensors is to have access to real-time information about temperatures or weather conditions nearly anywhere in Europe. But a single brand would not have a sufficient number of cars to be able to provide this much information: therefore data from several manufacturers must be aggregated. This is no easy task since, for historical reasons, each one has its own unique format for storing data.

 


Data from communicating cars could, for example, optimize meteorological measurements by using vehicles as sensors.

 

To simplify this task without giving anyone an advantage, the Technical University of Dortmund is participating in the project by defining a new model with a standard data format agreed upon by all parties. This, however, requires automobile manufacturers to change their processes in order to integrate a data formatting phase. But the cost of this adaptation is marginal compared to the great potential value of their data combined with that of their competitors. The Renault and Volkswagen groups, as well as the Fiat research centre, are partners in the AutoMat project, seeking to identify how to tap into this underlying economic potential.
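The harmonization step such a common data model enables can be pictured with a small sketch, in which two fictitious manufacturers report the same measurements under different field names and units, and a per-manufacturer adapter maps each record to one shared schema. All field names and formats here are invented for illustration; they are not AutoMat’s actual data model:

```python
# Each manufacturer stores vehicle data in its own format; a small
# adapter per manufacturer converts records to a common schema
# (temperature in Celsius, separate latitude/longitude fields).

def adapter_a(record):
    # Fictitious manufacturer A: Celsius, position as a "lat,lon" string
    lat, lon = (float(x) for x in record["pos"].split(","))
    return {"temp_c": record["temp"], "lat": lat, "lon": lon}

def adapter_b(record):
    # Fictitious manufacturer B: Fahrenheit, separate coordinate fields
    return {"temp_c": round((record["temp_f"] - 32) * 5 / 9, 2),
            "lat": record["latitude"], "lon": record["longitude"]}

ADAPTERS = {"A": adapter_a, "B": adapter_b}

def normalize(manufacturer, record):
    """Convert a manufacturer-specific record to the common format."""
    return ADAPTERS[manufacturer](record)

# Both records below describe the same observation near Paris.
print(normalize("A", {"temp": 21.0, "pos": "48.85,2.35"}))
print(normalize("B", {"temp_f": 69.8, "latitude": 48.85, "longitude": 2.35}))
```

Once every record passes through such an adapter, a service provider like MeteoGroup can aggregate readings from all brands without caring which manufacturer produced them.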

 

What sort of business model?

In reality, it is less difficult to convince manufacturers than it is to find a business model for the marketplace prototype. This is why Télécom ParisTech’s Economics and Social Sciences Department (SES) is contributing to the AutoMat project. Giulia Marcocchia, a PhD student in Management Sciences who is working on the project, describes different aspects which must be taken into consideration:

“We are currently carrying out experiments on use cases, but the required business model is unique, so it takes time to define. Up until now, manufacturers have used data transmitted by cars to optimize maintenance or reduce life cycle costs. In other sectors, there are marketplaces for selling data in packets or on a subscription basis to users clearly identified as either intermediary companies or final consumers.
But in the case of a marketplace for aggregated car data, the users are not as clearly defined: the economic players interested in this data will only emerge as the platform is defined and the ecosystem around connected cars takes shape.”

For researchers in the SES department, this is the whole challenge: studying how a new market is created. To do so, they have adopted an effectual approach. Valérie Fernandez, an innovation management researcher and director of the department, describes this method as one in which “the business model tool is not used to analyze a market, but rather as a tool to foster dialogue between stakeholders in different sectors of activity, with the aim of creating a market which does not currently exist.”

The approach focuses on users: what do they expect from the product and how will they use it? This concerns automobile manufacturers who supply the platform with the data they collect as much as service providers who buy this data. “We have a genuine anthropological perspective for studying these users because they are complex and multifaceted,” says Valérie Fernandez. “Manufacturers become producers of data but also potential users, which is a new role for them in a two-sided market logic.”

The same is true for drivers, who are potential final users of the new services generated and may also have ownership rights for data acquired by vehicles they drive. From a legal standpoint nothing has been determined yet and the issue is currently being debated at the European level. But regardless of the outcome, “The marketplace established by AutoMat will incorporate questions about drivers’ ownership of data,” assures Giulia Marcocchia.

The project runs until March 2018. In its final year, the different use cases should make it possible to define a business model that meets the needs of the platform’s different users. Should it fulfill its objective, AutoMat will represent a useful tool for developing intelligent vehicles in Europe.

[divider style=”normal” top=”20″ bottom=”5″] 

Ensuring a secure, independent marketplace

In addition to the partners mentioned in the article above, the AutoMat project brings together stakeholders responsible for securing the marketplace and handling its governance. Atos is in charge of the platform, from its design to data analysis in order to help identify the market’s potential. Two partners, ERPC and Trialog, are also involved in key aspects of developing the marketplace: cyber-security and confidentiality. Software systems engineering support for the various parties involved is ensured by ATB, a non-profit research organization.

[divider style=”normal” top=”20″ bottom=”20″] 


OISPG: Promoting open innovation in Europe

On January 1st, 2017, Pierre Simay was appointed as the new OISPG Rapporteur. This group of experts from the European Commission supports and promotes open innovation practices, particularly in the context of the Horizon 2020 program.

 

“Today’s companies can no longer innovate alone. They exist in innovation ecosystems in which the collaborative model is prioritized,” explains Pierre Simay, Coordinator for International Relations at IMT. Open innovation is a way of viewing research and innovation strategy as being open to external contributions through collaboration with third parties.

The Horizon 2020 framework program pools all the European Union funding for research and innovation. The program has a budget of nearly €80 billion over a seven-year period (2014-2020). Each year, calls for proposals are published to finance research and innovation projects, both individual and collaborative. The European Commission services in charge of Horizon 2020 have established external advisory groups to advise them in the preparation of these calls. Since 2010, IMT has been actively involved in the expert group on open innovation: OISPG, the Open Innovation Strategy and Policy Group. Pierre Simay, the recently appointed OISPG Rapporteur, presents this group and the role played by IMT within it.

 

What is the OISPG?

Pierre Simay: OISPG is a DG CONNECT expert group, the European Commission’s Directorate General for Information and Communication Technology. The open innovation phenomenon has increased over the past few years, with the appearance of more collaborative and open models. These models are based, for example, on user participation in research projects and the development of living labs in Europe (EnoLL network). I should also mention the new research and innovation ecosystems that have emerged around platforms and infrastructures. This is the case for the European “Fiware” initiative which, by making copyright-free software building block platforms available to developers and SMEs, seeks to facilitate the creation and roll-out of the internet applications of the future in what are referred to as the vertical markets (healthcare, energy, transportation, etc.).

Open innovation refers to several concepts and practices – joint laboratories, collaborative projects, crowdsourcing, user innovation, entrepreneurship, hackathons, technological innovation platforms, and Fablabs – which are still relatively new and require increasingly cross-sectoral collaborative efforts. Take farms of the future, for example, where precision agriculture requires cooperation between farms and companies in the ICT sector (robotics, drones, satellite imagery, sensors, big data…) for the deployment and integration of agronomic information systems. OISPG was created in response to these kinds of challenges.

Our mission focuses on two main areas. The first is to advise the major decision-makers of the European Commission on open innovation matters. The second is to encourage major private and public European stakeholders to adopt open innovation, particularly through the broad dissemination of the practical examples and best practices featured in the OISPG reports and publications. To accomplish its mission, OISPG is organized around a panel of 20 European experts from industry (INTEL, Atos Origin, CGI, Nokia, Mastercard…), the academic world (Amsterdam University of Applied Sciences, ESADE, IMT…), and the institutional sector (DG CONNECT, the European Committee of the Regions, Enoll, ELIG…).

 

What does your role within this group involve?

PS: My role is to promote the group’s work and maintain links with the European Commission experts who consult us on current issues related to the Horizon 2020 program and who seek an external perspective on open innovation and its practices. Examples include the policy being established in the area of digital innovation hubs, and reflections on blockchain technology and the collaborative issues it involves. OISPG must also propose initiatives to improve the definition of collaborative approaches and the assessment criteria used by the Commission in financing Horizon 2020 projects. In Europe, we still suffer from cumbersome and rigid administrative procedures, which are not always compatible with the nature of innovation and its current demands: speed and flexibility.

My role also includes supporting DG CONNECT in organizing its annual conference on open innovation (OI 2.0). This year, it will be held from June 13 to 14 in Cluj-Napoca, Romania. During the conference, political decision-makers, professionals, theorists and practitioners will be able to exchange and work together on the role and impacts of open innovation.

 

What issues/opportunities exist for IMT as a member of this group?

PS: IMT is actively involved in open innovation, with major programs such as those of the Fondation Télécom (FIRST program), our Carnot institutes and our experimentation platforms (for example, the TeraLab for Big Data). Our participation in OISPG positions us at the heart of European collaborative innovation issues, enables us to meet with political decision-makers and numerous European research and innovation stakeholders to create partnerships and projects. This also allows us to promote our expertise internationally.