

Sovereignty and digital technology: controlling our own destiny

Annie Blandin-Obernesser, IMT Atlantique – Institut Mines-Télécom

Facebook has an Oversight Board, a kind of “Supreme Court” that rules on content moderation disputes. Digital giants like Google are investing in the submarine telecommunications cable market. France has had to backpedal after choosing Microsoft to host the Health Data Hub.

These are just a few examples demonstrating that the way digital technology is developing threatens more than the economic independence and cultural identity of the European Union and France. Sovereignty itself is being called into question: threatened by the digital world, yet also finding new forms of expression there.

What is most striking is that the major non-European digital platforms are appropriating the attributes of sovereignty: a transnational territory (their market, where they lay down norms), a population of internet users, a language, virtual currencies, optimized taxation, and the power to issue rules and regulations. What is unique to the digital context is control over the production and use of data and over access to information. This puts the platforms in direct competition with states and with the EU.

Sovereignty in all its forms called into question

The concept of digital sovereignty has matured since it was formalized around ten years ago as an objective to “control our own destinies online”. The current context is different from the one in which the concept emerged: sovereignty in general is now seeing a resurgence of interest, or even souverainism (an approach that prioritizes protecting sovereignty).

The topic has never been so politicized. Public debate is structured around themes such as state sovereignty with respect to the EU and EU law, economic independence, strategic autonomy on the world stage, citizenship and democracy.

In reality, digital sovereignty is built on three foundations: regulating the digital sphere, controlling its material elements and creating a democratic space. Real action is needed, or digital sovereignty risks being taken hostage by overly theoretical debates. Hence the many initiatives that claim to contribute to sovereignty.

Regulation serving digital sovereignty

The legal framework of the online world is based on values that shape Europe’s path: protecting personal data and privacy, and promoting the general interest, for example in data governance.

The text that best represents the European approach is the General Data Protection Regulation (GDPR), adopted in 2016, which aims to give citizens control over their own data, a form of individual sovereignty. This regulation is often presented as a success and a model to follow, even if that assessment must be put in perspective.

New European digital legislation for 2022

The current situation is marked by the proposal of new digital legislation, built around two regulations to be adopted in 2022.

This legislation aims to regulate platforms that connect service providers with users, or that rank or optimize content, goods or services offered or uploaded online by third parties: Google, Meta (Facebook), Apple, Amazon and many others besides.

The question of sovereignty is also present in this reform, as shown by the debate around the need to focus on GAFAM (Google, Amazon, Facebook, Apple and Microsoft).

On the one hand, the Digital Markets Act (the forthcoming European legislation) imposes strengthened obligations on “gatekeeper” platforms, on which business users and end-users depend. This affects GAFAM, even if other companies may also be concerned, such as Booking.com or Airbnb. It all depends on the outcome of the current discussions.

And on the other hand, the Digital Services Act is a regulation on digital services that will structure the responsibility of platforms, particularly for the illegal content they may host.

Online space, site of confrontation

Having legal regulations is not enough.

“The United States has GAFA (Google, Amazon, Facebook and Apple), China has BATX (Baidu, Alibaba, Tencent and Xiaomi). And in Europe, we have the GDPR. It is time to stop depending solely on American or Chinese solutions!” declared French President Emmanuel Macron in an interview on December 8, 2020.

Interview between Emmanuel Macron and Niklas Zennström (CEO of Atomico). Source: Atomico on Medium.

The international space is a site of confrontation between different kinds of sovereignty. Every individual wants to truly control their own digital destiny, but we must also reckon with the ambitions of countries such as the United States or China, which claim a general right to control or monitor their online space.

The EU and/or its member states, such as France, must therefore take action and promote sovereign solutions, or else risk becoming a “digital colony”.

Controlling infrastructure and strategic resources

With all the focus on intermediary services, there is not enough emphasis placed on the industrial dimension of this topic.

And yet the most important challenge lies in controlling vital infrastructure and telecommunications networks. The question of submarine cables, which carry 98% of the world’s digital data, receives far less media attention than 5G equipment and resistance to Huawei. Yet it demonstrates the need to promote Europe’s cable industry in the face of the hegemony of foreign companies and the arrival in the sector of giants such as Google and Facebook.

The adjective “sovereign” is also applied to other strategic resources. For example, the EU wants to secure its supply of semiconductors, as it currently depends heavily on Asia. This is the purpose of the European Chips Act, which aims to create a European ecosystem for these components. For Ursula von der Leyen, “it is not only a question of competitiveness, but also of digital sovereignty.”

There is also the question of a “sovereign” cloud, which has been difficult to implement. Establishing sovereignty here requires many conditions, including territorialization of the cloud, trust and data protection. With this objective in mind, France has created the SecNumCloud label and announced substantial funding.

The adjective “sovereign” is also used to describe certain kinds of data, such as geographic data, to which states should have access without depending on anyone. More generally, a consensus has emerged around the need to control data and access to information, particularly in areas where the challenge of sovereignty is greatest, such as health, agriculture, food and the environment. The development of artificial intelligence is closely tied to the status of this data.

Time for alternatives

Does all this mean facilitating the emergence of major European or national actors, and/or of strategic actors, start-ups and SMEs? Certainly, although such actors will still need to show better intentions than those that shamelessly exploit personal data, for example.

A pure alternative is difficult to bring about. This is why partnerships are developing, though they remain heavily criticized, to offer cloud hosting for example, such as the collaboration between Thales and OVHcloud announced in October 2021.

On the other hand, there is reason to hope. Open-source software is a credible alternative to the offerings of American private technology firms, and it needs to be better promoted, particularly in France.

Lastly, cybersecurity and cyberdefense are critical issues for sovereignty. The threat is acute, with attacks coming from Russia and China in particular. Cybersecurity is one of the major sectors in which France is currently investing heavily and positioning itself as a leader.

Sovereignty of the people

To conclude, it should be noted that challenges relating to digital sovereignty are present in all human activities. One of the major wake-up calls came in 2005, in the area of culture, when Jean-Noël Jeanneney observed that Google was defying Europe by creating Google Books and digitizing the continent’s cultural heritage.

The recent period reconnects with this vision: cultural and democratic issues are clearly essential in a time of online misinformation and its many negative consequences, particularly for elections. This means placing citizens at the center of these mechanisms and democratizing the digital world, freeing individuals from the clutches of the internet giants, whose hold is not limited to economics and sovereignty: the fabric of the major platforms is woven from human cognition, attention and freedom. In this sense, the sovereignty of the people is synonymous with resistance.

Annie Blandin-Obernesser, Law professor, IMT Atlantique – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).


Privacy as a business model

The original version of this article (in French) was published in quarterly newsletter no. 22 (October 2021) of the Chair “Values and Policies of Personal Information”.

The usual approach

The GDPR is the most visible text on this topic. It is not the oldest, but it is at the forefront for a simple reason: it provides for huge sanctions (up to 4% of a group’s consolidated international turnover). Consequently, the regulation is often treated as a threat, and the reflex is to seek protection from legal risk.

The approach is always the same: list all data processed, then find a legal framework that allows you to keep to the same old habits. This is what produces the long, dry texts that the end-user is asked to agree to with a click, most often without reading. And abracadabra, a legal magic trick – you’ve got the user’s consent, you can continue as before.

This way of doing things poses various problems.

  1. It implies that privacy is a costly position, a risk, that it is undesirable. Communication around the topic can create a disastrous impression. The message on screen says one thing (in general, “we value your privacy”), while reality says the opposite (“sign the 73-page-long contract now, without reading it”). The user knows very well when signing that everyone is lying. No, they haven’t read it. And no, nobody is respecting their privacy. It is a phony contract signed between liars.
  2. The user is positioned as an enemy. Someone who must be pressured, more or less, into signing a document in which they undertake not to sue is an enemy. It creates a relationship of distrust with the user.

But we could see these texts with a completely different perspective if we just decided to change our point of view.

Placing the user at the center

The first approach means satisfying the legal team (avoiding lawsuits) and the IT department (a few banners and buttons to add, but in reality nothing changes). What about trying to satisfy the end user?

Let us consider that privacy is desirable, preferable. Imagine that we are there to serve users, rather than trying to protect ourselves from them.

We are providing a service to users, and in so doing, we process their personal data. Not everything that is available to us, but only what is needed for said service. Needed to satisfy the user, not to satisfy the service provider.

And since we have data about the user, we may as well show it to them and allow them to take action on it. By displaying things in an understandable way, we build trust. By giving power back to the user (to delete and correct, for example), we give them a more comfortable position.

You can guess what is coming: by placing the user back in the center, we fall naturally and logically back in line with GDPR obligations.

And yet, this part of the legislation is far too often misunderstood. The GDPR defines a number of grounds on which processing personal data is lawful. Firstly, at the user’s request, to provide the service they are seeking. Secondly, to meet a whole range of legal obligations. Thirdly, for a few well-defined exceptions (research, police, justice, absolute emergency, etc.). And finally, if no such ground applies, explicit consent must be obtained from the user.

If we are asking for the user’s consent, it is because we are about to encroach on their privacy in a way that does not serve them. Consent is not the first condition for processing personal data; on the contrary, it is the last. Only when there is no legitimate ground must permission be sought before processing the data.
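
To make this ordering concrete, here is a minimal sketch of the reasoning as a decision function. The names and the four-way simplification are ours, not the regulation’s; the point is only that consent comes last:

    from enum import Enum

    class LawfulBasis(Enum):
        SERVICE = "necessary to provide the service the user asked for"
        LEGAL_OBLIGATION = "required to meet a legal obligation"
        DEFINED_EXCEPTION = "covered by a defined exception (research, police, justice, emergency)"
        CONSENT = "no other ground applies: explicit consent must be obtained first"

    def lawful_basis(needed_for_service: bool,
                     legal_obligation: bool,
                     defined_exception: bool) -> LawfulBasis:
        """Return the first applicable ground, in the order described above.

        Consent is deliberately the last resort, not the first.
        """
        if needed_for_service:
            return LawfulBasis.SERVICE
        if legal_obligation:
            return LawfulBasis.LEGAL_OBLIGATION
        if defined_exception:
            return LawfulBasis.DEFINED_EXCEPTION
        return LawfulBasis.CONSENT

    # Example: ad profiling serves the provider, not the user, so consent is required.
    print(lawful_basis(needed_for_service=False,
                       legal_obligation=False,
                       defined_exception=False).value)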

Once this point has been made, the key objection remains: the entire economic model of the digital world involves pillaging people’s private lives in order to model and profile them, to sell targeted advertising for as much money as possible, and to predict user behavior. In short, if you want to exist online, you have to follow the American model.

Protectionism

Let us try another approach. Consider that the GDPR is a text that protects Europeans, imposing our values (such as respect for privacy) on a world that ignores them. The legislation tells us that companies that do not respect these values are not welcome in the European Single Market. From this point of view, the GDPR has a clear protectionist effect: European companies respect it, while others do not. A European digital ecosystem can come into being behind this protected access to the most profitable market in the world.

From this perspective, privacy is seen as a positive thing for both companies and users. A bit like hygiene standards for a restaurant owner: they demand meticulous, serious work, but they protect customers, and an exemplary reputation is in the owner’s interest. Furthermore, it is better if the rules are mandatory, so that the bottom-feeders who do not respect the most basic standards disappear from the market.

And here, it is exactly the same mechanism. Consider that users are allies and put them back in the center of the game. If we have data on them, we may as well tell them, show them, and so on.

Here, a key element comes into play. As long as Europe’s digital industry remains stuck on the American model and rejects the GDPR, it works against itself. The business world does not like complying with standards whose utility it does not understand. It negotiates with supervisory authorities to request softer rules, delays, adjustments, exceptions, etc. In doing so, it asks for the very weapon created to protect European companies to be disarmed and left on standby.

It is a Nash equilibrium. It is in the collective interest of European companies to use the GDPR’s protectionist aspect to their advantage, but each believes that if it moves first, it will lose out to those that do not respect the standards. Normally, escaping this kind of toxic equilibrium takes a market regulation initiative, ideally a concerted effort to push movement in the right direction. For now, the closest thing to such an initiative is the increasingly heavy sanctions being handed down across Europe.
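
To see why this can be described as a Nash equilibrium, consider a toy two-firm game; the payoff numbers below are purely illustrative, ours and not the article’s:

    # Toy 2x2 game: each of two firms chooses to "comply" with the GDPR or "ignore" it.
    COMPLY, IGNORE = "comply", "ignore"
    STRATEGIES = (COMPLY, IGNORE)

    # Illustrative payoffs (firm A, firm B).
    payoffs = {
        (COMPLY, COMPLY): (3, 3),   # a protected market benefits both
        (COMPLY, IGNORE): (0, 2),   # the lone complier bears costs; its rival free-rides
        (IGNORE, COMPLY): (2, 0),
        (IGNORE, IGNORE): (1, 1),   # the status quo: everyone stuck on the old model
    }

    def pure_nash_equilibria(payoffs):
        """Profiles where neither firm gains by deviating unilaterally."""
        equilibria = []
        for a in STRATEGIES:
            for b in STRATEGIES:
                best_a = max(payoffs[(x, b)][0] for x in STRATEGIES)
                best_b = max(payoffs[(a, y)][1] for y in STRATEGIES)
                if payoffs[(a, b)] == (best_a, best_b):
                    equilibria.append((a, b))
        return equilibria

    print(pure_nash_equilibria(payoffs))
    # [('comply', 'comply'), ('ignore', 'ignore')]

With these numbers the game has two stable outcomes, collective compliance and collective inertia. Each firm’s fear of moving first keeps the market in the worse one, which is exactly why a coordinated push is needed.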

Standing out from the crowd

Of course, today’s digital reality is often not that simple. Data travels and changes hands; it is collected in one place and exploited in another. Successfully showing users how their data is processed often requires reworking many things, refocusing the process on the end user rather than on the activity.

And even so, there are cases where this kind of transparent approach is impossible. Take the data collected for targeted ad profiling: it is nearly always transmitted to third parties, to be used in ways that have no direct connection with the service the user signed up for. This is the typical use case where we try to obtain user consent (without which the processing is illegal), but where it is clear that transparency is impossible and informed consent unlikely.

Two major categories are taking shape. The first includes digital services that can place the user at the center, and present themselves as allies, demonstrating a very high level of transparency. And the second represents digital services that are incapable of presenting themselves as allies.

So clearly, a company’s position on the question of privacy can be a positive feature that sets them apart. By aiming to defend user interests, we improve compliance with regulation, instead of trying to comply without understanding. We form an alliance with the user. And that is precisely what changes everything.

Benjamin Bayart


GDPR: Impact on data collection at the international level

The European data protection regulation (GDPR), introduced in 2018, set limits on the use of trackers that collect personal data. This data is used to target advertising to users. Vincent Lefrère, associate professor in digital economy at Institut Mines-Télécom Business School, worked with Alessandro Acquisti from Carnegie Mellon University to study the impact of the GDPR on tracking users in Europe and internationally.

What was your strategy for analyzing the impact of GDPR on tracking users in different countries?

Vincent Lefrère: We conducted our research on online media such as Le Monde in France or the New York Times in the United States. We looked at whether the introduction of the GDPR has had an impact on the extent to which users are tracked and the amount of personal data collected.

How were you able to carry out these analyses at the international level?

VL: The work was carried out in partnership with researchers at Carnegie Mellon University in the United States, in particular Alessandro Acquisti, who is one of the world’s specialists in personal digital data. We worked together to devise the experimental design and create a wider partnership with researchers at other American universities, in particular the Minnesota Carlson School of Management and Cornell University in New York.

How does the GDPR limit the collection of personal data?

VL: One of the fundamental principles of the GDPR is consent. It makes it possible to require websites that collect data to obtain users’ consent before tracking them. In our study, we never gave our consent nor explicitly refused the collection of data. That way, we could observe how a website behaves with a neutral user. Moreover, one of the important features of the GDPR is that it applies to all parties who wish to process data pertaining to European citizens. As such, the New York Times must comply with the GDPR when a visitor to its website is European.
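
The interview does not detail the team’s instrumentation, but as a rough sketch of the idea, one can fetch a page without touching any consent banner and count embedded resources served from other domains. The two-label domain heuristic below is a simplification; real measurements use the Public Suffix List and full browser automation:

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import Request, urlopen

    def registered_domain(host):
        # Crude eTLD+1 heuristic: keep the last two labels ("www.lemonde.fr" -> "lemonde.fr").
        return ".".join(host.split(".")[-2:]) if host else ""

    class ResourceCollector(HTMLParser):
        """Collects the URLs of scripts, images and iframes embedded in a page."""
        def __init__(self):
            super().__init__()
            self.urls = []
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "img", "iframe"):
                src = dict(attrs).get("src")
                if src:
                    self.urls.append(src)

    def third_party_hosts(page_url):
        req = Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
        html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
        collector = ResourceCollector()
        collector.feed(html)
        site = registered_domain(urlparse(page_url).hostname)
        hosts = {urlparse(urljoin(page_url, u)).hostname for u in collector.urls}
        return {h for h in hosts if h and registered_domain(h) != site}

    # Hypothetical usage: repeat from IP addresses in different countries and compare.
    print(sorted(third_party_hosts("https://www.example.com/")))

Static parsing of this kind only sees resources declared in the HTML; trackers injected by scripts at run time require driving a real browser, which is why such studies typically automate one.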

How did you compare the impact of the GDPR on different media?

VL: We logged into different media sites with IP addresses from different countries, in particular with French and American IP addresses.

We observed that American websites limit tracking more than European websites, and therefore comply better with the GDPR, but only when we used a European IP address. It would therefore appear that the GDPR has had a more dissuasive effect on American websites for these users. However, American websites increased their tracking of American users, to whom the GDPR does not apply. One hypothesis is that this increase serves to offset the loss of data from European users.

How have online media adapted to the GDPR?

VL: We were able to observe a number of effects. First of all, online media websites have not really played along. Since consent mechanisms are somewhat vague, the formats developed in recent years have often encouraged users to accept personal data collection rather than reject it. There are reasons for this: data collection has become crucial to these websites’ business model, and little has been done to offset the loss of data resulting from the introduction of the GDPR, so it is understandable that they have stretched the limits of the law in order to keep offering high-quality content for free. With the recent update by the French National Commission on Information Technology and Liberties (CNIL) to fight against this, consent mechanisms should become clearer and more standardized.

In addition, the GDPR has limited the tracking of users by third parties, which has been replaced by first-party tracking. Before, when a user visited a news site, other companies such as Google, Amazon or Facebook could collect their data directly on the website. Now, the website itself tracks the data, which may then be shared with third parties.

Following the introduction of the GDPR, the market share of Google’s online advertising service increased in Europe, since Google is one of the few companies able to bear the cost of compliance with the regulation. This is an unintended, perverse consequence: smaller competitors have disappeared and ownership of data has become concentrated in Google’s hands.

Has the GDPR had an effect on the content produced by the media?

VL: We measured the quantity and quality of content produced by the media. Quantity simply reflects the number of posts. Quality is assessed by the user engagement rate, meaning the number of comments or likes, as well as the number of pages viewed on each visit to the website.

In the theoretical framework for our research, online media websites use targeted advertising to generate revenue. Since the GDPR makes access to data more difficult, it could decrease websites’ financing capacity and therefore lead to a reduction in content quality or quantity. By verifying these aspects, we can gain insights into the role of personal data and targeted advertising in the business model for this system.   

Our preliminary results show that after the introduction of the GDPR, the quantity of content produced by European websites was not affected, and engagement remained stable. However, European users reduced the time they spent on European websites compared to American websites. This could be due to the fact that certain American websites may have blocked access for European users, or that American websites covered European topics less once attracting European users had become less profitable. These are hypotheses that we are currently examining.

We are assessing these possible explanations by analyzing data about the newspapers’ business models, in order to estimate how important personal data and targeted advertising are to them.

By Antonin Counillon


Shedding some light on black box algorithms

In recent decades, algorithms have become increasingly complex, particularly with the introduction of deep learning architectures. This has gone hand in hand with increasing difficulty in explaining their internal functioning, which has become an important issue, both legally and socially. Winston Maxwell, legal researcher, and Florence d’Alché-Buc, researcher in machine learning, both of whom work at Télécom Paris, describe the current challenges involved in the explainability of algorithms.

What skills are required to tackle the problem of algorithm explainability?

Winston Maxwell: In order to know how to explain algorithms, we must draw on different disciplines. Our multi-disciplinary team, AI Operational Ethics, focuses not only on mathematical, statistical and computational aspects, but also on sociological, economic and legal aspects. For example, we are working on an explainability system for image recognition algorithms used, among other things, for facial recognition in airports. Our work therefore encompasses these different disciplines.

Why are algorithms often difficult to understand?

Florence d’Alché-Buc: Initially, artificial intelligence relied mainly on symbolic approaches, i.e., it simulated the logic of human reasoning. Expert systems, built on logical rules, allowed artificial intelligence to make decisions by exploiting observed facts. This symbolic framework made AI more easily explainable. Since the early 1990s, AI has increasingly relied on statistical learning, such as decision trees or neural networks, because these structures offer better performance, learning flexibility and robustness.

This type of learning is based on statistical regularities, and it is the machine that establishes the rules for exploiting them. The human provides the input features and an expected output, and the rest is determined by the machine. A neural network is a composition of functions: even if we can understand each function that composes it, their accumulation quickly becomes complex. A black box is thus created, in which it is difficult to know what the machine is calculating.
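
As a toy illustration of this “composition of functions” (our example, unrelated to the systems discussed in the interview), consider a two-layer network: each layer is simple on its own, but the intermediate values it produces have no obvious human-readable meaning:

    import numpy as np

    rng = np.random.default_rng(0)

    # Each layer is a simple function: an affine map followed by a nonlinearity.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

    def layer1(x):
        return np.tanh(W1 @ x + b1)

    def layer2(h):
        return 1 / (1 + np.exp(-(W2 @ h + b2)))  # sigmoid output

    def network(x):
        # The whole model is just the composition layer2(layer1(x)).
        return layer2(layer1(x))

    x = rng.normal(size=4)
    print("input:          ", np.round(x, 2))
    print("hidden features:", np.round(layer1(x), 2))  # 8 numbers with no obvious meaning
    print("output:         ", np.round(network(x), 2))

Stack a few dozen such layers with millions of parameters and the question “what is the machine calculating?” no longer has a short answer; that is the black box described here.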

How can artificial intelligence be made more explainable?

FAB: Current research focuses on two main approaches. The first is explainability by design: whenever a new algorithm is built, explanatory output functions are implemented alongside it, making it possible to progressively describe the steps carried out by the neural network. However, this is costly and affects the algorithm’s performance, which is why it is not yet widespread. The other, more common approach is a posteriori explanation: after an AI has established its calculation functions, we try to dissect the different stages of its reasoning. Several methods exist for this, which generally seek to break the complex model down into a set of local models that are less complicated to deal with individually.
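
The interview does not name a specific method, but one well-known family of a posteriori techniques (LIME and its relatives) works roughly as in the sketch below, which is our simplification: perturb the input around the point to be explained, query the black box, and fit a proximity-weighted linear model whose coefficients serve as local feature importances:

    import numpy as np

    rng = np.random.default_rng(1)

    def black_box(X):
        # Stand-in for an opaque model: any vectorized predictor would do.
        return np.tanh(X[:, 0] * X[:, 1]) + 0.5 * X[:, 2] ** 2

    def local_surrogate(f, x0, n_samples=500, scale=0.1):
        """Fit a linear model around x0, weighting samples by proximity to x0."""
        X = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
        y = f(X)
        weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
        sw = np.sqrt(weights)
        # Weighted least squares: scale the design matrix rows and the targets.
        A = np.hstack([X - x0, np.ones((n_samples, 1))]) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
        return coef[:-1]  # one local importance per input feature

    x0 = np.array([0.5, -1.0, 2.0])
    print("local feature importances:", np.round(local_surrogate(black_box, x0), 2))

The coefficients approximate how the black box responds to small changes in each feature near x0; the explanation is locally faithful even though the global model remains opaque.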

Why do algorithms need to be explained?

WM: There are two main reasons why the law stipulates a need for algorithm explainability. Firstly, individuals have the right to understand and to challenge an algorithmic decision. Secondly, it must be guaranteed that a supervisory institution, such as the French Data Protection Authority (CNIL) or a court, can understand how the algorithm operates, both as a whole and in a particular case, for example to make sure that there is no racial discrimination. There is therefore an individual aspect and an institutional aspect.

Does the format of the explanations need to be adapted to each case?

WM: The format depends on the entity the algorithm needs to be explained to: some formats will be adapted to regulators such as the CNIL, others to experts, and yet others to citizens. In 2015, France introduced, on an experimental basis, algorithms that detect possible terrorist activity in the event of serious threats. For this to be properly regulated, external control of the results must be easy to carry out, and the algorithm must therefore be sufficiently transparent and explainable.

Are there any particular difficulties in providing appropriate explanations?

WM: There are several things to bear in mind. One is information fatigue: when the same explanation is provided systematically, humans tend to ignore it. It is therefore important to vary the formats in which information is presented. Studies have also shown that humans tend to follow a decision given by an algorithm without questioning it, in particular because they assume from the outset that the algorithm is statistically wrong less often than they are. This is what we call automation bias. This is why we want to provide explanations that allow the human agent to understand and take into consideration the context and the limits of algorithms. The real challenge is to use algorithms to make humans better informed in their decisions, and not the other way around. Algorithms should be a decision aid, not a substitute for human beings.

What are the obstacles associated with the explainability of AI?

FAB: One aspect to be considered when we want to explain an algorithm is cyber security. We must be wary of the potential exploitation of explanations by hackers. There is therefore a triple balance to be found in the development of algorithms: performance, explainability and security.

Is this also an issue of industrial property protection?

WM: Yes, there is also the question of protecting trade secrets: some developers may be reluctant to disclose how their algorithms work for fear of being copied. Another risk is the manipulation of scores: if individuals understood how a ranking algorithm such as Google’s works, they could manipulate their position in the ranking. Manipulation is an important issue not only for search engines, but also for fraud and cyber-attack detection algorithms.

How do you think AI should evolve?

FAB: There are many issues associated with AI. In the coming decades, we will have to move away from the single objective of algorithm performance toward multiple additional objectives, such as explainability, but also fairness and reliability. All of these objectives will redefine machine learning. Algorithms have spread rapidly and have enormous effects on the evolution of society, yet they very rarely come with instructions for use. A set of adapted explanations must go hand in hand with their deployment in order to control their place in society.

By Antonin Counillon

Also read on I’MTech: Restricting algorithms to limit their powers of discrimination


Facial recognition: what legal protection exists?

Over the past decade, the use of facial recognition has developed rapidly for both security and convenience purposes. This biometrics-based technology is used for everything from video surveillance to border controls and unlocking digital devices. Such data is highly sensitive and is subject to a specific legal framework. Claire Levallois-Barth, a legal researcher at Télécom Paris and coordinator of the Values and Policies of Personal Information Chair at IMT, explains how this data is protected.

What laws govern the use of biometric data?

Claire Levallois-Barth: Biometric data “for the purpose of uniquely identifying a natural person” belongs to a specific category defined by two texts adopted at European Union level in April 2016: the General Data Protection Regulation (GDPR) and the Directive for Police and Criminal Justice Authorities. This category of data is considered highly sensitive.

The GDPR applies to all processing of personal data in both private and public sectors.

The Directive for Police and Criminal Justice Authorities pertains to processing carried out for the purposes of prevention, detection, investigation and prosecution of criminal offences, or the execution of criminal penalties, by competent authorities (judicial authorities, police and other law enforcement authorities). It specifies that biometric data may only be used in cases of absolute necessity and subject to appropriate guarantees for the rights and freedoms of the data subject. Such processing may only be carried out in three cases: when authorized by Union law or Member State law, when it relates to data manifestly made public by the data subject, or to protect the vital interests of the data subject or another person.

What principles has the GDPR established?

CLB: The basic principle is that collecting and processing biometric data is prohibited, due to the significant risk of violating basic rights and freedoms, including the freedom to come and go anonymously. There are, however, a series of exceptions. The processing must fall under one of these exceptions (express consent of the data subject, protection of his or her vital interests, reasons of substantial public interest) and respect all of the obligations established by the GDPR. The key principle is that the use of biometric data must be strictly necessary and proportionate to the objective pursued. In certain cases, it is therefore necessary to obtain the individual’s consent, even when the facial recognition system is being used on an experimental basis. There is also the minimization principle, which systematically asks: is there a less intrusive way of achieving the same goal? In any case, organizations must carry out an assessment of the impact on people’s rights and freedoms.

What do the principles of proportionality and minimization look like in practice?

CLB: One example is when the Provence-Alpes-Côte d’Azur region wanted to experiment with facial recognition at two high schools, in Nice and Marseille. The CNIL ruled that a system involving students, most of whom were minors, for the sole purpose of streamlining and securing access was not proportionate to those purposes: hiring more supervisors or implementing a badge system would offer a sufficient solution.

Which uses of facial recognition have the greatest legal constraints?

CLB: Facial recognition can be used for various purposes. The purpose of authentication is to verify whether someone is who he or she claims to be. It is implemented in technology for airport security and used to unlock your smartphone. These types of applications do not pose many legal problems. The user is generally aware of the data processing that occurs, and the data is usually processed locally, by a card for example.

Identification, on the other hand, which is used to pick out one person within a group, requires the creation of a database cataloguing individuals. The size of this database depends on the purpose pursued, but the general tendency is for the number of individuals included to grow. Identification can be used, for example, to find wanted or missing persons, or to recognize friends on a social network. It requires increased vigilance, because it can become extremely intrusive.

Lastly, facial recognition provides a means of individualizing a person. There is no need to identify the individual: the goal is “simply” to follow people’s movements through a store to assess their customer journey, or to analyze their emotions in response to an advertisement or while waiting at the checkout. The main argument advertisers use to justify this practice is that the data is quickly anonymized and no record is kept of the person’s face. Here, as with identification, facial recognition usually occurs without the person’s knowledge.

How can we make sure that data is also protected internationally?

CLB: The GDPR applies in the 27 Member States of the European Union which have agreed on common rules. Data can, however, be collected by non-European companies. This is the case for photos of European citizens collected from social networks and news sites. This is one of the typical activities of the company Clearview AI, which has already established a private database of 3 billion photos.

The GDPR lays down a specific rule for personal data leaving European Union territory: it may only be exported to a country ensuring a level of protection deemed comparable to that of the European Union. Yet few countries meet this condition. A first option is therefore for the data importer and exporter to enter into a contract or adopt binding corporate rules. The other option, for data stored on servers on U.S. territory, was to rely on the Privacy Shield agreement concluded between the Federal Trade Commission (FTC) and the European Commission. However, this agreement was invalidated by the Court of Justice of the European Union in the summer of 2020. We are currently in the midst of a legal and political battle, and a complicated one, since data becomes much harder to control once it has been exported. This explains why certain stakeholders, such as Thierry Breton (the current European Commissioner for the Internal Market), have emphasized the importance of ensuring that European data is stored and processed in Europe, on Europe’s own terms.

Despite the risks and ethical issues involved, is facial recognition sometimes seen as a solution for security problems?

CLB: It can in fact be a great help when implemented in a way that respects our fundamental values; it all depends on the conditions of use. For example, if law enforcement officers know that a protest potentially involving armed individuals will be held at a specific time and place, facial recognition can prove very useful at that time and place. It is a completely different scenario if it is used constantly, across an entire region and population, in order to prevent shoplifting.

This summer, the London Court of Appeal ruled that an automatic facial recognition system used by Welsh police was unlawful. The ruling emphasized a lack of clear guidance on who could be monitored and accused law enforcement officers of failing to sufficiently verify whether the software used had any racist or sexist bias. Technological solutionism, a school of thought emphasizing new technology’s capacity to solve the world’s major problems, has its limitations.

Is there a real risk of this technology being misused in our society?

CLB: A key question we should ask is whether a gradual shift is underway, caused by the accumulation of technologies deployed at every turn. We know that video-surveillance cameras are installed on public roads, yet we do not know about the additional features gradually added to them, such as facial recognition or behavioral recognition. The European Convention on Human Rights, the GDPR, the Directive for Police and Criminal Justice Authorities and the CNIL provide safeguards in this area.

However, these provide a legal response to an essentially political problem. We must prevent the accumulation of several types of intrusive technology without prior reflection on the overall result, without taking a step back to consider the consequences. What kind of society do we want to build together, especially in the context of a health and economic crisis? The debate on our society remains open, as do the means of implementation.

Interview by Antonin Counillon