privacy, data protection regulation

Privacy as a business model

The original version of this article (in French) was published in the quarterly newsletter no. 22 (October 2021) of the “Values and Policies of Personal Information” Chair.

The usual approach

The GDPR is the most visible text on this topic. It is not the oldest, but it is at the forefront for a simple reason: it includes huge sanctions (up to 4% of consolidated international group turnover for companies). Consequently, this regulation is often treated as a threat. We seek to protect ourselves from legal risk.

The approach is always the same: list all the data processed, then find a legal framework that allows you to carry on with the same old habits. This is what produces the long, dry texts that the end-user is asked to agree to with a click, most often without reading them. And abracadabra, a legal magic trick – you’ve got the user’s consent and you can continue as before.

This way of doing things poses various problems.

  1. It implies that privacy is a cost, a risk, something undesirable. Communication around the topic can create a disastrous impression. The message on screen says one thing (in general, “we value your privacy”), while reality says the opposite (“sign this 73-page contract now, without reading it”). The user knows very well when signing that everyone is lying. No, they haven’t read it. And no, nobody is respecting their privacy. It is a phony contract signed between liars.
  2. The user is positioned as an enemy. Someone you have to push, more or less forcibly, into signing a document in which they undertake not to sue is an enemy. It creates a relationship of distrust with the user.

But these texts could be seen from a completely different perspective if we just decided to change our point of view.

Placing the user at the center

The first approach means satisfying the legal team (avoiding lawsuits) and the IT department (a few banners and buttons to add, but in reality nothing changes). What about trying to satisfy the end user?

Let us consider that privacy is desirable, preferable. Imagine that we are there to serve users, rather than trying to protect ourselves from them.

We are providing a service to users, and in so doing, we process their personal data. Not everything that is available to us, but only what is needed for said service. Needed to satisfy the user, not to satisfy the service provider.

And since we have data about the user, we may as well show it to them, and allow them to take action. By displaying things in an understandable way, we create a phenomenon of trust. By giving power back to the user (to delete and correct, for example) we give them a more comfortable position.

You can guess what is coming: by placing the user back in the center, we fall naturally and logically back in line with GDPR obligations.

And yet, this part of the legislation is far too often misunderstood. The GDPR sets out a limited number of cases in which processing users’ personal data is authorized. First, at the user’s request, to provide the service they are seeking. Second, for a whole range of legal obligations. Third, for a few well-defined exceptions (research, police, law, absolute emergency, etc.). And finally, if none of these grounds applies, you have to ask for the user’s explicit consent.

If we are asking for the user’s consent, it is because we are in the process of damaging their privacy in a way that does not serve them. Consent is not the first condition of all personal data processing. On the contrary, it is the last. If there really is no other legitimate ground, permission must be asked before processing the data.

Once this point has been raised, the key objection remains: the entire economic model of the digital world involves pillaging people’s private lives, to model and profile them, sell targeted advertising for as much money as possible, and predict user behavior. In short, if you want to exist online, you have to follow the American model.

Protectionism

Let us try another approach. Consider that the GDPR is a text that protects Europeans, imposing our values (like respect of privacy) in a world that ignores them. The legislation tells us that companies that do not respect these values are not welcome in the European Single Market. From this point of view, the GDPR has a clear protectionist effect: European companies respect the GDPR, while others do not. A European digital ecosystem can come into being with protected access to the most profitable market in the world.

From this perspective, privacy is seen as a positive thing for both companies and users. A bit like how a restaurant owner handles hygiene standards: a meticulous, serious approach is required, but it protects customers, and it is in the owner’s interest to have an exemplary reputation. Furthermore, it is better if it is mandatory, so that the bottom-feeders who don’t respect the most basic rules disappear from the market.

And here, it is exactly the same mechanism. Consider that users are allies and put them back in the center of the game. If we have data on them, we may as well tell them, show them, and so on.

Here, a key element comes into play. As long as Europe’s digital industry remains stuck on the American model and rejects the GDPR, it is in the opposite position. The business world does not like complying with standards when it does not understand their utility. It negotiates with the supervisory authorities to request softer rules, delays, adjustments, exceptions, etc. And so, it asks for the weapon created to protect European companies to be disarmed and left on standby.

It is a Nash equilibrium. It is in the interest of all European companies to use the GDPR’s protectionist aspect to their advantage, but each believes that if it moves first, it will lose out to those who do not respect the standards. Normally, escaping this kind of toxic equilibrium takes a market regulation initiative – ideally, a concerted effort to stimulate movement in the right direction. For now, the closest thing to a regulatory initiative is the increasingly heavy sanctions being handed out all over Europe.
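To make the equilibrium concrete, here is a purely illustrative payoff table; the numbers are invented for the sake of the example and are not drawn from the article or any study. Two European firms, A and B, each choose whether to embrace the GDPR as a selling point or to keep the old data-hungry model; in each cell, the first number is A’s payoff and the second is B’s.

                          B embraces the GDPR    B keeps the old model
  A embraces the GDPR           (3, 3)                  (0, 2)
  A keeps the old model         (2, 0)                  (1, 1)

If B keeps the old model, A does better by keeping it too (1 rather than 0), and vice versa: the bad outcome (keep, keep) is a stable equilibrium, even though both firms would be better off at (embrace, embrace). That is exactly the kind of lock-in that only a coordinated regulatory push can break.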

Standing out from the crowd

Of course, the digital reality of today is often not that simple. Data travels and changes hands; it is collected in one place but exploited in another. To show users how their data is processed, many things often need to be reworked. The process needs to be focused on the end user rather than on the activity.

And even so, there are some cases where this kind of transparent approach is impossible. For example, the data collected for targeted ad profiling. This data is nearly always transmitted to third parties, to be used in ways that have no direct connection with the service the user signed up for. This is the typical use case for which we try to obtain user consent (without which the processing is illegal), but where it is clear that transparency is impossible and informed consent unlikely.

Two major categories are taking shape. The first includes digital services that can place the user at the center, and present themselves as allies, demonstrating a very high level of transparency. And the second represents digital services that are incapable of presenting themselves as allies.

So clearly, a company’s position on the question of privacy can be a positive feature that sets them apart. By aiming to defend user interests, we improve compliance with regulation, instead of trying to comply without understanding. We form an alliance with the user. And that is precisely what changes everything.

Benjamin Bayart

Data governance

Data governance: trust it (or not?)

The original version of this article (in French) was published in the quarterly newsletter no. 20 (March 2021) of the Values and Policies of Personal Information (VP-IP) Chair.

On 25 November 2020, the European Commission published its proposal for a European data governance regulation, the Data Governance Act (DGA), which aims to “unlock the economic and societal potential of data and technologies like artificial intelligence”. The proposed measures seek to facilitate access to and use of an ever-increasing volume of data. As such, the text seeks to contribute to the movement of data between Member States of the European Union (as well as with States located outside the EU) by promoting the development of “trustworthy” systems for sharing data within and across sectors.

Part of a European strategy for data

This proposal is the first of a set of measures announced as part of the European strategy for data presented by the European Commission in February 2020. It is intended to dovetail with two other proposed regulations dated 15 December 2020: the Digital Services Act (which aims to regulate the provision of online services, while maintaining the principle that no general surveillance obligation may be imposed) and the Digital Markets Act (which organizes the fight against unfair practices by large platforms towards the companies that offer services through them). A legislative proposal for the European Health Data Space is expected by the end of 2021, along with possibly a “data law.”

The European Commission also plans to create nine shared European data spaces in strategic economic sectors and areas of public interest, ranging from manufacturing and energy to mobility, health, financial data and Green Deal data. The first challenge to overcome in this new data ecosystem will be to transcend national self-interests and those of the market.

The Data Governance Act proposal does not therefore regulate online services, content or market access conditions: it organizes “data governance,” meaning the conditions for sharing data, with the market implicitly presumed to be the paradigm for sharing. This is shown in particular by an analysis carried out through the lens of trust (which could be confirmed in many other ways).

The central role of trust

Trust plays a central and strategic role in all of this legislation since the DGA “aims to foster the availability of data for use, by increasing trust in data intermediaries and by strengthening data-sharing mechanisms across the EU.” “Increasing trust”, “building trust”, ensuring a “higher level of trust”, “creating trust”, “taking advantage of a trustworthy environment”, “bringing trust” – these expressions appearing throughout the text point to its fundamental aim.

However, although the proposal takes great care to define the essential terms on which it is based (“data”, “reuse”, “non-personal data”, “data holder”, “data user”, “data altruism”, etc.), the term “trust”, along with the conditions for ensuring it, receives no such semantic clarification – even though “trust” is mentioned some fifteen times.

As in the past with the concept of dignity – which was part of the sweeping declarations of rights and freedoms in the aftermath of the Second World War yet was left undefined, despite being the cornerstone of all bioethical texts – the concept of trust is never made explicit. Lawmakers, and those to whom the obligations established by the legal texts are addressed, are expected to know enough about what dignity and trust are to implicitly share the same understanding. As with the notion of time for Saint Augustine, everyone is supposed to understand what it is, even though they are unable to explain it to someone else.

While some see this as allowing a certain degree of “flexibility”, so that the concept of trust – like the notion of privacy – can adapt to a wide range of situations and a changing society, others see this vagueness, whether intentional or not, at best as a lack of necessary precision and at worst as an undeclared intention.

The implicit understanding of trust

In absolute terms, it is not very difficult to understand the concept of trust underlying the DGA (as in the Digital Services Act, in which the European Commission proposes, among other things, a mysterious new category of “trusted flaggers”). To make it explicit, the main objectives of the text simply need to be examined more closely.

The DGA represents an essential step for open data. The aim is clearly stated: to set out the conditions for the development of the digital economy by creating a single data market. The goal therefore focuses on introducing a fifth freedom: the free movement of data, after the free movement of goods, services, capital and people.  

While the GDPR created a framework for personal data protection, the DGA proposal intends to facilitate its exchange, in compliance with all the rules set out by the GDPR (in particular data subjects’ rights and consent when appropriate).

The scope of the proposal is broad.

The term data refers to both personal and non-personal data, whether generated by public bodies, companies or citizens. As a result, interaction with personal data legislation is particularly significant. Moreover, the DGA proposal is guided by principles for data management and re-use that were originally developed for research data. The “FAIR” data principles stipulate that such data must be findable, accessible, interoperable and re-usable, while providing for exceptions that are not listed and remain unspecified at this time.

To ensure trust in the sharing of this data, the category of “data intermediary” is created, which is the precise focus of all the political and legal discourse on trust. In the new “data spaces” which will be created (meaning beyond those designated by the European Commission), data sharing service providers will play a strategic role, since they are the ones who will ensure interconnections between data holders/producers and data users.

The “trust” which the text seeks to increase works on three levels:

  1. Trust among data producers (companies, public bodies, data subjects), so that they share their data
  2. Trust among data users regarding the quality of this data
  3. Trust in the “trustworthy” intermediaries of the various data spaces

Data intermediaries

These intermediaries emerge as the organizers of data exchange between companies (B2B) or between individuals and companies (C2B). They are the facilitators of the single data market: without them, it cannot be created from a technical standpoint, nor can it function. This intermediary position gives them access to the data they make available, so their impartiality must be ensured.

The DGA proposal differentiates between two types of intermediaries: “data sharing service providers,” meaning those who work “against remuneration in any form”  with regard to both personal and non-personal data (Chapter III) and “data altruism organisations” who act “without seeking a reward…for purposes of general interest such as scientific research or improving public services” (Chapter VI).

For the first category, the traditional principle of neutrality is applied.

To ensure this neutrality, which “is a key element to bring trust”, the proposal states that “it is therefore necessary that data sharing service providers act only as intermediaries in the transactions, and do not use the data exchanged for any other purpose”. This is why data sharing services must be set up as legal entities separate from the other activities carried out by the service provider, in order to avoid conflicts of interest. In the division of digital labor, intermediation becomes a specialization in its own right. To create a single market, we fragment the technical bodies that make it possible and establish a legal framework for their activities.

In this light, the real meaning of “trust” is “security” – security for data storage and transmission, nothing more, nothing less. Personal data security is ensured by the GDPR; the security of the market here relates to that of the intermediaries (meaning their trustworthiness, which must be legally guaranteed) and the transactions they oversee, which embody the effective functioning of the market.

From the perspective of a philosophical theory of trust, all of the provisions outlined in the DGA are therefore meant to act on the motivation of the various stakeholders, so that they feel a high enough level of trust to share data. The hope is that a secure legal and technical environment will allow them to transition from simply trusting in an abstract way to having trust in data sharing in a concrete, unequivocal way.

It should be noted, however, that when there is a conflict of values between economic or entrepreneurial freedom and the obligations intended to create conditions of trust, the market wins. 

In the impact assessment carried out for the DGA proposal, the Commission declared that it would choose neither a high-intensity regulatory intervention option (compulsory certification for sharing services or compulsory authorization for altruism organizations), nor a low-intensity regulatory intervention option (optional labeling for sharing services or voluntary certification for altruism organizations). It opted instead for a solution it describes as “alternative” but which is in reality very low-intensity (lower even, for example, than optional labeling in terms of guarantees of trust). In the end, a notification obligation with ex post monitoring of compliance for sharing services was chosen, along with the simple possibility of registering as an “organisation engaging in data altruism.”

It is rather surprising that the strategic option selected includes so few safeguards to ensure the security and trust championed so frequently by the European Commission in its official communication.

An intention based on European “values”

Margrethe Vestager, Executive Vice-President of the European Commission, strongly affirmed this: “We want to give business and citizens the tools to stay in control of data. And to build trust that data is handled in line with European values and fundamental rights.”

But in reality, the text’s entire reasoning shows that the values underlying the DGA are ultimately those of the market – a market that admittedly respects fundamental European values, but that must entirely shape the European data governance model. This amounts to taking a position on the data processing business model used by the major tech platforms. These platforms, whether developed in the Silicon Valley ecosystem or in another part of the world with a desire to dominate, have continued to gain disproportionate power on the strength of their business model. Their modus operandi is inherently based on the continuous extraction and complete control of staggering quantities of data.

The text is thus based on a set of implicit reductions that are presented as indisputable policy choices. The guiding principle, trust, is equated with security, meaning the security of transactions. Likewise, the European values upheld in Article 2 of the Treaty on European Union, which make no mention of the market, are implicitly reduced to those that make the market work. Lastly, governance – the term that gives the DGA its title and that in principle has a strong democratic basis – is equated only with the principles of fair market-based sharing, with the purported aim, among other things, of feeding the insatiable appetite of “artificial intelligence”.

As for “data altruism,” it is addressed in terms of savings in transaction costs (in this case, costs related to obtaining consent), and the fact that altruism can be carried out “without asking for remuneration” does not change the market paradigm: a market exchange is a market exchange, even when it’s free.

By choosing a particular model of governance implicitly presented as self-evident, the Commission fails to recognize other possible models that could be adopted to oversee the movement of data. A few examples that could be explored, and which highlight the many aspects overlooked by the text, are:

  1. The creation of a European public data service
  2. Interconnecting the public services of each European state (based on the eIDAS or Schengen Information System (SIS) model; see also France’s public data service, which presently applies to data created as part of public services by public bodies)
  3. An alternative to a public service: public officials, like notaries or bailiffs, acting under powers delegated by a level of public authority
  4. A market-based alternative: pooling of private and/or public data, initiated and built by private companies.

What kind of data governance for what kind of society?

This text, however, highlights an interesting concept in the age of the “reign of data”: sharing. While data is commonly described as the black gold of the 21st century, the comparison overlooks an unprecedented and essential aspect: unlike water, oil or rare metals, which are finite resources, data is an infinite resource, constantly being created and ever-expanding.

How should data be pooled in order to be shared?

Should data from the public sector be made available in order to transfer its value creation to the private sector? Or should public and private data be pooled to move toward a new sharing equation? Will we see the emergence of hybrid systems of values that are evenly distributed or a pooling of values by individuals and companies? Will we see the appearance of a “private data commons”? And what control mechanisms will it include?

Will individuals or companies be motivated to share their data? This would call for quite a radical change in economic culture.

The stakes clearly transcend the simple technical and legal questions of data governance. Since the conditions are those of an infinite production of data, these questions make us rethink the traditional economic model.

It is truly a new model of society that must be discussed. Sharing and trust are good candidates for rethinking the society to come, as long as they are not reduced solely to a market rationale.

The text, in its current form, certainly offers points to consider, taking into account our changing societies and digital practices. Its terms, however, while attesting to a worthwhile effort at categorization adapted to these practices, require further attention and greater conceptual and operational precision.

While there is undoubtedly a risk of systematic commodification of data, including personal data, despite the manifest wish for sharing, it must also be recognized that the text includes possible advances. The terms of this collaborative writing are up to us – provided, of course, that all of the stakeholders are consulted, including citizens, subjects and producers of this data.


Claire Levallois-Barth, lecturer in Law at Télécom Paris, coordinator and co-founder of the VP-IP chair.

Mark Hunyadi, professor of moral and political philosophy at the Catholic University of Louvain (Belgium), member of the VP-IP chair.

Ivan Meseguer, European Affairs, Institut Mines-Télécom, co-founder of the VP-IP chair.

Facial recognition: what legal protection exists?

Over the past decade, the use of facial recognition has developed rapidly for both security and convenience purposes. This biometrics-based technology is used for everything from video surveillance to border controls and unlocking digital devices. This type of data is highly sensitive and is subject to a specific legal framework. Claire Levallois-Barth, a legal researcher at Télécom Paris and coordinator of the Values and Policies of Personal Information Chair at IMT, provides the context for protecting this data.

What laws govern the use of biometric data?

Claire Levallois-Barth: Biometric data “for the purpose of uniquely identifying a natural person” belongs to a specific category defined by two texts adopted by the 27 Member States of the European Union in April 2016: the General Data Protection Regulation (GDPR) and the Directive for Police and Criminal Justice Authorities. This category of data is considered highly sensitive.

The GDPR applies to all processing of personal data in both private and public sectors.

The Directive for Police and Criminal Justice Authorities pertains to processing carried out for purposes of prevention, detection, investigation, and prosecution of criminal offences or the execution of criminal penalties by competent authorities (judicial authorities, police, other law enforcement authorities). It specifies that biometric data must only be used in cases of absolute necessity and must be subject to provision of appropriate guarantees for the rights and freedoms of the data subject. This type of processing may only be carried out in three cases: when authorized by Union law or Member State law, when related to data manifestly made public by the data subject, or to protect the vital interests of the data subject or another person.

What principles has the GDPR established?

CLB: The basic principle is that collecting and processing biometric data is prohibited due to the significant risks of violating basic rights and freedoms, including the freedom to come and go anonymously. There are, however, a series of exceptions. The processing must fall under one of these exceptions (express consent from the data subject, protection of his or her vital interests, reasons of substantial public interest) and respect all of the obligations established by the GDPR. The key principle is that the use of biometric data must be strictly necessary and proportionate to the objective pursued. In certain cases, it is therefore necessary to obtain the individual’s consent, even when the facial recognition system is being used on an experimental basis. There is also the minimization principle, which systematically asks, “is there any less intrusive way of achieving the same goal?” In any case, organizations must carry out an impact assessment on people’s rights and freedoms.

What do the principles of proportionality and minimization look like in practice?

CLB: One example is when the Provence-Alpes-Côte d’Azur region wanted to experiment with facial recognition at two high schools in Nice and Marseille. The CNIL ruled that the system involving students, most of whom were minors, for the sole purpose of streamlining and securing access, was not proportionate to these purposes. Hiring more guards or implementing a badge system would offer a sufficient solution in this case.

Which uses of facial recognition have the greatest legal constraints?

CLB: Facial recognition can be used for various purposes. The purpose of authentication is to verify whether someone is who he or she claims to be. It is implemented in technology for airport security and used to unlock your smartphone. These types of applications do not pose many legal problems. The user is generally aware of the data processing that occurs, and the data is usually processed locally, by a card for example.

On the other hand, identification—which is used to pick out one person within a group—requires the creation of a database that catalogs individuals. The size of this database depends on the specific purposes, but there is a general tendency to increase the number of individuals it contains. For example, identification can be used to find wanted or missing persons, or to recognize friends on a social network. It requires increased vigilance because it can become extremely intrusive.

Lastly, facial recognition provides a means of individualizing a person. There is no need to identify the individual – the goal is “simply” to follow people’s movements through a store to assess their customer journey, or to analyze their emotions in response to an advertisement or while waiting at the checkout. The main argument advertisers use to justify this practice is that the data is quickly anonymized and no record is kept of the person’s face. Here, as in the case of identification, facial recognition usually occurs without the person’s knowledge.

How can we make sure that data is also protected internationally?

CLB: The GDPR applies in the 27 Member States of the European Union which have agreed on common rules. Data can, however, be collected by non-European companies. This is the case for photos of European citizens collected from social networks and news sites. This is one of the typical activities of the company Clearview AI, which has already established a private database of 3 billion photos.

The GDPR lays down a specific rule for personal data leaving European Union territory: it may only be exported to a country ensuring a level of protection deemed comparable to that of the European Union. Yet few countries meet this condition. A first option is therefore for the data importer and exporter to enter into a contract or adopt binding corporate rules. The other option, for data stored on servers on U.S. territory, was to build on the Privacy Shield agreement concluded between the Federal Trade Commission (FTC) and the European Commission. However, this agreement was invalidated by the Court of Justice of the European Union in the summer of 2020. We are currently in the midst of a legal and political battle. And the battle is complicated since data becomes much more difficult to control once it is exported. This explains why certain stakeholders, such as Thierry Breton (the current European Commissioner for Internal Market), have emphasized the importance of fighting to ensure European data is stored and processed in Europe, on Europe’s own terms.

Despite the risks and ethical issues involved, is facial recognition sometimes seen as a solution for security problems?

CLB: It can in fact be a great help when implemented in a way that respects our fundamental values. It all depends on the conditions of use. For example, if law enforcement officers know that a protest will be held at a specific time and place, potentially involving armed individuals, facial recognition can prove very useful at that specific time and place. However, it is a completely different scenario if it is used constantly, across an entire region and an entire population, in order to prevent shoplifting.

This summer, the London Court of Appeal ruled that an automatic facial recognition system used by Welsh police was unlawful. The ruling emphasized a lack of clear guidance on who could be monitored and accused law enforcement officers of failing to sufficiently verify whether the software used had any racist or sexist bias. Technological solutionism, a school of thought emphasizing new technology’s capacity to solve the world’s major problems, has its limitations.

Is there a real risk of this technology being misused in our society?

CLB: A key question we should ask is whether there is a gradual shift underway, caused by an accumulation of technology deployed at every turn. We know that video-surveillance cameras are installed on public roads, yet we do not know about the additional features that are gradually added to them, such as facial recognition or behavioral recognition. The European Convention on Human Rights, the GDPR, the Directive for Police and Criminal Justice Authorities, and the CNIL provide safeguards in this area.

However, they provide a legal response to an essentially political problem. We must prevent the accumulation of several types of intrusive technologies that come without prior reflection on the overall result, without taking a step back to consider the consequences. What kind of society do we want to build together? Especially within the context of a health and economic crisis. The debate on our society remains open, as do the means of implementation.

Interview by Antonin Counillon