
Sovereignty and digital technology: controlling our own destiny

Annie Blandin-Obernesser, IMT Atlantique – Institut Mines-Télécom

Facebook has an Oversight Board, a kind of “Supreme Court” that rules on content moderation disputes. Digital giants like Google are investing in the submarine telecommunications cable market. France has had to backpedal after choosing Microsoft to host the Health Data Hub.

These are just a few examples demonstrating that the way digital technology is developing threatens not only the economic independence and cultural identity of France and the European Union. Sovereignty itself is being called into question: threatened by the digital world, yet also finding its own forms of expression there.

What is most striking is that major non-European digital platforms are appropriating the attributes of sovereignty: a transnational territory (their market, where they set their own norms), a population of internet users, a language, virtual currencies, optimized taxation, and the power to issue rules and regulations. The attribute unique to the digital context is control over the production and use of data and over access to information. In all of this, the platforms compete with states and with the EU.

Sovereignty in all its forms called into question

The concept of digital sovereignty has matured since it was formalized around ten years ago as an objective to “control our own destinies online”. The current context is different from the one in which it emerged: sovereignty in general is now seeing a resurgence of interest, and even souverainism (an approach that prioritizes protecting sovereignty).

The topic has never been so politicized. Public debate is structured around themes such as state sovereignty in relation to the EU and EU law, economic independence, and even strategic autonomy on the world stage, citizenship and democracy.

In reality, digital sovereignty is built on regulating the digital sphere, controlling its material elements and creating a democratic space. Real action is needed, or digital sovereignty risks being taken hostage by overly theoretical debates. Hence the many initiatives that claim to contribute to it.

Regulation serving digital sovereignty

The legal framework of the online world is based on values that shape Europe’s path: protecting personal data and privacy, and promoting the general interest, for example in data governance.

The text that best represents the European approach is the General Data Protection Regulation (GDPR), adopted in 2016, which aims to allow citizens to control their own data, a form of individual sovereignty. This regulation is often presented as a success and a model to follow, even if that view must be put in perspective.

New European digital legislation for 2022

The current situation is marked by a proposal for new digital legislation comprising two regulations, due to be adopted in 2022.

This legislation aims to regulate platforms that connect service providers and users, or that rank or optimize content, goods or services offered or uploaded online by third parties: Google, Meta (Facebook), Apple, Amazon and many others besides.

The question of sovereignty is also present in this reform, as shown by the debate around the need to focus on GAFAM (Google, Amazon, Facebook, Apple and Microsoft).

On the one hand, the Digital Markets Act (the forthcoming European legislation) includes strengthened obligations for “gatekeeper” platforms, on which intermediary users and end users depend. This affects GAFAM, even if other companies may also be concerned, such as Booking.com or Airbnb. It all depends on the outcome of the current discussions.

On the other hand, the Digital Services Act is a regulation for digital services that will structure the responsibility of platforms, particularly with regard to the illegal content they may host.

Online space, site of confrontation

Having legal regulations is not enough.

“The United States have GAFA (Google, Amazon, Facebook and Apple), China has BATX (Baidu, Alibaba, Tencent and Xiaomi). And in Europe, we have the GDPR. It is time to no longer depend solely on American or Chinese solutions!” declared French President Emmanuel Macron during an interview on December 8, 2020.

Interview between Emmanuel Macron and Niklas Zennström (CEO of Atomico). Source: Atomico on Medium.

The international space is a site of confrontation between different kinds of sovereignty. Every individual wants to truly control their own digital destiny, but we must also reckon with the ambitions of countries that claim a general right to control or monitor their online space, such as the United States or China.

The EU and/or its member states, such as France, must therefore take action and promote sovereign solutions, or else risk becoming a “digital colony”.

Controlling infrastructure and strategic resources

With all the focus on intermediary services, there is not enough emphasis placed on the industrial dimension of this topic.

And yet the most important challenge lies in controlling vital infrastructure and telecommunications networks. The question of submarine cables, which carry 98% of the world’s digital data, receives far less media attention than the issue of 5G equipment and resistance to Huawei. Yet it demonstrates the need to promote our cable industry in the face of the hegemony of foreign companies and the arrival of giants such as Google or Facebook in the sector.

The adjective “sovereign” is also applied to other strategic resources. For example, the EU wants to secure its supply of semiconductors, for which it currently depends heavily on Asia. This is the purpose of the European Chips Act, which aims to create a European ecosystem for these components. For Ursula von der Leyen, “it is not only a question of competitiveness, but also of digital sovereignty.”

There is also the question of a “sovereign” cloud, which has proved difficult to implement. Many conditions must be met to establish sovereignty, including territorialization of the cloud, trust and data protection. With this objective in mind, France has created the SecNumCloud label and announced substantial funding.

The adjective “sovereign” is also used to describe certain kinds of data, such as geographic data, to which states should be able to guarantee their own access without depending on anyone. More generally, a consensus has emerged around the need to control data and access to information, particularly in areas where the challenge of sovereignty is greatest, such as health, agriculture, food and the environment. The development of artificial intelligence is closely tied to the status of this data.

Time for alternatives

Does all this mean facilitating the emergence of major European or national actors and/or strategic actors, start-ups and SMEs? Certainly, provided such actors show better intentions than those that shamelessly exploit personal data, for example.

A pure alternative is difficult to bring about. This is why partnerships are developing, though they remain highly criticized, to offer cloud hosting for example, such as the collaboration between Thales and OVHcloud announced in October 2021.

On the other hand, there is reason to hope. Open-source software is a good example of a credible alternative to American private technology firms. It needs to be better promoted, particularly in France.

Lastly, cybersecurity and cyberdefense are critical to sovereignty. The threat is real, with attacks coming in particular from Russia and China. Cybersecurity is one of the major sectors in which France is currently investing heavily and positioning itself as a leader.

Sovereignty of the people

To conclude, it should be noted that challenges relating to digital sovereignty are present in all human activities. One of the major revelations occurred in 2005, in the area of culture, when Jean-Noël Jeanneney observed that Google had defied Europe by creating Google Books and digitizing the continent’s cultural heritage.

The recent period has reconnected with this vision: cultural and democratic issues are clearly essential in this time of online misinformation and its multitude of negative consequences, particularly for elections. This means placing citizens at the center of mechanisms and democratizing the digital world by freeing individuals from the clutches of the internet giants, whose control is not limited to economics and sovereignty. The fabric of the major platforms is woven from the human cognitive system, attention and freedom. In this sense, the sovereignty of the people is synonymous with resistance.

Annie Blandin-Obernesser, Law professor, IMT Atlantique – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here (in French).


Privacy as a business model

The original version of this article (in French) was published in quarterly newsletter no. 22 (October 2021) of the Chair “Values and Policies of Personal Information”.

The usual approach

The GDPR is the most visible text on this topic. It is not the oldest, but it is at the forefront for a simple reason: it provides for huge sanctions (fines of up to 4% of a group’s consolidated worldwide turnover). Consequently, this regulation is often treated as a threat, and we seek to protect ourselves from it as a legal risk.

The approach is always the same: list all data processed, then find a legal framework that allows you to keep to the same old habits. This is what produces the long, dry texts that the end-user is asked to agree to with a click, most often without reading. And abracadabra, a legal magic trick – you’ve got the user’s consent, you can continue as before.

This way of doing things poses various problems.

  1. It implies that privacy is a costly position, a risk, that it is undesirable. Communication around the topic can create a disastrous impression. The message on screen says one thing (in general, “we value your privacy”), while reality says the opposite (“sign the 73-page-long contract now, without reading it”). The user knows very well when signing that everyone is lying. No, they haven’t read it. And no, nobody is respecting their privacy. It is a phony contract signed between liars.
  2. The user is positioned as an enemy. Someone you have to get to sign a document, more or less under duress, in which they undertake not to sue you, is an enemy. This creates a relationship of distrust with the user.

But we could see these texts with a completely different perspective if we just decided to change our point of view.

Placing the user at the center

The first approach means satisfying the legal team (avoiding lawsuits) and the IT department (a few banners and buttons to add, but in reality nothing changes). What about trying to satisfy the end user?

Let us consider that privacy is desirable, preferable. Imagine that we are there to serve users, rather than trying to protect ourselves from them.

We are providing a service to users, and in so doing, we process their personal data. Not everything that is available to us, but only what is needed for said service. Needed to satisfy the user, not to satisfy the service provider.

And since we have data about the user, we may as well show it to them, and allow them to take action. By displaying things in an understandable way, we create a phenomenon of trust. By giving power back to the user (to delete and correct, for example) we give them a more comfortable position.

You can guess what is coming: by placing the user back in the center, we fall naturally and logically back in line with GDPR obligations.

And yet this part of the legislation is far too often misunderstood. The GDPR defines a certain number of cases in which processing personal data is authorized. Firstly, at the user’s request, to provide the service they are seeking. Secondly, for a whole range of legal obligations. Thirdly, for a few well-defined exceptions (research, police, the law, absolute emergency, etc.). And finally, if there really is no good reason, you have to ask the user for explicit consent.

If we are asking for the user’s consent, it is because we are in the process of damaging their privacy in a way that does not serve them. Consent is not the first condition of personal data processing; on the contrary, it is the last. Only if there really is no legitimate basis must permission be asked before processing the data.
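
To make the ordering concrete, here is a minimal sketch in Python (hypothetical names and deliberately simplified logic; an illustration of the argument above, not a model of the GDPR’s full text and certainly not legal advice). Consent appears only as the final branch, once every other basis has failed:

```python
# Illustrative sketch only: a simplified ordering of lawful bases,
# reflecting the paragraph above. Names and structure are hypothetical.
from enum import Enum, auto

class LawfulBasis(Enum):
    CONTRACT = auto()          # needed to provide the service the user asked for
    LEGAL_OBLIGATION = auto()  # required by a legal obligation
    OTHER_EXCEPTION = auto()   # well-defined exceptions: research, police, etc.
    CONSENT = auto()           # last resort: explicit consent from the user

def choose_basis(needed_for_service: bool,
                 required_by_law: bool,
                 covered_by_exception: bool) -> LawfulBasis:
    """Return the first applicable basis; consent only if nothing else fits."""
    if needed_for_service:
        return LawfulBasis.CONTRACT
    if required_by_law:
        return LawfulBasis.LEGAL_OBLIGATION
    if covered_by_exception:
        return LawfulBasis.OTHER_EXCEPTION
    # "Consent is not the first condition ... it is the last."
    return LawfulBasis.CONSENT
```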

Once this point has been raised, the key objection remains: the entire economic model of the digital world involves pillaging people’s private lives, to model and profile them, sell targeted advertising for as much money as possible, and predict user behavior. In short, if you want to exist online, you have to follow the American model.

Protectionism

Let us try another approach. Consider that the GDPR is a text that protects Europeans, imposing our values (like respect for privacy) on a world that ignores them. The legislation tells us that companies that do not respect these values are not welcome in the European Single Market. From this point of view, the GDPR has a clear protectionist effect: European companies respect the GDPR, while others do not. A European digital ecosystem can come into being, with protected access to the most profitable market in the world.

From this perspective, privacy is seen as a positive thing for both companies and users. A bit like how a restaurant owner handles hygiene standards: a meticulous, serious approach is needed, but it is important to do so to protect customers, and it is in their interest to have an exemplary reputation. Furthermore, it is better if it is mandatory, so that the bottom-feeders who don’t respect the most basic rules disappear from the market.

And here, it is exactly the same mechanism. Consider that users are allies and put them back in the center of the game. If we have data on them, we may as well tell them, show them, and so on.

Here, a key element comes into play. As long as Europe’s digital industry remains stuck on the American model and rejects the GDPR, it is in the opposite position. The business world does not like complying with standards when it does not understand their utility. It negotiates with the supervisory authorities to request softer rules, delays, adjustments, exceptions, etc. In doing so, it asks for the weapon created to protect European companies to be disarmed and left on standby.

It is a Nash equilibrium. It is in the interest of all European companies to use the GDPR’s protectionist aspect to their advantage, but each believes that if it moves first, it will lose out to those that do not respect the standards. Normally, escaping this kind of toxic equilibrium takes a market regulation initiative, ideally a concerted effort to push the movement in the right direction. For now, the closest thing to such an initiative is the increasingly heavy sanctions being handed out all over Europe.
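
A stylized two-firm payoff matrix (illustrative numbers, not taken from the source) captures the structure of this argument. Each firm chooses to embrace the GDPR or to stick with the American model; the first number in each pair is the row firm’s payoff, the second the column firm’s:

$$
\begin{array}{c|cc}
 & \text{embrace GDPR} & \text{American model} \\ \hline
\text{embrace GDPR} & (3,\,3) & (0,\,2) \\
\text{American model} & (2,\,0) & (1,\,1)
\end{array}
$$

Both (embrace, embrace) and (American model, American model) are Nash equilibria: mutual adoption pays best, but neither firm gains by moving alone, which is why a coordinated regulatory push is needed to shift the market from the bad equilibrium to the good one.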

Standing out from the crowd

Of course, today’s digital reality is often not that simple. Data travels and changes hands; it is collected in one place and exploited in another. To show users how their data is processed, many things often need to be reworked, and the process needs to be centered on the end user rather than on the activity.

And even so, there are cases where this kind of transparent approach is impossible. Take, for example, data collected for targeted ad profiling. This data is nearly always transmitted to third parties, to be used in ways that have no direct connection with the service the user signed up for. This is the typical use case where we try to obtain user consent (without which the processing is illegal), but where it is clear that transparency is impossible and informed consent unlikely.

Two major categories are taking shape. The first includes digital services that can place the user at the center and present themselves as allies by demonstrating a very high level of transparency. The second comprises digital services that are incapable of presenting themselves as allies.

So clearly, a company’s position on privacy can be a positive feature that sets it apart. By aiming to defend users’ interests, we improve compliance with the regulation instead of trying to comply without understanding. We form an alliance with the user. And that is precisely what changes everything.

Benjamin Bayart

The Alicem app: a controversial digital authentication system

Laura Draetta, Télécom Paris – Institut Mines-Télécom and Valérie Fernandez, Télécom Paris – Institut Mines-Télécom

Some digital innovations, although considered to be of general interest, are met with distrust. A responsible innovation approach could anticipate and prevent such confidence issues.

“Alicem” is a case in point. Alicem is a smartphone app developed by the State to offer the French people a national identity solution for online administrative procedures. It uses facial recognition as the technological solution to activate a user account, allowing the person to prove their digital identity in a secure way.

After being authorized by decree on May 13, 2019, and the launch a few months later of a prototype trial among a group of selected users, Alicem was due to be released to the general public by the end of 2019.

However, in July of the same year, La Quadrature du Net, an association for the defense of rights and freedoms on the Internet, filed an appeal before the Council of State to have the decree authorizing the system annulled. In October 2019, the story was picked up by the mainstream press and the app was brought to the attention of the general public. Since then, Alicem has been at the center of a public controversy over its technological qualities, potential misuses and regulation, leading to it being put on hold while the uncertainties are dispelled.

At the start of summer 2020, the State announced that Alicem would be released at the end of autumn, more than a year later than planned in the initial roadmap. Citing the controversy over the use of facial recognition in the app, certain media actors argued that it was still not ready: it was undergoing further ergonomic and IT-security improvements, and a call for tenders was to be launched to build “a more universal and inclusive offer” incorporating, among other things, activation mechanisms other than facial recognition.

Controversy as a form of “informal” technology assessment

The case of Alicem is similar to that of other controversial technological innovations pushed by the State, such as Linky meters, 5G and the StopCovid app. It leads us to consider controversy as a form of informal technology assessment that challenges the formal techno-scientific assessments on which public decisions are based. It also raises the issue of a responsible innovation approach.

Several methods have been developed to evaluate technological innovations and their potential effects. In France, technology assessment – a form of policy research that examines the short- and long-term consequences of innovation – is commonly used by public actors when it comes to technological decisions.

In this assessment method, the evaluation is entrusted to scientific experts and disseminated among the general public at the launch of the technology. The biggest challenge with this method is supporting the development of public policies while managing the uncertainties associated with any technological innovation through evidence-based rationality. It must also “educate” the public, whose mistrust of certain innovations may be linked to a lack of information.

The approach is perfectly viable for informing decision-making when there is no controversy or little mobilization of opponents. It is less pertinent, however, when the technology is controversial. A technological assessment focused exclusively on scholarly expertise runs the risk of failing to take account of all the social, ethical and political concerns surrounding the innovation, and thus not being able to “rationalize” the public debate.

Participation as a pillar of responsible innovation

Citizen participation in technology assessment – whether to generate knowledge, express opinions or contribute to the creation and governance of a project – is a key element of responsible innovation.

Participation may be seen as a strategic tool for “taming” opponents or skeptics by getting them on board or as a technical democracy tool that gives voice to ordinary citizens in expert debates, but it is more fundamentally a means of identifying social needs and challenges upstream in order to proactively take them into account in the development phase of innovations.

In all cases, it relies on work carried out beforehand to identify the relevant audiences (users, consumers, affected citizens etc.) and choose their spokespersons. The definition of the problem, and therefore the framework of the response, depends on this identification. The case of Linky meters is an emblematic example: anti-radiation associations were not included in the discussions prior to deployment because they were not deemed legitimate to represent consumers; consequently, the figure of the “affected citizen” was nowhere to be seen during the discussions on institutional validation but is now at the center of the controversy.

Experimentation in the field to define problems more effectively

Responsible innovation can also be characterized by a culture of experimentation. During experimentation in the field, innovations are confronted with a variety of users and undesired effects are revealed for the first time.

However, the question of experimentation is too often limited to testing technical aspects. In a responsible innovation approach, experimentation is the place where different frameworks are defined, through questions from users and non-users, and where tensions between technical efficiency and social legitimacy emerge.

If we consider the Alicem case through the prism of this paradigm, we are reminded that technological innovation processes carried out in a confined manner – first through the creation of devices within the ecosystem of paying clients and designers, then through trials of artifacts already considered stable – inevitably lead to acceptability problems. Launching a technological innovation without user participation in its development undoubtedly makes the process faster, but may come at the cost of its legitimacy and even lead to a loss of confidence in its promoters.

In the case of Alicem, the experiments carried out among “friends and family” with the aim of optimizing the user experience are a case in point. This experimentation focused more on improving the technical qualities of the app than on taking account of its socio-political dimensions (risk of infringing upon individual freedoms, loss of anonymity etc.). As a result, when the matter was reported in the media it was presented through an amalgamation of facial recognition use cases and anxiety-provoking arguments (“surveillance”, “freedom-killing technology”, “China”, “social credit” etc.), without, however, presenting more common uses of facial recognition that carry the same risks as those being questioned.

The acceptability problems encountered by Alicem are not circumstantial ones unique to a specific technological innovation; they must be understood as structural markers of how contemporary society functions. For although the “unacceptability” of this emerging technology is a threat to its promoters and a hindrance to its adoption and diffusion, it is above all indicative of a lack of confidence in the State that outweighs the intrinsic qualities of the innovation itself.

This text presents the opinions stated by the researchers Laura Draetta and Valérie Fernandez during their presentation to the National Assembly’s fact-finding mission on digital identity in December 2019. It is based on the case of the biometric authentication app Alicem, which sparked controversy in the media sphere from its first trials.

Laura Draetta, Lecturer in Sociology, joint holder of the Responsibility for Digital Identity Chair and Research Fellow at the Center for Science, Technology, Medicine & Society, University of California, Berkeley, Télécom Paris – Institut Mines-Télécom, and Valérie Fernandez, Professor of Economics, holder of the Responsibility for Digital Identity Chair, Télécom Paris – Institut Mines-Télécom

This article was republished from The Conversation under the Creative Commons license. Read the original article here.