privacy, data protection regulation

Privacy as a business model

The original version of this article (in French) was published in quarterly newsletter no. 22 (October 2021) from the Chair “Values and Policies of Personal Information”.

The usual approach

The GDPR is the most visible text on this topic. It is not the oldest, but it is at the forefront for a simple reason: it includes huge sanctions (up to 4% of consolidated international group turnover for companies). Consequently, this regulation is often treated as a threat. We seek to protect ourselves from legal risk.

The approach is always the same: list all data processed, then find a legal framework that allows you to keep to the same old habits. This is what produces the long, dry texts that the end-user is asked to agree to with a click, most often without reading. And abracadabra, a legal magic trick – you’ve got the user’s consent, you can continue as before.

This way of doing things poses various problems.

  1. It implies that privacy is a costly position, a risk, that it is undesirable. Communication around the topic can create a disastrous impression. The message on screen says one thing (in general, “we value your privacy”), while reality says the opposite (“sign the 73-page-long contract now, without reading it”). The user knows very well when signing that everyone is lying. No, they haven’t read it. And no, nobody is respecting their privacy. It is a phony contract signed between liars.
  2. The user is positioned as an enemy. Someone you have to pressure, more or less forcibly, into signing a document in which they undertake not to sue is an enemy. It creates a relationship of distrust with the user.

But we could see these texts with a completely different perspective if we just decided to change our point of view.

Placing the user at the center

The first approach means satisfying the legal team (avoiding lawsuits) and the IT department (a few banners and buttons to add, but in reality nothing changes). What about trying to satisfy the end user?

Let us consider that privacy is desirable, preferable. Imagine that we are there to serve users, rather than trying to protect ourselves from them.

We are providing a service to users, and in so doing, we process their personal data. Not everything that is available to us, but only what is needed for said service. Needed to satisfy the user, not to satisfy the service provider.

And since we have data about the user, we may as well show it to them, and allow them to take action. By displaying things in an understandable way, we create a phenomenon of trust. By giving power back to the user (to delete and correct, for example) we give them a more comfortable position.

You can guess what is coming: by placing the user back in the center, we fall naturally and logically back in line with GDPR obligations.

And yet, this part of the legislation is far too often misunderstood. The GDPR allows for a certain number of cases under which it is authorized to manipulate personal user data. Firstly, upon their request, to provide the service that is being sought. Secondly, for a whole range of legal obligations. Thirdly, for a few well-defined exceptions (research, police, law, absolute emergency, etc.). And finally, if there really is no good reason, you have to ask explicit consent from the user.
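The order of lawful bases described above can be sketched as a simple decision procedure. This is an illustrative sketch only, not legal advice and not anyone's actual compliance code; the basis names and the simplified four-item list are assumptions of the example:

```python
# Illustrative sketch (not legal advice): the GDPR's lawful bases for
# processing, checked in the order the article describes -- consent last.
# The basis names and this simplified list are assumptions of the example.

LAWFUL_BASES = [
    "contract",                  # needed to provide the service the user asked for
    "legal_obligation",          # required by law
    "vital_or_public_interest",  # the well-defined exceptions (research, emergency...)
    "legitimate_interest",
]

def lawful_basis(processing: dict) -> str:
    """Return the first applicable basis; fall back to explicit consent."""
    for basis in LAWFUL_BASES:
        if processing.get(basis):
            return basis
    # No legitimate motive left: only now must the user be asked.
    return "explicit_consent_required"

# Processing needed to deliver the service itself: no consent banner.
print(lawful_basis({"contract": True}))  # contract
# Ad profiling serves the provider, not the user: consent is the last resort.
print(lawful_basis({}))                  # explicit_consent_required
```

The point of the ordering is exactly the article's: consent is not the first condition of processing but the fallback when no legitimate motive applies.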

If we are asking the user’s consent, it is because we are in the process of damaging their privacy in a way that is not serving them. Consent is not the first condition of all personal data processing. On the contrary, it is the last. If there really is no legitimate motive, permission must be asked before processing the data.

Once this point has been raised, the key objection remains: the entire economic model of the digital world involves pillaging people’s private lives, to model and profile them, sell targeted advertising for as much money as possible, and predict user behavior. In short, if you want to exist online, you have to follow the American model.

Protectionism

Let us try another approach. Consider that the GDPR is a text that protects Europeans, imposing our values (like respect of privacy) in a world that ignores them. The legislation tells us that companies that do not respect these values are not welcome in the European Single Market. From this point of view, the GDPR has a clear protectionist effect: European companies respect the GDPR, while others do not. A European digital ecosystem can come into being with protected access to the most profitable market in the world.

From this perspective, privacy is seen as a positive thing for both companies and users. A bit like how a restaurant owner handles hygiene standards: a meticulous, serious approach is needed, but it is important to do so to protect customers, and it is in their interest to have an exemplary reputation. Furthermore, it is better if it is mandatory, so that the bottom-feeders who don’t respect the most basic rules disappear from the market.

And here, it is exactly the same mechanism. Consider that users are allies and put them back in the center of the game. If we have data on them, we may as well tell them, show them, and so on.

Here, a key element comes into play. Because, as long as Europe’s digital industry remains stuck on the American model and rejects the GDPR, it is in the opposite position. The business world does not like to comply with standards when it does not understand their utility. It debates with inspecting authorities to request softer rules, delays, adjustments, exceptions, etc. And so, it asks that the weapon created to protect European companies be disarmed and left on standby.

It is a Nash equilibrium. It is in the interest of all European companies to use the GDPR’s protectionist aspect to their advantage, but each believes that if it moves first, it will lose out to those who do not respect the standards. Normally, to escape this kind of toxic equilibrium, it takes a market regulation initiative. Ideally, a concerted effort to stimulate movement in the right direction. For now, the closest thing to a regulatory initiative is the increasingly heavy sanctions being dealt out all over Europe.
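The toxic equilibrium described above can be made concrete with a toy payoff matrix. The numbers below are invented purely for illustration; the shape is a classic stag hunt, in which mutual compliance pays best, but a lone complier loses to a non-compliant rival, so mutual non-compliance is also stable:

```python
# Toy stag-hunt payoff matrix (numbers invented for illustration): both
# firms complying with the GDPR pays best, but a lone complier loses to a
# non-compliant rival, so "nobody complies" is also a Nash equilibrium.

PAYOFF = {  # (row strategy, column strategy) -> (row payoff, column payoff)
    ("comply", "comply"): (3, 3),
    ("comply", "defect"): (0, 2),
    ("defect", "comply"): (2, 0),
    ("defect", "defect"): (1, 1),
}
STRATEGIES = ("comply", "defect")

def is_nash(row: str, col: str) -> bool:
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    r_pay, c_pay = PAYOFF[(row, col)]
    row_ok = all(PAYOFF[(alt, col)][0] <= r_pay for alt in STRATEGIES)
    col_ok = all(PAYOFF[(row, alt)][1] <= c_pay for alt in STRATEGIES)
    return row_ok and col_ok

equilibria = [cell for cell in PAYOFF if is_nash(*cell)]
print(equilibria)  # both mutual compliance and mutual defection are stable
```

With these payoffs there are two equilibria, and nothing in the game itself moves the players from the bad one to the good one; that coordination push is what the article calls a market regulation initiative.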

Standing out from the crowd

Of course, the digital reality of today is often not that simple. Data travels and changes hands; it is collected in one place but exploited in another. To successfully show users the processing of their data, many things often need to be reworked. The process needs to be focused on the end user rather than on the activity.

And even so, there are some cases where this kind of transparent approach is impossible. For example, the data that is collected to be used for targeted ad profiling. This data is nearly always transmitted to third parties, to be used in ways that are not in direct connection with the service that the user subscribed to. This is the typical use-case for which we try to obtain user consent (without which the processing is illegal) but where it is clear that transparency is impossible and informed consent is unlikely.

Two major categories are taking shape. The first includes digital services that can place the user at the center, and present themselves as allies, demonstrating a very high level of transparency. And the second represents digital services that are incapable of presenting themselves as allies.

So clearly, a company’s position on the question of privacy can be a positive feature that sets them apart. By aiming to defend user interests, we improve compliance with regulation, instead of trying to comply without understanding. We form an alliance with the user. And that is precisely what changes everything.

Benjamin Bayart

RI-URBANS

Improving air quality with decision-making tools

Launched in October for a four-year period, the RI-URBANS project aims to strengthen synergies between European air quality monitoring networks and research infrastructures in the field of atmospheric sciences. IMT Nord Europe is a partner for this project, which received up to €8 million of funding from the European Union. Interview with Stéphane Sauvage, professor, and Thérèse Salameh, R&D engineer.

European project RI-URBANS[1] was submitted in response to a call for proposals dedicated to research infrastructures (RI) capable of tackling the challenges set by the European Green Deal. What is it all about?

Stéphane Sauvage The EU aims to play a leading role in fighting climate change at a global level. In a communication dated 14 July 2021, the 27 member states committed to turning the EU into the first climate-neutral continent by 2050. To achieve this, they committed to reducing their greenhouse gas emissions by at least 55% by 2030, compared to 1990 levels, and to implementing a series of initiatives related to the climate, energy, agriculture, industry, environment, oceans, etc. Specifically, the Green Deal aims to protect our biodiversity and ecosystems, transition to a circular economy and reduce air, water and soil pollution. RI-URBANS falls under this initiative to reduce air pollution.

What is the goal of RI-URBANS?

S.S. Within this project, the objective is to connect the Aerosol, Clouds, and Trace gases Research InfraStructure (ACTRIS), Integrated Carbon Observation System (ICOS) and In-service Aircraft for a Global Observing System (IAGOS) – combining stationary and mobile observation and exploration platforms, calibration centers and data centers – with local stakeholders, such as air quality monitoring agencies, political decision-makers or regional stakeholders. The main objective is to provide them with high quality data and develop innovative service tools allowing them to better evaluate the health impact, identify sources of pollution in real time and forecast atmospheric pollution, in order to help decision-makers in improving air quality.

How will these tools be developed?

S.S. RI-URBANS will focus on ambient nanoparticles and atmospheric particulate matter, their sizes, constituents, source contributions, and gaseous precursors, evaluating novel air quality parameters, source contributions, and their associated health effects to demonstrate the European added value of implementing such service tools. To determine which areas are of interest, we have first to collect the available data on these variables and make it findable, accessible, interoperable and reusable, while offering decision-makers services and tools.

In order to test these services, a pilot phase will be deployed in nine European cities (Athens, Barcelona, Birmingham, Bucharest, Helsinki, Milan, Paris, Rotterdam-Amsterdam and Zurich). These cities have been identified as industrial, port, airport and road hotspots with significant levels of pollution, and they have established air quality monitoring networks and research infrastructure units. In Paris, for example, the atmospheric research observatory SIRTA is a unit of ACTRIS and one of the most prominent sites in Europe, offering the instrumentation, equipment and hosting capacities needed to study atmospheric physico-chemical processes.

What expertise do the IMT Nord Europe researchers bring?

Thérèse Salameh IMT Nord Europe research teams have internationally recognized expertise in the field of reactive trace gases, which can lead to the formation of secondary compounds, such as ozone or secondary organic aerosols. IMT Nord Europe’s participation in this project is connected to its significant involvement in the ACTRIS (Aerosol, Clouds, and Trace Gases Research InfraStructure) RI as a unit of the European Topical Center for reactive trace gases in situ measurements (CiGas). ACTRIS is a distributed RI bringing together laboratories of excellence and observation and exploration platforms, to support research on climate and air quality. It helps improve understanding of past, present and future changes in atmospheric composition and the physico-chemical processes that contribute to regional climate.

Who are the partners of RI-URBANS?

T.S. The project brings together 28 institutions (universities and research institutes) from 14 different countries. The three partners in France are the National Centre for Scientific Research (CNRS), the National Institute for Industrial Environment and Risks (INERIS) and Institut Mines-Télécom (IMT). For this project, IMT Nord Europe researchers are collaborating in particular with the Swiss Federal Laboratories for Materials Science and Technology (Empa), the Paul Scherrer Institute (PSI), the Spanish National Research Council (CSIC) and INERIS.

The project has just been launched. What is the next step for IMT Nord Europe?

T.S. In the coming months, we will conduct an assessment collecting observation data for reactive trace gases potentially available in main European cities. We will then need to evaluate the quality and relevance of the collected information, before applying source apportionment models to identify the main sources of pollution in these European cities.

[1] This project is funded by Horizon 2020, the European Union framework program for research and innovation (H2020), under grant agreement ID 101036245. It is jointly coordinated by CSIC (Spain) and the University of Helsinki (Finland).

Read on I’MTech

Cleaning up polluted tertiary wastewater from the agri-food industry with floating wetlands

In 2018, IMT Atlantique researchers launched the FloWAT project, based on a hydroponic system of floating wetlands. It aims to reduce polluting emissions from treated wastewater into the discharge site.

Claire Gérente, researcher at IMT Atlantique, has been coordinating the FloWAT decontamination project, funded by the French National Agency for Research (ANR), since its creation. The main aim of the initiative is to provide complementary treatment for tertiary wastewater from the agri-food industry, using floating wetlands. Tertiary wastewater is effluent that undergoes a final phase in the water treatment process to eliminate residual pollutants. It is then drained into the discharge site, an aquatic ecosystem where treated wastewater is released.

These wetlands act as filters for particle and dissolved pollutants. They can easily be added to existing waste stabilization pond systems in order to further treat this water. One of this project’s objectives is to improve on conventional floating wetlands to increase phosphorus removal, or even collect it for reuse, thereby reducing the pressure on this non-renewable resource.

In this context, research is being conducted around the use of a particular material, cellular concrete, to allow phosphorus to be recovered. “Phosphorus removal is of great environmental interest, particularly as it reduces the eutrophication of natural water sources that are discharge sites for treated effluent,” states Gérente. Eutrophication is a process characterized by an increase in nitrogen and phosphorus concentration in water, leading to ecosystem disruption.

Floating wetlands: a nature-based solution

The floating wetland system involves covering an area of water, typically a pond, with plants placed on a floating bed, specifically sedges. The submerged roots act as filters, retaining the pollutants found in the water via various physical, chemical and biological processes. This mechanism is called phytopurification.

Floating wetlands are part of an approach known as nature-based solutions, whereby natural systems, less costly than conventional technologies, are implemented to respond to ecological challenges. To function efficiently, the most important thing is to “monitor that the plants are growing well, as they are the site of decontamination,” emphasizes Gérente.

In order to meet the project objectives, a pilot study was set up on an industrial abattoir and meat processing site. After being biologically treated, real agri-food effluent is discharged into four pilot ponds, three of which are covered with floating wetlands of various sizes, and one of which is left uncovered as a control. The experimental site is entirely automated and can be controlled remotely to facilitate supervision.

Performance monitoring is undertaken for the treatment of organic matter, nitrogen, phosphorus and suspended matter. As well as data on the incoming and outgoing water quality, physico-chemical parameters and climate data are constantly monitored. The outcome for pollutants in the different components of the treatment system will be identified by sampling and analysis of plants, sediment and phosphorus removal material.

These floating wetlands will be the first that are easy to dismantle and recycle, enhanced for phosphorus removal and even recovery, and able to treat suspended matter, carbon pollution and nutrients.

Photograph of the experimental system

Improving compliance with regulation

In 1991, the French government established a limit on phosphorus levels to reduce water pollution, in order to preserve biodiversity and prevent algal bloom, which is when one or several algae species grow rapidly in an aquatic system.

The floating wetlands developed by IMT Atlantique researchers could allow these thresholds to be better respected, by improving capacities for water treatment. Furthermore, they are part of a circular economy approach, as beyond collecting phosphorus for reuse, the cellular concrete and polymers used as plant supports are recyclable or reusable.

Further reading on I’MTech: Circular economy, environmental assessment and environmental budgeting

To create these wetlands, you simply have to place the plants on the discharge ponds. This makes the technique cheap and easy to implement. However, while such systems integrate rather well into the landscape, they are not suitable for all environments. The climate in northern countries, for example, may slow down or impair how the plants function. Furthermore, results take longer to obtain with natural methods like floating wetlands than with conventional methods. Nearly 7,000 French agri-food companies have been identified as potential users of these floating wetlands. Nevertheless, the FloWAT coordinator reminds us that “this project is a feasibility study; our role is to evaluate the effectiveness of floating wetlands as a filtering system. We will have to wait until the project finishes in 2023 to find out if this promising treatment system is effective.”

Rémy Fauvel

Antoine Fécant

Antoine Fécant, winner of the 2021 IMT-Académie des Sciences Young Scientist Prize

Antoine Fécant, new energy materials researcher at IFP Energies Nouvelles, has worked on many projects relating to solar and biosourced fuel production and oil refining. His work has relevance for the energy transition and was recognized this year by the IMT-Académie des Sciences Young Scientist Prize.

“Energy is a central part of our lifestyles,” affirms Antoine Fécant, new energy materials researcher at IFP Energies Nouvelles. “When I was younger, I wanted to work in this area and my interest in chemistry convinced me to pursue this field. I have always been attracted by the beauty of science, and I find even greater satisfaction in directing my work so that it is concretely useful for our society.” His research since 2004 has mainly focused on materials that speed up chemical processes, known as catalysts.

Antoine Fécant’s initial research was based on a class of catalysts called zeolites. Zeolites are materials mainly made of silicon, aluminum and oxygen. They are found naturally, but it is also possible and often preferable to synthesize them. These minerals contain networks of pores that can be used to limit the quantity of by-products generated. Zeolites are useful for optimizing the yield of chemical reactions and energy consumption, and thus limiting the CO2 and waste produced.

The main idea of Antoine Fécant’s thesis, undertaken between 2004 and 2007, was to develop a unique methodology to generate new zeolites. For this, he used a multidisciplinary approach and chose to pair combinatorial chemistry with molecular modeling to “identify ways to synthesize zeolites depending on the kind of porous structure desired,” he describes. “This methodology allowed us to define streamlining criteria and therefore very significantly speed up research and development work in this area,” Antoine Fécant continues.

Fifteen years ago, this approach was completely innovative and won him the “Yves Chauvin” thesis prize in 2008. Now, however, it is widespread in the fields of chemistry, biochemistry and genomics, showing the trailblazing nature of the researcher’s approach.

Improving solar energy production and recycling CO2

After completing his PhD, Antoine Fécant took the post of research engineer at IFP Énergies Nouvelles. Continuing to pursue his goal of offering technical solutions to contain greenhouse gas emissions, in 2011, the researcher began a project aiming to develop materials and processes to recycle CO2 using solar energy. This work won him the 2012 Young Researcher Award from the City of Lyon. The initiative stems from the intermittent nature of solar power. It is based on the idea that a phase directly converting/storing this energy flow as an easily usable energy source would allow it to be better exploited.

Further reading on I’MTech: What is renewable energy storage?

“To get around this disadvantage, we wanted to find a way to store solar energy as a fuel,” states Antoine Fécant. “This would make it possible to create energy reserves in a form that is already known and usable in various common applications, such as heating, vehicles or in the industrial and transport sectors,” he adds. To achieve this goal, the researcher based his research work on the principle of natural photosynthesis: capturing light energy to convert CO2 and water to more complex carbon molecules that can be used as energy.

In order to artificially transform solar energy into chemical energy, Antoine Fécant and his team, in collaboration with academic actors, developed several families of specific materials. Known as photocatalysts, these materials have been optimized by researchers in terms of their characteristics and structures on a nanometric scale. One of the compounds developed is a family of monolithic materials made from silicon and titanium dioxide, allowing for better use of incident photons through a “nano-mirror” effect. Other families of materials with composite architecture are able to reproduce the energetic processes in multiple complex phases of natural photosynthesis. Lastly, entirely new crystalline structures give greater mobility to the electrical charges needed to convert CO2.

According to Antoine Fécant, “these materials are interesting, but at present, they only allow us to overcome a single obstacle at a time, out of many. Now, we have to work on creating synergy between these new catalyst systems to efficiently perform CO2 photoconversion and reach an energy yield threshold of at least 10% for this means of energy production to be considered viable.” The researcher believes it will still be several decades before this process can be deployed on an industrial scale.

Catalyzing the production of biosourced and fossil fuels

Antoine Fécant has also undertaken research to reduce the environmental impact of the use of conventional fuels and their manufacturing processes. For this, he designed higher-performing catalysts that help to improve the energy efficiency of processes and thereby limit related CO2 emissions. The researcher has also participated in discovering catalysts that increase yields in the Fischer-Tropsch process, a key phase in transforming lignocellulosic biomass to produce advanced biofuels. Furthermore, these fuels could contribute to limiting the aviation sector’s carbon footprint.

By winning the IMT-Académie des Sciences Young Scientist Award, Antoine Fécant hopes to shine a light on research into solar fuel and hopes that “this area will be more highly valued”. Such fuels could truly represent a promising avenue to make better use of solar energy, by controlling its intermittent nature. “Research into these topics needs to be supported in the long term in order to contribute to the paradigm shifts needed for our energy consumption,” concludes the prizewinner.

Rémy Fauvel


From energy to tires

During his career, Antoine Fécant has also participated in a collaborative project on the production of biosourced compounds. The aim of this project was to design a process to manufacture butadiene, a key molecule in the composition of tires, using non-food plant resources. It is commonly produced using fossil fuels, but researchers have found a way to generate it using lignocellulosic compounds. Project teams have managed to refine a process and associated catalysts, making it possible to transform ethanol into butadiene using condensation. This 10-year-old project is now in its final phases.


David Gesbert, winner of the 2021 IMT-Académie des Sciences Grand Prix

EURECOM researcher David Gesbert is one of the pioneers of Multiple-Input Multiple-Output (MIMO) technology, used nowadays in many wireless communication systems. He contributed to the boom in WiFi, 3G, 4G and 5G technology, and is now exploring what could be the 6G of the future. In recognition of his body of work, Gesbert has received the IMT-Académie des Sciences Grand Prix.

“I’ve always been interested in research in the field of telecommunications. I was fascinated by the fact that mathematical models could be converted into algorithms used to make everyday objects work,” declares David Gesbert, researcher and specialist in wireless telecommunications systems at EURECOM. Since he completed his studies in 1997, Gesbert has been working on MIMO, a telecommunications system that was created in the 1990s. This technology makes it possible to transfer data streams at high speeds, using multiple transmitters and receivers (such as telephones) in conjunction. Instead of using a single channel to send information, a transmitter can use multiple spatial streams at the same time. Data is therefore transferred more quickly to the receiver. This spatialized system represents a clear break with previous modes of telecommunication, like the Global System for Mobile Communications (GSM).
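The idea of multiple simultaneous spatial streams can be sketched with a toy 2×2 example. This is a hypothetical illustration with invented channel coefficients and no noise, not anyone's production code: two symbols are transmitted at once on the same frequency, and a zero-forcing receiver that knows the channel matrix inverts it to separate the streams.

```python
# Minimal 2x2 MIMO sketch (invented coefficients, noise omitted): two
# symbols are sent at once on the same frequency; the receiver, knowing
# the channel matrix H, inverts it to separate the spatial streams.

# Two independent data symbols transmitted simultaneously.
x = [1 + 0j, -1 + 0j]

# Channel matrix H: H[i][j] couples transmit antenna j to receive antenna i.
H = [[0.9 + 0.1j, 0.3 - 0.2j],
     [0.2 + 0.4j, 0.8 - 0.1j]]

# Received signal y = H x (each receive antenna hears a mix of both streams).
y = [H[i][0] * x[0] + H[i][1] * x[1] for i in range(2)]

# Zero-forcing receiver: invert the 2x2 channel to recover the streams.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
H_inv = [[ H[1][1] / det, -H[0][1] / det],
         [-H[1][0] / det,  H[0][0] / det]]
x_hat = [H_inv[i][0] * y[0] + H_inv[i][1] * y[1] for i in range(2)]

print([round(s.real) for s in x_hat])  # [1, -1]: both streams recovered
```

In a real receiver, noise and channel estimation errors make the separation imperfect, which is why the geometry of the propagation environment matters so much.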

It has proven to be an important innovation, as MIMO is now broadly used in WiFi systems and several generations of mobile telephone networks, such as 4G and 5G. After receiving his PhD from École Nationale Supérieure des Télécommunications in 1997, Gesbert completed two years of postdoctoral research at Stanford University. He joined the telecommunications laboratory directed by Professor Emeritus Arogyaswami Paulraj, an engineer who worked on the creation of MIMO. In the early 2000s, the two scientists, accompanied by two students, launched the start-up Iospan Wireless. This was where they developed the first high-speed wireless modem using MIMO-OFDM technology.

OFDM: Orthogonal Frequency-Division Multiplexing

OFDM is a process that improves communication quality by dividing a high-rate data stream into many low-rate data streams. By combining this mechanism with MIMO, it is possible to transfer data at high speeds while making the information generated by MIMO more robust against radio distortion. “These features make it great for use in deploying telecommunications systems like 4G or 5G,” adds the researcher.
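The serial-to-parallel idea behind OFDM can be sketched in a few lines. This is a toy illustration using a pure-Python DFT, with no cyclic prefix or channel model, and not any production modem code: one high-rate block of symbols is spread over orthogonal subcarriers by an inverse DFT, then recovered by the forward DFT.

```python
# Toy OFDM modulator/demodulator (pure-Python DFT, illustrative only):
# 8 symbols that would form one fast serial stream are instead carried in
# parallel on 8 slow orthogonal subcarriers, then recovered intact.

import cmath

N = 8  # number of subcarriers
symbols = [1, -1, 1, 1, -1, 1, -1, -1]  # one high-rate block, split up

# Modulation: the inverse DFT maps the N parallel symbols onto N
# orthogonal subcarriers, producing one time-domain block.
time_block = [
    sum(symbols[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
    for n in range(N)
]

# Demodulation: the forward DFT separates the subcarriers again.
recovered = [
    sum(time_block[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    for k in range(N)
]

print([round(s.real) for s in recovered])  # [1, -1, 1, 1, -1, 1, -1, -1]
```

Because each subcarrier is slow, each symbol lasts longer and is far less smeared by multipath echoes, which is the robustness against radio distortion mentioned above.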

In 2001, Gesbert moved to Norway, where he taught for two years as adjunct professor in the IT department at the University of Oslo. One year later, he published an article in which he showed that complex propagation environments favor the functioning of MIMO. “This means that the more obstacles there are in a place, the more the waves generated by the antennas are reflected. The waves therefore travel different paths and interference is reduced, which leads to more efficient data transfer. In this way, an urban environment in which there are many buildings, cars, and other objects will be more favorable to MIMO than a deserted area,” explains the telecommunications expert.
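This effect of rich scattering can be quantified with the standard MIMO capacity formula, C = log2 det(I + (SNR/Nt)·HHᴴ). The channel matrices and the SNR below are invented for illustration only: a full-rank channel (independent propagation paths, as in a cluttered urban environment) carries more bits than a rank-1 “keyhole” channel where all paths look alike.

```python
# Illustrative capacity comparison (numbers invented): a full-rank
# "rich scattering" 2x2 channel vs a rank-1 "keyhole" channel, using
# C = log2 det(I + (SNR/Nt) * H H^H) in bits/s/Hz.

import math

def capacity_2x2(H, snr=10.0):
    # G = H H^H (H times its conjugate transpose), a 2x2 Hermitian matrix.
    G = [[sum(H[i][k] * H[j][k].conjugate() for k in range(2)) for j in range(2)]
         for i in range(2)]
    # M = I + (snr / Nt) * G, with Nt = 2 transmit antennas.
    M = [[(1 if i == j else 0) + (snr / 2) * G[i][j] for j in range(2)]
         for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return math.log2(abs(det))

rich = capacity_2x2([[1, 0], [0, 1]])   # two independent paths: rank 2
poor = capacity_2x2([[1, 1], [1, 1]])   # all paths identical: rank 1
print(round(rich, 2), round(poor, 2))   # rich scattering carries more bits
```

Same total channel energy, same SNR, yet the full-rank channel has higher capacity: this is the mathematical content of the claim that obstacle-rich environments favor MIMO.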

In 2003, he joined EURECOM, where he became a professor and five years later, head of the Mobile Communications department. There, he has continued his work aiming to improve MIMO. His research has shown him that base stations — also known as relay antennas — could be useful to improve the performance of this mechanism. By using antennas from multiple relay stations far apart from each other, it would be possible to make them work together and produce a giant MIMO system. This would help to eliminate interference problems and optimize the circulation of data streams. Research is still being performed at present to make this mechanism usable.

MIMO and robots

In 2015, Gesbert obtained an ERC Advanced Grant for his PERFUME project. The initiative, which takes its name from high PERformance FUture Mobile nEtworking, is based on the observation that “the number of receivers used by humans and machines is currently rising. Over the next few years, these receivers will be increasingly connected to the network,” emphasizes the researcher. The aim of PERFUME is to exploit the information resources of receivers so that they work in cooperation, to improve their performance. The MIMO principle is at the heart of this project: spatializing information and using multiple channels to transmit data. To achieve this objective, Gesbert and his team developed base stations attached to drones. These prototypes use artificial intelligence systems to communicate with one another, in order to determine which bandwidth to use or where to place themselves to give a user optimal network access. Relay drones can also be used to extend radio range. This could be useful, for example, if someone is lost on a mountain, far from relay antennas, or in areas where a natural disaster has occurred and the network infrastructure has been destroyed.

As part of this project, the EURECOM professor and his team have performed research into decision-making algorithms. This has led them to develop artificial neural networks to improve the decision-making processes of the receivers or base stations that are meant to cooperate. With these neural networks, the devices are capable of quantifying and exploiting the information held by each of them. According to Gesbert, “this will allow receivers or stations with more information to correct flaws in receivers with less. This idea is a key takeaway from the PERFUME project, which finished at the end of 2020. It indicates that to cooperate, agents like radio receivers or relay stations make decisions based on their own data, which sometimes has to be set aside to let themselves be guided by decisions from agents with access to better information than them. It is a surprising result, and a little counterintuitive.”

Towards the 6th generation of mobile telecommunications technology

“Nowadays, two major areas are being studied concerning the development of 6G,” announces Gesbert. The first relates to ways of making networks more energy efficient by reducing the number of times that transmissions take place, by restricting the amount of radio waves emitted and reducing interference. One solution to achieve these objectives is to use artificial intelligence. “This would make it possible to optimize resource allocation and use radio waves in the best way possible,” adds the expert.

The second concerns applications of radio waves for purposes other than communicating information. One possible use for the waves would be to produce images. Given that when a wave is transmitted, it reflects off a large number of obstacles, artificial intelligence could analyze its trajectory to identify the position of obstacles and establish a map of the receiver’s physical environment. This could, for example, help self-driving cars determine their environment in a more detailed way. With 5G, the target precision for locating a position is around a meter, but 6G could make it possible to establish centimeter-level precision, which is why these radio imaging techniques could be useful. While this 6th-generation mobile telecommunications network will have to tackle new challenges, such as the energy economy and high-accuracy positioning, it seems clear that communication spatialization and MIMO will continue to play a fundamental role.

Rémy Fauvel