
Do mobile apps for kids respect privacy rights?

The number of mobile applications for children is rapidly increasing. An entire market segment is taking shape to reach this target audience. Just as it does for adults, the question of personal data applies to these younger audiences. Grazia Cecere, a researcher in the economics of privacy at Institut Mines-Télécom Business School, has studied the risk of infringing on children’s privacy rights. In this interview, she shares the findings of her research.

 

Why specifically study mobile applications for children?

Grazia Cecere: A report from the NGO Common Sense reveals that 98% of children under the age of 8 in the United States use a mobile device. They spend an average of 48 minutes per day on the device. That is huge, and digital stakeholders have understood this. They have developed a market specifically for kids. As a continuation of my research on the economics of privacy, I asked myself how the concept of personal data protection applied to this market. Several years ago, along with international researchers, I launched a project dedicated to these issues. The project was also made possible by the funding of Vincent Lefrere’s thesis within the framework of the Futur & Ruptures program.

Do platforms consider children’s personal data differently than that of adults?

GC: First of all, children have a special status within the GDPR in Europe (General Data Protection Regulation). In the United States, specific legislation exists: COPPA (Children’s Online Privacy Protection Act). The FTC (Federal Trade Commission) handles all privacy issues related to users of digital services and pays close attention to children’s rights. As far as the platforms are concerned, Google Play and App Store both have Family and Children categories for children’s applications. Both Google and Apple have expressed their intention to separate these applications from those designed for adults or teens and ensure better privacy protection for the apps in these categories. In order for an app to be included in one of these categories, the developer must certify that it adheres to certain rules.

Is this really the case? Do apps in children’s categories respect privacy rights more than other applications?

GC: We conducted research to answer that question. We collected data from Google Play on over 10,000 mobile applications for children, both within and outside the category. Some apps choose not to certify and instead use keywords to target children. We checked whether each app collects telephone numbers, location and usage data, and whether it accesses other information on the phone. We then compared the different apps. Our results showed that, on average, the applications in the children’s category collect less personal data and respect users’ privacy more than those targeting the same audience outside the category. We can therefore conclude that, on average, the platforms’ categories specifically dedicated to children reduce data collection. On the other hand, our study also showed that a substantial portion of the apps in these categories collect sensitive data.
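To illustrate the kind of comparison described above, here is a minimal sketch; it is not the authors’ actual pipeline, and the CSV file and column names are hypothetical.

```python
# Minimal sketch of comparing data collection inside vs. outside the children's category.
# Hypothetical input: a CSV with one row per app, a boolean "in_children_category" flag,
# and boolean columns for each type of data the app collects.
import csv

SENSITIVE_FIELDS = ["collects_phone_number", "collects_location", "collects_usage_data"]

def mean_sensitive_count(rows):
    """Average number of sensitive data types collected per app."""
    counts = [sum(int(row[f]) for f in SENSITIVE_FIELDS) for row in rows]
    return sum(counts) / len(counts) if counts else 0.0

with open("kids_apps.csv", newline="") as f:   # hypothetical dataset
    apps = list(csv.DictReader(f))

in_category = [a for a in apps if a["in_children_category"] == "1"]
out_category = [a for a in apps if a["in_children_category"] == "0"]

print("In-category apps:    ", mean_sensitive_count(in_category))
print("Out-of-category apps:", mean_sensitive_count(out_category))
```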

Do all developers play by the rules when it comes to protecting children’s personal data?

GC: App markets ask developers to provide their location. Based on this geographical data, we examined whether an application’s country of origin influenced its degree of respect for users’ privacy. We demonstrated that developers located in countries with strong personal data regulations—such as the EU, the United States and Canada—generally respect user privacy more than developers based in countries with weak regulation. In addition, developers who choose not to provide their location are generally those who collect more sensitive data.

Are these results surprising?

GC: In a sense, yes, because we expected the app market to play a role in respecting personal data. These results raise the question of the extra-territorial scope of the GDPR, for example. In theory, whether an application is developed in France or in India, if it is marketed in Europe it must respect the GDPR. However, our results show that for developers based in countries with weak regulation, the weight of the legislation in the destination market is not enough to change their local practices. I must emphasize that offering an app in all countries is extremely easy—it is even encouraged by the platforms—which makes it all the more important to pay special attention to this issue.

What does this mean for children’s privacy rights?

GC: The developers are the owners of the data. Once personal data is collected by the app, it is sent to the developer’s servers, generally in the country where they are located. The fact that foreign developers pay less attention to protecting users’ privacy means that the processing of this data is probably also less respectful of this principle.

 


Robots on their best behavior in the factory of the future

A shorter version of this article was published in the monthly magazine Acteurs du franco-allemand, as part of an editorial partnership.

[divider style=”normal” top=”20″ bottom=”20″]

Robots must learn to communicate better if they want to earn their spot in the factory of the future. This will be a necessary step in ensuring the autonomy and flexibility of production systems. This issue is the focus of the German-French Academy for the Industry of the Future’s SCHEIF project. More specifically, researchers must choose appropriate forms of communication technology and determine how to best organize the transmission of information in a complex environment.

 

“The industrial system is monolithic for robots. They are static and specialized for a single task, and it is impossible for us to change their specialization based on the environment.” This observation was the starting point for the SCHEIF[1] project. SCHEIF, conducted in the framework of the German-French Academy for the Industry of the Future, seeks to allow robots to adapt more easily to changes in function. To achieve this, the project brings together researchers from EURECOM, the Technical University of Munich (TUM) and IMT Atlantique. The researchers’ goal is “to create a ‘plug and play’ robot that can be deployed anywhere, easily understand its environment, and quickly interact with humans and other robots,” explains Jérôme Härri, a communications researcher at EURECOM participating in the project.

The robots’ communication capacities are particularly critical in achieving this goal. In order to adapt, they must be able to effectively obtain information.  The machines must also be able to communicate their actions to other agents—both humans and robots—in order to integrate into their environment without disruption. Without these aspects, there can be no coordination and therefore no flexibility.

This is precisely one of the major challenges of the SCHEIF project, since the industrial environment imposes numerous constraints on machine communications. They must be fast in the event of an emergency, and flexible enough to prioritize information based on its importance for production chain safety and effectiveness. They must also be reliable, given the sensitivity of the information transmitted. The machines must also be able to communicate over the distances of large factories, not just a few meters. They must combine speed, transmission range, adaptability and security.

Solving the technology puzzle

“The solution cannot be found in a single technology,” Jérôme Härri emphasizes. Sensor technologies dedicated to connected objects, such as Sigfox and LoRa, offer high reliability and long range, but they cannot communicate directly with one another. “There must be a supervisor in charge of the interface, but if it breaks down it becomes problematic, and this affects the robustness criterion for the communications,” the researcher adds. “Furthermore, this data generally goes back to the operator of the network base stations, and the manufacturer must subscribe to a service in order to obtain it.”

On the other hand, 4G provides the reliability and range, but not necessarily the speed and adaptability needed for the industry of the future. As for 5G, it provides the required speed and offers the possibility of proprietary systems, which would free manufacturers from the need to go through an operator. However, its reliability in an industrial context is still being specified.

Faced with this puzzle, two main approaches emerge. The first is based on increasing the interoperability and speed of sensor technology. The second is based on expanding 5G to meet industrial needs, particularly by providing it with features similar to those of sensor technologies.  The researchers chose this second option. “We are improving 5G protocols by examining how to allocate the network’s resources in order to increase reliability and flexibility,” says Jérôme Härri.

To achieve this, the teams of French and German researchers can draw on extensive experience in vehicular communication, which uses 4G and 5G networks to solve transport and mobility issues. The cellular technology used for vehicles has the advantage of featuring a cooperative scheduling specification. This information system feature decides who should communicate a message and at what time. A cooperative scheduler is essential for fleets of vehicles on a highway, just as it is for fleets of robots used in a factory. It ensures that all robots follow the same rules of priority. For example, thanks to the scheduler, information that is urgent for one robot is also treated as urgent by the others, and all the machines can react to free the network of traffic and give this information priority. “One of our current tasks is to develop a cooperative scheduler for 5G adapted to robots in an industrial context,” explains Jérôme Härri.
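As a rough illustration of what a cooperative scheduler does, here is a simplified sketch; it is not the SCHEIF design, and the message categories and priority levels are invented for the example. The idea is that every agent applies the same shared priority rules when deciding which message gets the channel next.

```python
# Simplified sketch of cooperative scheduling: all agents share the same priority rules,
# so an "emergency stop" from any robot is served before routine telemetry from every robot.
import heapq
from dataclasses import dataclass
from itertools import count

# Shared priority table (lower value = served first). Categories invented for the example.
PRIORITY = {"emergency_stop": 0, "safety_alert": 1, "coordination": 2, "telemetry": 3}

@dataclass
class Message:
    sender: str
    kind: str
    payload: str

class CooperativeScheduler:
    """Decides which message gets the next transmission slot, using rules every agent knows."""
    def __init__(self):
        self._queue = []
        self._tiebreak = count()  # preserves FIFO order among equal priorities

    def submit(self, msg: Message):
        heapq.heappush(self._queue, (PRIORITY[msg.kind], next(self._tiebreak), msg))

    def next_slot(self):
        if not self._queue:
            return None
        _, _, msg = heapq.heappop(self._queue)
        return msg

scheduler = CooperativeScheduler()
scheduler.submit(Message("robot_A", "telemetry", "battery=87%"))
scheduler.submit(Message("robot_B", "emergency_stop", "human detected"))
scheduler.submit(Message("robot_C", "coordination", "claiming cell 4"))

while (msg := scheduler.next_slot()) is not None:
    print(f"slot -> {msg.sender}: {msg.kind} ({msg.payload})")
```

In a real deployment the decision would be distributed across the radio resource allocation of the network rather than centralized in a single object, but the shared priority rules are the point being illustrated.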

Deep learning for added flexibility

Although the machines can rely on a scheduler to know when to communicate, they still must know which rules to follow. The goal of the scheduler is to bring order to the network, to prevent network saturation, for example, and collisions between data packets. However, it cannot determine whether or not to authorize a communication solely by taking communication channel load into account. This approach would mean blindly communicating information: a message would be sent when space is available, without any knowledge of what the other robots will do. Yet in critical networks, the goal is to plan for the medium term in order to guarantee reliability and reaction times. When robots move, the environment changes. It must therefore be possible to predict whether all the robots will start suddenly communicating in a few seconds, or if there will be very few messages.

Deep learning is the tool of choice for teaching networks and machines how to anticipate these types of circumstances. “We let them learn how several moving objects communicate by using mobility datasets. They will then be able to recognize similar situations in their actual use and will know the consequences that can arise in terms of channel quality or the number of messages sent,” the researcher explains. “It is sometimes difficult to ensure learning datasets will match the actual situations the network will face in the future. We must therefore add additional learning on the fly during use. Each decision taken is analyzed. System decisions therefore improve over time.”
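A toy sketch of this idea, assuming a synthetic channel-occupancy trace and a deliberately tiny network (this is not the EURECOM/TUM model): learn to predict near-future channel load from a window of recent observations, so the scheduler can plan ahead instead of transmitting blindly.

```python
# Toy sketch: predict future channel load from recent observations. Synthetic data and an
# invented architecture, for illustration only.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def synthetic_load(n_steps: int) -> torch.Tensor:
    """Fake channel-occupancy trace in [0, 1]: a periodic burst pattern plus noise."""
    t = torch.arange(n_steps, dtype=torch.float32)
    return (0.5 + 0.4 * torch.sin(2 * math.pi * t / 50) + 0.05 * torch.randn(n_steps)).clamp(0, 1)

WINDOW, HORIZON = 20, 5          # look at 20 past steps, predict the load 5 steps ahead
trace = synthetic_load(2000)

# Build (past window -> future load) training pairs.
X = torch.stack([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW - HORIZON)])
y = trace[WINDOW + HORIZON:].unsqueeze(1)

model = nn.Sequential(nn.Linear(WINDOW, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training error (MSE): {loss.item():.4f}")
# In use, the predicted load would feed the scheduler's decisions for the next slots,
# and observations collected online could keep refining the model, as described above.
```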

The initial results on this use of deep learning to optimize the network have been published by the teams from EURECOM and the Technical University of Munich. The researchers have succeeded in organizing communication between autonomous mobile agents in order to prevent collisions between the transmitted data packets. “More importantly, we were able to accomplish this without each robot being notified of whether the others would communicate,” Jérôme Härri adds. “We succeeded in allowing one agent to anticipate when the others will communicate based solely on behavior that preceded communication in the past.”

The researchers intend to pursue their efforts by increasing the complexity of their experiments to make them more like actual situations that occur in industrial contexts. The more agents, the more the behavior becomes erratic and difficult to predict. The challenge is therefore to enable cooperative learning. This would be a further step towards fully autonomous industrial environments.

[1] SCHEIF is an acronym for Smart Cyber-physical Environments for Industry of the Future.

 


Fine against Facebook: How the American FTC transformed itself into “super CNIL”

Article written in partnership with The Conversation
Winston Maxwell, Télécom Paris – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he US consumer protection regulator has issued a record $5 billion fine to Facebook for personal data violations. This fine is by far the largest ever issued for a personal data violation. Despite some members of the US Congress saying that this is not enough, the sanction has allowed the FTC to become the most powerful personal data protection regulator in the world. And yet, the USA does not have a general data protection law.

To transform itself into a “super CNIL”, the American FTC relied on a 1914 text on consumer protection that forbids any unfair or deceptive practices in business. France has a similar law in its Consumer Code. In the USA, there is no equivalent of the CNIL at the federal level, so the FTC has taken on this role.

It was not easy for the FTC to transform a general text on consumer protection into a law on personal data protection. The organization faced two obstacles. Firstly, it had to create a legal doctrine that was clear enough for businesses to understand what constitutes an unfair or deceptive practice in terms of personal data. Then, it had to find a way to impose financial sanctions, since the 1914 FTC Act did not provide for any.

Proceedings against Facebook

A Facebook office. Earthworm/Flickr, CC BY-NC-SA

To clearly define a deceptive personal data practice, the FTC created a legal doctrine that punishes any business that “fails to keep their promises” in terms of personal data. The FTC had started a first lawsuit against Facebook in 2011 accusing them of deceptive practices due to the discrepancy between what Facebook told consumers and how the company acted. To spot a deceptive practice, the FTC will examine each sentence of a company’s privacy policy to identify a promise, even an implied promise, that is not being kept.

An unfair practice is more difficult to prove, which explains why the FTC prefers to use the term ‘deceptive’ rather than ‘unfair’. The FTC considers an unfair practice to be any practice that would be both surprising and not easily avoidable for the average consumer.

The FTC Act does not allow the FTC to directly impose a financial penalty. To do this, it has to ask the US Department of Justice to file a lawsuit. To work around this issue, the FTC encourages settlement agreements. The FTC Act allows the regulator to directly impose sanctions in the event of a breach of these agreements. The most important thing for the FTC is to get an agreement signed at the time of a company’s first violation, which puts it in a position of strength in the case of a second violation. The Facebook case follows this pattern. Facebook signed a settlement agreement with the FTC in 2012. The FTC has now found that Facebook violated this agreement by sharing personal data with Cambridge Analytica. The violation of the 2012 agreement allowed the FTC to hit back strongly and negotiate a new agreement that will last 20 years, this time with a $5 billion fine.

Settlement agreements

If settlement agreements allow the FTC to increase its powers, why do companies sign them? By signing, a company puts itself in a weaker position, and the agreement gives the FTC leverage in the event of a second violation. However, most companies prefer to negotiate an agreement with the FTC instead of going to trial. Beyond the large cost of a lawsuit and its negative effect on a company’s image, if a company loses a lawsuit against the US government, the door is then opened for other parties to sue, in particular through consumer class actions. Companies fear this snowball effect. In addition, a settlement agreement with the FTC does not set a precedent, since the company does not admit guilt in the agreement. This means the company can still claim its innocence in other lawsuits.

As well as increasing the FTC’s sanctioning powers, the settlement agreements allow them to establish detailed requirements for personal data protection. A settlement agreement with the FTC can become a mini-GDPR and binds the company for 20 years, which is the usual duration for these agreements.

The new agreement states that Facebook must obtain users’ explicit consent before using their facial recognition data for any purpose, or before sharing their mobile phone numbers with third parties. The 2012 agreement already required Facebook to carry out impact assessments, and this obligation was reinforced in the 2019 agreement. The new agreement requires Facebook to put in place a committee of independent directors to oversee the implementation of the agreement within the company. In addition, Facebook’s corporate bylaws will have to be amended to ensure that Mark Zuckerberg is not the only person who can dismiss those in charge of managing these obligations. The new agreement also requires Mark Zuckerberg to sign a personal declaration stating that the company will comply with the commitments made in the agreement; a false declaration would expose him to criminal penalties, including imprisonment. Most importantly, the agreement obliges Facebook to document all its risk reduction measures and to be audited every two years by an independent auditor.

The 2012 agreement already included an audit every two years. Following the Cambridge Analytica investigation, the EPIC association obtained a copy of the audit covering the 2015-2017 period. The audit did not identify any abnormalities relating to data sharing with Cambridge Analytica and other Facebook business partners. This caused the FTC to question the effectiveness of the audits, leading it to strengthen the audit requirements in the new 2019 agreement.

Although the 2012 settlement agreement did not prevent Facebook from crossing the line in the Cambridge Analytica scandal, it did allow the FTC to strongly intervene and sanction this second violation. As well as the $5 billion fine, the new 2019 agreement contains several accountability measures. These measures ensure that the commitments agreed by Facebook are applied at every level of the company and that any violation will be detected quickly. Facebook’s management will not be able to say that they were not made aware and Facebook will have to adhere to these governance commitments for the next 20 years.

In the USA, it is common for companies to negotiate agreements with the government. This process is sometimes criticized as a form of forced negotiation. The $8.9 billion fine against BNP Paribas was a “negotiated” agreement, although whether the negotiation between the French bank and the US government was balanced is questionable. In Europe, there are no settlement agreements for personal data violations, but they are common in competition law.

[divider style=”dotted” top=”20″ bottom=”20″]

Winston Maxwell, Télécom Paris – Institut Mines-Télécom

The original version of this article (in French) was published in The Conversation. Read the original article


An autonomous contact lens to improve human vision

Two teams from IMT Atlantique and Mines Saint-Étienne have developed an autonomous contact lens powered by an integrated flexible micro-battery. This invention is a world first that opens up new prospects for health, while also paving the way for other human-machine interfaces.

 

Human augmentation, a field of research that aims to enhance human abilities through technological progress, has always fascinated authors of science fiction. It is a recurring theme in TV series such as Black Mirror and The Six Million Dollar Man. But beyond dystopia and action adventure, it also interests science. The recent findings from teams at IMT Atlantique and Mines Saint-Étienne are one of the latest examples of this.

Jean-Louis de Bougrenet is the head of the Optics Department at IMT Atlantique. Thierry Djenizian is the head of the Flexible Electronics Department at the Centre Microélectronique de Provence at Mines Saint-Étienne. Together, the two scientists have achieved a world first: they have developed the cyborg lens, an autonomous contact lens with a built-in flexible micro-battery.

The origins of the cyborg lens

In the medical division of his department, Jean-Louis de Bougrenet was working on devices to improve people’s vision. Whilst doing this, the researcher and his team used oculometers, instruments used to analyze how eyes behave, measure an individual’s fatigue or stress, and also to see the direction of their gaze. These devices are used in technology such as VR headsets which can use the direction of someone’s gaze or when they blink to make a command.

However, to be truly efficient, the oculometer must be able to do two things. Firstly, it has to be extremely precise. Secondly, to make sure the VR headset does not contain any additional components that weigh the user down, the oculometer has to be as light as possible. These factors made it clear to the scientists that the oculometer had to be placed directly in the user’s eye. “The contact lens very quickly emerged as a way to make human augmentation possible, since it allowed humans to carry a smart device directly on them,” explains the optical researcher. This system is made possible by advances in nanotechnology.

Thierry Djenizian has spent the last four years working on integrating electric components onto flexible and stretchable surfaces. His research led to the patenting of a flexible micro-battery. However, this device wasn’t originally made to be used on a contact lens.

Read on I’MTech: Towards a new generation of lithium batteries?

After becoming interested in Thierry Djenizian’s work in flexible electronics, Jean-Louis de Bougrenet got in touch with his colleague at Mines Saint-Étienne. During a visit to the Centre Microélectronique de Provence in Gardanne (Bouches-du-Rhône), he observed the advances made in flexible micro-batteries. This led to the idea of integrating this small device directly into the cyborg contact lenses developed at IMT Atlantique, an innovation for which the two scientists were awarded a joint patent.

A flexible micro-battery directly placed in a contact lens

The device is a world first, as an energy storage source is directly incorporated into the small ocular device. “Whenever functions are performed locally by an autonomous system, the system must have energy autonomy,” explains Jean-Louis de Bougrenet. Until now, ‘smart’ contact lenses have been powered by an external energy source, such as a magnetic induction system using a coil placed in the device. The problem with this approach is that if the external energy source is cut off, the device stops working, a limitation the new design removes. In the device developed by the two scientists, the lens is instead powered by a micro-battery, which can also be paired with an external source to recharge it or to supply additional energy when needed.

Thierry Djenizian’s aim was to apply the results he had already obtained in his previous studies to the design and performance constraints of an ocular device. He therefore built on earlier work that was mainly focused on innovation and design.

“Normally, flexible batteries are made up of electrodes connected to each other by ductile coils, which then carry current. Our device uses the entire surface area occupied by the coils by carrying microelectrodes directly on these wires,” explains the researcher at Mines Saint-Étienne. In practice, during the manufacture of flexible batteries, electrodes made from several composite materials are placed on an aluminum foil and shaped into vertical ‘micropillars’ using laser ablation with regular spacing. For the lenses, the same technique is used to manufacture the coils that support the microelectrodes, giving the battery great flexibility.

 

3D illustration of the flexible coils which carry the microelectrodes.

 

As well as this, the scientists aim to use innovative materials to improve the performance of the device. One example is the electrolyte, which separates the battery’s two electrodes. The polymers currently used will eventually be replaced by self-healing materials, which will offset the strain put on the battery when the device is being charged.

For now, the scientists have succeeded in making a 0.75 cm² battery integrated into a scleral contact lens. This lens rests on the ‘white’ of the eye (the sclera) and is both bigger and more stable than a standard contact lens. To make the device, the area of the lens in front of the iris is removed and replaced with microelectronic components, including the energy storage device. This method has the distinct advantage of not obstructing the wearer’s field of vision, as light can still enter the eye through the pupil free of any obstacle. The micro-battery has already proved its worth, as it can power an LED for several hours. “An LED is a textbook example since it is generally the most energy-intensive microelectronic device,” says Thierry Djenizian.

Already, there are many ways to improve this technology

While the current device is already ground-breaking, the two researchers continue to try to perfect it. The priority for the Flexible Electronics Department at Mines Saint-Étienne is optimizing the system, as well as improving its reliability. “From a technological perspective, we still have a lot of work to do,” states the head of department. “We have the concept, but improving the device is similar to taking a CRT television and turning it into a modern flat screen TV.”

The next step has already been decided. The scientists want to develop miniature antennae in order to charge the battery and make the lens a communicative device, which will allow it to transmit information from the oculometer. Another option that is currently being studied uses an infrared laser to follow the user’s eyes. This laser would be activated by blinking and would show the direction that the user is looking.

Assistance for people with bad eyesight and professional uses

According to Jean-Louis de Bougrenet, the innovation will allow them to “take localized engineering to the next level.” The project has a wide range of potential uses, including helping visually impaired people. The scientists have paired up with the Institut de la Vision with the aim of developing a device which can improve the sensory abilities of visually impaired people. As well as this, the cyborg lens could be used in VR headsets as a way of making visual commands. Discussions have already started with key people in this industry.

In the future, the lenses could have several other applications. In the automotive industry, for example, they could be used to monitor a driver’s attention or level of fatigue. However, “at the moment we are only discussing how the lenses could be used professionally, or for people with disabilities. If we make the lenses available to the general public, for example to be used when driving, then we raise issues that go far beyond the technical aspects of the device. This is because it involves people’s consent to wear the device, which is not a trivial matter,” states Jean-Louis de Bougrenet.

Even if the cyborg lens can help humans, there is still some way to go before the device can be seen in an entirely positive light.

This article was written (in French) by Bastien Contreras for I’MTech

 


20 terms for understanding quantum technology

Quantum mechanics is central to much of the technology we use every day. But what exactly is it? The 11th Fondation Mines-Télécom booklet explores the origins of quantum technology, revealing its practical applications by offering a better understanding of the issues. To clarify the concepts addressed, the booklet includes a glossary, from which this list is taken.

 

Black-body radiation – Thermal radiation of an ideal object absorbing all the electromagnetic energy it receives.

Bra-ket notation (from the word bracket) – Formalism that facilitates the writing of equations in quantum mechanics.

Coherent detectors – Equipment used to detect photons based on the amplitude and phase of the electromagnetic signal rather than on interactions with other particles.

Decoherence – Phenomenon by which each possibility of a quantum superposition state interacts with its environment to such a degree of complexity that the different possibilities become mutually incoherent and unobservable.

Entanglement – Phenomenon in which two quantum systems present quantum states that are dependent on one another, regardless of the distance separating them.

Locality (principle of) – The idea that two distant objects cannot directly influence each other.

Momentum – Product of an object’s mass and its velocity vector at a given time.

NISQ (Noisy Intermediate-Scale Quantum) – Current class of quantum computers

Observable (noun) – Concept in the quantum world comparable to a physical value (position, momentum, etc.) in the classical world.

Quanta – The smallest indivisible unit (of energy, momentum, etc.)

Quantum Hall effect – The classical Hall effect is the voltage created when an electric current flows through a material immersed in a magnetic field. Under certain conditions, this voltage increases in discrete increments: this is the quantum Hall effect.

Quantum state – A concept that differs from that of a classical physical system, in which measured physical values like position and speed are sufficient to define the system. A quantum state provides a probability distribution for each observable of the quantum system to which it refers.

Quantum system – Refers to an object studied in a context in which its quantum properties are interesting, such as a photon, mass of particles, etc.

Qubit – Refers to a quantum system in which a given observable (the spin for example) is the superposition of two independent quantum states.

Spin – An intrinsic property of particles, like electric charge.

Superposition principle – Principle according to which the same quantum state can have several values for a given observable.

The Schrödinger wave function – A fundamental concept of quantum mechanics, a mathematical function representing the quantum state of a quantum system.

Uncertainty Principle – Mathematical inequality that expresses a fundamental limit to the precision with which two physical properties of the same particle can be simultaneously known (see the example written out after this glossary).

Wave function collapse – Fundamental concept of quantum mechanics that states that after a measurement, a quantum system’s state is reduced to what was measured.

Wave-particle duality (or wave-corpuscle duality) – The principle that a physical object sometimes has wave properties and sometimes corpuscular properties.
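To make two of these entries concrete, here are the standard textbook formulas for a qubit and for the uncertainty principle; they are not taken from the booklet, but they are consistent with the definitions above.

```latex
% A qubit written in bra-ket notation: a superposition of two basis states,
% with amplitudes normalized so that the measurement probabilities sum to 1.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle ,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Measuring the qubit yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2.

% The uncertainty principle for position x and momentum p,
% where \hbar is the reduced Planck constant:
\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} .
\]
```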



CloudButton: Big Data in one click

Launched in January 2019 for a three-year period, the European H2020 project CloudButton seeks to democratize Big Data by drastically simplifying its programming model. To achieve this, the project relies on a new cloud service that frees the final customer from having to physically manage servers. Pierre Sutra, a researcher at Télécom SudParis, one of the CloudButton partners, shares his perspective on the project.

 

What is the purpose of the project?

Pierre Sutra: Modern computer architectures are massively distributed across machines, and a single click can require computations from tens to hundreds of servers. However, it is very difficult to build this type of system, since it requires linking together many heterogeneous components. The key objective of CloudButton is to radically simplify this approach to programming.

How do you intend to do this? 

PS: To accomplish this feat, the project builds on a recent concept that will profoundly change computer architectures: Function-as-a-Service (FaaS). FaaS makes it possible to invoke a function in the cloud on demand, as if it were a local computation. Since it uses the cloud, a huge number of functions can be invoked concurrently, and only the usage is charged—with millisecond precision. It is a little like having your own supercomputer on demand.
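A minimal sketch of this programming model, using Python’s standard library as a local stand-in (this is not the CloudButton toolkit; in a real FaaS deployment each call would be dispatched to a cloud function rather than a local process):

```python
# Local stand-in for the FaaS programming model: call one function many times "as if it were
# a local computation", while the executor fans the calls out in parallel. In a real
# Function-as-a-Service setup, each call would run in the cloud and be billed per invocation.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk_id: int) -> int:
    """Placeholder for the real per-chunk work (image analysis, genomics, etc.)."""
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))

if __name__ == "__main__":
    chunk_ids = range(100)  # in a FaaS setting this could be thousands of concurrent invocations
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(process_chunk, chunk_ids))
    print("total:", sum(results))
```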

Where did the idea for the CloudButton project come from?

PS: The idea came from a discussion with colleagues from the Spanish university Rovira i Virgili (URV) during the 2017 ICDCS in Atlanta (International Conference on Distributed Computing Systems). We had just presented a new storage layer for programming distributed systems. This layer was attractive, yet it lacked an application that would make it a true technological novelty. At the time, the University of California, Berkeley had proposed an approach for writing parallel applications on top of FaaS. We agreed that this was what we needed to move forward. It would allow us to use our storage system with the ultimate goal of moving single-computer applications to the cloud with minimal effort. The button metaphor illustrates this concept.

Who are your partners in this project?

PS: The consortium brings together five academic partners: URV (Tarragona, Spain), Imperial College (London, UK), EMBL (European Molecular Biology Laboratory, Heidelberg, Germany), The Pirbright Institute (Surrey, UK) and IMT, and several industrial partners, including IBM and RedHat. The institutes specializing in genomics (The Pirbright Institute) and molecular biology (EMBL) will be the end users of the software. They also provide us with new use cases and issues.

Can you give us an example of a use case?

PS: EMBL offers its associate researchers access to a large bank of images from around the world. These images are annotated with information on the subject’s chemical composition by combining artificial intelligence and the expertise of EMBL researchers. For now, the system must compute these annotations in advance. A use case for CloudButton would be to perform these computations on demand, for example to customize user requests.

How are Télécom SudParis researchers contributing to this project?

PS: Télécom SudParis is working on the storage layer for CloudButton. The goal is to design programming abstractions that are as close as possible to those offered by standard programming languages. Of course, these abstractions must also be effective for the FaaS delivery model. This research is being conducted in collaboration with IBM and RedHat.

What technological and scientific challenges are you facing?

PS: In their current state, storage systems are not designed to handle massively parallel computations over a short period of time. The first challenge is therefore to adapt storage to the FaaS model. The second challenge is to reduce the synchronization between parallel tasks to a strict minimum in order to maximize performance. The third challenge is fault tolerance. Since the computations run on large-scale infrastructure, they are regularly subject to errors. However, these faults must be hidden in order to present a simplified programming interface.
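As a generic illustration of the third challenge, hiding faults behind a simple call interface, here is a retry sketch; it is not CloudButton code, and the function names are invented.

```python
# Generic sketch: hide transient infrastructure faults behind a simple call interface
# by retrying failed invocations with exponential backoff. Not CloudButton code.
import random
import time

class TransientError(Exception):
    """Stands in for a recoverable infrastructure fault (lost worker, timeout, ...)."""

def flaky_remote_task(x: int) -> int:
    """Invented stand-in for a remote function that occasionally fails."""
    if random.random() < 0.3:
        raise TransientError("worker lost")
    return x * x

def invoke_with_retries(func, *args, max_attempts: int = 5, base_delay: float = 0.1):
    """Retry transient failures so the caller only sees a plain function call."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func(*args)
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

results = [invoke_with_retries(flaky_remote_task, i) for i in range(10)]
print(results)
```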

What are the expected benefits of this project?

PS: The success of a project like CloudButton can take several forms. Our first goal is to allow the institutes and companies involved in the project to resolve their computing and big data issues. On the other hand, the software we are developing could also meet with success among the open source community. Finally, we hope that this project will produce new design principles for computer system architectures that will be useful in the long run.

What are the important next steps in this project?

PS: We will meet with the European Commission one year from now for a mid-term assessment. So far, the prototypes and applications we have developed are encouraging. By then, I hope we will be able to present an ambitious computing platform based on an innovative use case.

 

[divider style=”normal” top=”20″ bottom=”20″]

The CloudButton consortium partners


Data brokers: the middlemen running the markets

Over the past five years, major digital technology stakeholders have boosted the data broker business. These middlemen collect and combine masses of traces that consumers leave online. They then offer them to the companies of their choice in exchange for income. Above all, they use this capital to manipulate markets around the world. These powerful new stakeholders remain largely unknown and poorly understood. Patrick Waelbroeck, an economist at Télécom Paris, studies this phenomenon in the context of the Values and Policies of Personal Information Chair, which he co-founded.

 

Data brokers have existed since the 1970s and the dawn of direct marketing. These data middlemen collect, sort and prepare data from consumers for companies in need of market analysis. But since the advent of the Web, data brokers like Acxiom, Epsilon and Quantum have professionalized this activity. Unlike their predecessors, they are the ones who choose the partners to whom they will sell the information. They employ tens of thousands of individuals, with turnover sometimes exceeding 1 billion dollars.

As early as 2015, in his book The Black Box Society, Frank Pasquale, a law professor at the University of Maryland, identified over 4,000 data brokers in a 156-billion-dollar market. In 2014, according to the American Federal Trade Commission (FTC), one of these companies held information on 1.4 billion transactions carried out by American consumers, and over 700 billion aggregate items!

Yet these staggering figures are already dated, since technology giants have joined the data broker game over the past five years. Still, “economists are taking no notice of the issue and do not understand it,” says Patrick Waelbroeck, professor of industrial economics and econometrics at Télécom Paris. In the context of the IMT Chair Values and Policies of Personal Information, he specifically studies the effect of data brokers on fair competition and the overall economy.

Opaque activities

“There are supply and demand dynamics: companies that buy, collect, modulate and build databases, then sell them in the form of targeted market segments based on the customer’s needs,” the researcher adds. Technology giants have long understood that personal data is of little value on its own. A data broker’s activity entails not only finding and collecting data, online or offline. More importantly, it must combine this data to describe increasingly targeted market segments.

5 years ago, the FTC already estimated that some data brokers held over 3,000 categories of information on each American, from first and last names, addresses, occupations and family situations to intentions to purchase a car and wedding plans. But unlike “native” data brokers, technology giants do not sell this high value-added information directly. They exchange it for services and compensation. We know nothing about these transactions and activities, and it is impossible to measure their significance.

A market manipulation tool

“One of the key messages from our research is that these data brokers, and digital technology giants especially, do not only collect data to sell or exchange,” says Patrick Waelbroeck. “They use it to alter market competition.” They are able to finely identify market potential for a company or a product anywhere in the world, giving them extraordinary leverage.

“Imagine, for example, a small stakeholder who has a monopoly on a market in China,” says the economist. “A data broker whose analysis indicates an interest in this company’s market segment for a Microsoft or Oracle product, for example, has the power to disrupt this competitive arena. For a variety of reasons—the interest of a customer, disruption of a competitor, etc.—it can sell the information to one of the major software companies to support them or, on the contrary, decide to support a Chinese company instead.”

As a practical example of this power, in 2018 the British Parliament revealed internal emails from Facebook. The conversations suggest that the Californian company may have favored third-party applications such as Netflix by sharing certain market data, while limiting access for smaller applications like Vine. “In economics, this is called a spillover effect on other markets,” Patrick Waelbroeck explains. “By selling more or less data to certain market competitors, data brokers can make the market more or less competitive and choose to favor or disadvantage a given stakeholder.”

In a traditional market, the interaction between supply and demand introduces a natural form of self-regulation. In choosing one brand rather than another, the consumer exercises countervailing power. Internet users could do the same. But the mechanics of digital markets are so difficult to understand that virtually no users do so. Although users regularly leave Facebook to prevent it from invading their privacy, it is unlikely they will do the same to prevent the social network from distorting competition by selling their data.

Data neutrality?

“One of our Chair’s key messages is the observation of a total lack of awareness of the influence of data brokers,” Patrick Waelbroeck continues. “No one is pondering this issue of data brokers manipulating market competition. Not even regulators. Yet existing mechanisms could serve as a source of inspiration for countering this phenomenon.” The concept of net neutrality, for example, which in theory enables everyone to have the same access to all online services, could inspire a form of data neutrality. It would prevent certain data brokers or digital stakeholders from deciding to favor certain companies over others by providing them with their data analysis.

Read more on I’MTech: What is Net Neutrality?

Another source of inspiration for regulation is the natural resource market. Some resources are considered common goods. If only a limited number of people have access to a natural resource, competition is distorted, and a refusal to sell can be sanctioned. Finally, an equivalent of certain intellectual property mechanisms could be applied to data. Certain patents, which are necessary for complying with a standard, are regarded as raw materials and are therefore protected. The companies holding these “essential patents” are required by regulation to grant a license to all who want to use them at a reasonable and non-discriminatory rate.

The value of the data involved in digital mergers and acquisitions

In the meantime, pending regulation, the lack of knowledge about data brokers among competition authorities is leading to dangerous collateral damage. Unaware of the true value of certain mergers and acquisitions, like those between Google and DoubleClick, WhatsApp and Facebook, or Microsoft and LinkedIn, competition authorities use a traditional market analysis approach.

They see the two companies as belonging to different markets (for example, WhatsApp as an instant messaging service and Facebook as a social network) and generally conclude that they would not gain any more market power by joining forces than they had individually. “That is entirely false!” Patrick Waelbroeck protests. “They are absolutely in the same sector, that of data brokerage. After each of these mergers, the companies combined their user databases and increased the number of their users.”

“We must view the digital world through a new lens,” the researcher concludes. “All of us, economists, regulators, politicians and citizens, must understand this new data economy and its significant influence on markets and competition. In fact, in the long term, all companies, even the most traditional ones, will be data brokers. Those unable to follow suit may well disappear.”

Article by Emmanuelle Bouyeux for I’MTech

 


Servitization of products: towards a value-creating economy

Businesses are increasingly turning towards selling the use of their products. This shift in business models affects SMEs and major corporations alike. In practice, this has an impact on all aspects of a company’s organization, from its design chain to developing collaborations, to rolling out new offerings for customers. Xavier Boucher and his colleagues, researchers in industrial systems design and optimization at Mines Saint-Étienne, help companies navigate this transformation.  

 

Selling uses instead of products. This shift in the business world towards a service economy has been emerging since the early 2010s. It is based on new offerings in which the product is integrated within a service, with the aim of increasing value creation. Leading manufacturers, such as Michelin, are at the forefront of this movement. With its Michelin Fleet Solutions offering, the company has moved from selling tires to selling kilometers to European commercial trucking fleets. But the trend also increasingly affects SMEs, especially as it is recognized as having many benefits: new opportunities to create value and drive growth, positive environmental impacts, stronger customer loyalty, and greater employee motivation and involvement.

However, such a transition is not easy to implement and requires a long-term strategy. What innovation strategies are necessary? What services should be rolled out and how? What company structures and expertise must be put in place? It all depends on market developments, the economic impacts of such a transformation on a company and the means to implement it, whether alone or with partners, to achieve a sustainable transformation. More generally, shifting a company’s focus to a product-service system means changing its business model. With his team, Xavier Boucher, a researcher at Mines Saint-Étienne, supports companies in this shift.

In the Rhône-Alpes region where he carries out his research, the majority of manufacturers are developing a service dimension to varying degrees through logistics or maintenance activities. “But out of the 150,000 companies in the region, only a few hundred have truly shifted their focus to selling services and to product life-cycle management,” explains the researcher. Nevertheless, his team is facing increasing demand from manufacturers.

Tailored support for companies

The transition from a model of selling products to a product-service system involves a number of issues of company organization, reconfiguration of the production chain and customer relationship management, which the researchers analyze using models. After a diagnostic phase, the goal is often to support a company with its transformation plan. The first step is changing how a product is designed. “When we design a product, we have to consider all the transformations that will make it possible to develop services throughout its use and throughout all the other phases of its life cycle,” explains Xavier Boucher. As such, it is often useful to equip a product with sensors so that its performance and life cycle can be traced when in customers’ possession. But production management is also impacted: this business strategy is part of a broader context of agility. The goal? Create value that is continually evolving through flexible and reconfigurable industrial processes in alignment with this purpose.

To this end, Xavier Boucher’s team develops different tools ranging from strategic analysis to decision support tools to bring a solution to market. “For example, we’ve created a business model that can be used while developing a new service offering to determine the right value creation chain to put in place and the best way for the company to sell the service,” says the researcher. Using a generic simulation platform and a customization approach, the researchers tailor these economic calculators to manufacturers’ specific circumstances.
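As a toy illustration of what such an economic calculator compares, the sketch below simulates cumulative revenue from selling a product outright versus selling its use; all the figures and the pay-per-use model are invented for the example, since the team’s actual models are tailored to each manufacturer.

```python
# Toy comparison of two business models over a product's life cycle:
# (1) sell the product once, plus a yearly maintenance contract;
# (2) sell its use as a monthly service that bundles maintenance and consumables.
# All figures are invented for illustration.
MONTHS = 60  # 5-year life cycle

PRODUCT_PRICE = 50_000        # model 1: one-off sale
YEARLY_MAINTENANCE = 2_000    # model 1: maintenance contract
MONTHLY_SERVICE_FEE = 1_400   # model 2: pay-per-use subscription

def cumulative_product_sale(month: int) -> int:
    return PRODUCT_PRICE + YEARLY_MAINTENANCE * (month // 12)

def cumulative_service(month: int) -> int:
    return MONTHLY_SERVICE_FEE * month

for month in (12, 36, MONTHS):
    print(f"month {month:3d}:  product sale = {cumulative_product_sale(month):7d}   "
          f"service = {cumulative_service(month):7d}")
```

The crossover point between the two curves is exactly the kind of question such a calculator is built to answer, once real costs, risks and customer behavior are plugged in.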

This is important since each situation is unique and requires a tailored business model. Indeed, marketing a mobile phone and deploying a cleaning robot will not rely on the same channels of action. The latter calls for services including customized installation for customers, maintenance and upgrades, management of consumables, and measuring and guaranteeing cleaning quality. Moreover, companies vary in terms of their progress toward servitization. The researchers may collaborate with a start-up that has adopted a product-service model from the outset or with companies with established business models looking for a tailored, long-term transformation.

What are the steps toward a product-service system?

Companies may call upon the expertise of the Mines Saint-Étienne researchers at various stages in their progress toward this transition. For example, a manufacturer may be considering in advance how selling a service would impact its economic balance. Would such a change be the right move based on its situation? In this case, the models establish a progressive trajectory for its transformation and break it down into steps.

Another type of collaboration may be developed with a company that is ready to move towards selling services and is weighing how to structure its initial offering. The researchers use their simulation tools to compare three possible business models: the first is to market the product and add the sale of services throughout its life cycle; the second is to shift the company’s business to selling services and integrate the product within a service; and the third is to sell performance to customers.

The researchers helped the SME Innovtec develop an autonomous robot offering for industrial cleaning. “We developed computer-aided design tools: modeling, organizational scenarios, simulations. The goal was to expand the traditional product-oriented tools by adding a service package dimension,” explains Xavier Boucher. The company thus benefitted from different scenarios: identifying technologies to ensure its robots’ performance, determining which services were appropriate for this new product etc. But the projections also address topics beyond the production chain, such as whether it should integrate the new services within the current company or create a new legal structure to deploy them.

A final possibility is a company that has already made the transition to servitization but is seeking to go further. This is the case for Clextral, an SME that produces extrusion machines used by the food processing industry, which was supported through the European DiGiFoF project (Digital Skills for Factories of the Future). Its machines have a long service life and provide an opportunity to create value through maintenance and upgrading operations. The researchers have therefore identified a service development focus based on retrofitting, a sort of technical upgrade: exchanging obsolete parts while maintaining a machine’s configuration, or modifying the configuration of a piece of equipment to allow for a different industrial use than originally intended.

Digitization and risk management in a multi-stakeholder context

The current trend towards servitization has been made possible by the digitization of companies. The Internet of Things has enabled companies to monitor their machines’ performance. In several years’ time, it may even be possible to fully automate the monitoring process, from ordering spare parts to scheduling on-site interventions. Smart product-service systems, which combine digitization and servitization, are a research focus and a central part of the work carried out with elm.leblanc, a company seeking to put in place real-time information processing in order to respond to customers more quickly.

However, this change in business models affects not only a company, but its entire ecosystem. For example, elm.leblanc is considering sharing costs and risks between various stakeholders. One option would be to bring in partner companies to implement this service. But how would the economic value or brand image be distributed between the partners without them taking over the company’s market? Research on managing risk and uncertainty is of key importance for Xavier Boucher’s team. “One of the challenges of our work is the number of potential failures for companies, due to the difficulties of effectively managing the transition. Although servitization has clearly been shown to be a path to the future, it is not associated with immediate, consistent economic success. It’s essential to anticipate challenges.”

Article written (in French) by Anaïs Culot for I’MTech


Data sharing: an important issue for the agricultural sector

Agriculture is among the sectors most affected by digital transition, given the amount of data it possesses. But for the industry to benefit from its full potential, it must be able to find a sound business model for sharing this data. Anne-Sophie Taillandier, the Director of TeraLab — IMT’s big data and AI platform — outlines the digital challenges facing this sector in five answers.

 

How important of an issue is data in the agricultural sector?

Anne-Sophie Taillandier: It’s one of the most data-intensive sectors and has been for a long time. This data comes from tools used by farmers, agricultural cooperatives and distribution operations, all the way to the boundary with the agrifood industry downstream. Data is therefore found at every step. It’s an extremely competitive industry, so the economic stakes for using data are huge.

How can this great quantity of data in the sector be explained?

AST: Agriculture has used sensors for a long time. The earliest IoT (Internet of Things) systems were dedicated to collecting weather data, and were therefore quickly used in farming to make forecasts. And tractors are state-of-the-art vehicles in terms of intelligence – they were among the earliest autonomous vehicles. Farms also use drones to  survey land. And precision agriculture is based on satellite imagery to optimize harvests while using as few resources as possible. On the livestock farming side, infrastructures also have a wealth of data about the quality and health of animals. And all of these examples only have to do with the production portion.

What challenges does agriculture face in relation to data?

AST: The tricky question is determining who has access to which data and in what context. These data sharing issues arise in other sectors too, but there are scientific hurdles that are specific to agriculture. The data is heterogeneous: it comes from satellites, ground-based sensors, information about markets, etc. It comes in the form of texts, images and measurements. We must find a way to make these data sources communicate with one another. And once we’ve done so, we have to make sure that all the stakeholders in the industry can benefit from it, by accessing a level of data aggregation that does not exceed what the other stakeholders wish to make available.

How can the owners of the data be convinced to share it?

AST: Everyone has to find a business model they find satisfactory. For example, a supermarket already knows its sales volumes – it has its own processing plants and different qualities of products. What it’s interested in is obtaining data from slaughterhouses about product quality. Similarly, livestock farmers are interested in sales forecasts for different qualities of meat in order to optimize their prices. So we have to find these kinds of virtuous business models to motivate the various stakeholders. At the same time, we have to work on spreading the word that data sharing is not just a cost. Farmers must not spend hours every day entering data without understanding its value.

What role can research play in all this? What can a platform like TeraLab contribute?

AST: We help highlight this value, by demonstrating proof of concept for business models and considering potential returns on investment. This makes it possible to overcome the natural hurdles to sharing data in this sector. When we carry out tests, we see where the value lies for each party and which tools build trust between stakeholders — which is important if we want things to go well after the research stage. And with IMT, we provide all the necessary digital expertise in terms of infrastructure and data processing.

Learn more about TeraLab, IMT’s big data and AI platform.

oil pollution

Marine oil pollution detected from space

Whether it is due to oil spills or cleaning out of tanks at sea, radar satellites can detect any oil slick on the ocean’s surface. Over 15 years ago, René Garello and his team from IMT Atlantique worked on the first proof of concept for this idea to monitor oil pollution from space. Today, they are continuing to work on this technology, which they now use in partnership with maritime law enforcement. René Garello explains to us how this technology works, and what is being done to continue improving it.

Most people think of oil pollution as oil spills, but is this the most serious form of marine oil pollution?

René Garello: The accidents which cause oil spills are spectacular, but rare. If we look at the amount of oil dumped into the seas and oceans over a year or a decade, the main source of pollution is deliberate dumping, or washing of tanks at sea, which releases 10 to 100 times more oil than oil spills. Although it does not get as much media coverage, the oil released by tank washing arrives on our coastlines in exactly the same way.

Are there ways of finding out which boats are washing out their tanks?

RG: By using sensors placed on satellites, we have a large-scale surveillance technology. The sensors allow us to monitor areas of approximately 100 km². The maritime areas close to the coast are our priority, since this is where the tankers stay, as they cannot sail on the high seas. Satellite detection methods have improved a lot over the past decade. Fifteen years ago, detecting tank dumping from space was a fundamental research question. Today, the technology is used by state authorities to fight this practice.

How does the satellite detection process work?

RG: The process uses imaging radar technology, which has been available in large quantities for research purposes since the 2000s. This is why IMT Atlantique [at the time called Télécom Bretagne] took part in the first fundamental work on these large volumes of data around 20 years ago. The satellites emit a radar wave towards the ocean’s surface, which is reflected back towards the satellite. The reflection of the wave differs depending on the roughness of the water’s surface. The roughness is increased by the wind, currents or waves, and decreased by cold water, algae masses, or oil released by tank dumping. When the satellite receives the radar wave, it reconstructs an image of the water that displays the roughness of the surface. Natural, accidental or deliberate events which reduce the roughness appear as a black mark on the image. This work was carried out as a European research project, in partnership with the European Space Agency and with Boost Technology, a startup founded in our laboratories and since acquired by CLS, and it demonstrated the value of this technique for detecting oil slicks.
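As a highly simplified sketch of this dark-spot principle (not the operational processing chain), the example below thresholds a simulated speckled radar image and keeps only large connected regions of low backscatter; the image, threshold and size limit are all invented for illustration:

```python
# Simplified dark-spot sketch: a smoother surface reflects less energy back to the
# radar, so it appears as dark pixels in the SAR intensity image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.gamma(shape=4.0, scale=1.0, size=(200, 200))  # speckled "sea clutter"
image[80:90, 20:160] *= 0.2  # insert an elongated low-backscatter band (a "slick")

# Flag pixels well below the sea background level
threshold = 0.4 * np.median(image)
dark = image < threshold

# Group dark pixels into connected regions and keep only the large ones
labels, n = ndimage.label(dark)
sizes = ndimage.sum(dark, labels, index=list(range(1, n + 1)))
candidates = [i + 1 for i, s in enumerate(sizes) if s > 200]
print(f"{len(candidates)} candidate dark region(s) detected")
```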

If several things can alter the roughness of the ocean’s surface, how do you differentiate an oil slick from an upwelling of cold water or algae?

RG: It is all about investigation. You can tell whether it is an oil slick from the size and shape of the black mark. Usually, the specialist photo-interpreter behind the screen has no doubt about the source of the mark, as an oil slick has a long, regular shape unlike any natural phenomenon. But this is not enough. We have to carry out rigorous tests before raising an alert. We cross-reference our observations with datasets to which we have direct access: the weather, the temperature and state of the sea, the wind, algae cycles and so on. All of this has to be done within 30 minutes of the slick being discovered, in order to alert the maritime police quickly enough for them to take action. This operational task is carried out by CLS, using the VIGISAT satellite radar data reception station it operates in Brest, which also involves IMT Atlantique. We also work with Ifremer, IRD and Météo France to make the investigation faster and more efficient for the operators.
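One of the shape cues mentioned here, elongation, is easy to quantify. The function below is a hypothetical illustration (not the team's actual criterion): it compares the principal axes of a detected region's pixel cloud, so a long, regular slick scores far higher than a roughly round patch of algae or cold water:

```python
# Illustrative shape cue: how elongated is a detected dark region?
import numpy as np

def elongation(mask: np.ndarray) -> float:
    """Ratio of the principal axes of a binary region (>= 1; larger = more elongated)."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return float(np.sqrt(eigvals[0] / max(eigvals[1], 1e-9)))

# A thin horizontal band (slick-like) vs. a roughly round patch (e.g. an algae bloom)
slick_like = np.zeros((100, 100), dtype=bool)
slick_like[48:52, 10:90] = True
round_patch = np.zeros((100, 100), dtype=bool)
yy, xx = np.ogrid[:100, :100]
round_patch[(yy - 50) ** 2 + (xx - 50) ** 2 < 400] = True

print(elongation(slick_like), elongation(round_patch))  # high value vs. close to 1
```

In practice, of course, the final call is made by the photo-interpreter using the ancillary data listed above, not by a single geometric score.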

Detecting an oil spill is one thing, but how easy is it to then find the boat responsible for the pollution?

RG: Radar technology and data cross-referencing allow us to locate an oil slick accurately. However, the radar does not tell us which boat is responsible for the pollution. When the information is transmitted quickly enough, the authorities can sometimes catch the boat directly responsible, but sometimes we only find the slick several hours after it has been created. To solve this problem, we cross-reference the radar data with the Automatic Identification System for vessels, or AIS. Every boat has an AIS transponder which provides GPS information about its position at sea. By identifying where the slick started and when it was created, we can identify which boats were in the area at the time and could have carried out the dumping.
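This cross-referencing step can be pictured as a simple filter over AIS position reports. The sketch below uses entirely hypothetical vessels, coordinates and thresholds; the operational analysis is naturally much richer:

```python
# Toy cross-referencing of a slick's estimated origin with AIS reports: keep vessels
# that were within a given radius of the origin around the estimated dumping time.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class AisReport:
    mmsi: str          # vessel identifier broadcast by the AIS transponder
    lat: float
    lon: float
    time: datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def candidate_vessels(reports, slick_lat, slick_lon, slick_time,
                      radius_km=10.0, window=timedelta(hours=2)):
    return [r for r in reports
            if abs(r.time - slick_time) <= window
            and haversine_km(r.lat, r.lon, slick_lat, slick_lon) <= radius_km]

reports = [
    AisReport("VESSEL_A", 48.10, -5.20, datetime(2023, 6, 1, 9, 30)),
    AisReport("VESSEL_B", 48.90, -4.10, datetime(2023, 6, 1, 9, 45)),
]
suspects = candidate_vessels(reports, 48.12, -5.18, datetime(2023, 6, 1, 10, 0))
print([r.mmsi for r in suspects])
```

A real system would presumably match against time-interpolated vessel tracks rather than individual position reports, but the principle is the same.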

It is possible to identify a boat suspected of dumping (in green) amongst several vessels in the area (in red) using satellites.


 

That means being able to date the slick’s creation and track how it has changed at sea.

RG: We also work in partnership with oceanographers and geophysicists. How does a slick drift? How does its shape change over time? To answer these questions, we again use data about the currents and the wind. From this data, physicists use fluid mechanics models to predict how the sea affects an oil slick. We are very good at retracing the evolution of the slick over the hour before it is detected. When we combine this with AIS data, we can eliminate vessels whose position at the time was incompatible with the behavior of the oil on the surface. We are currently trying to push this further back in time.
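As a toy illustration of this backtracking (not the fluid-mechanics models the physicists actually use), surface oil is often approximated as moving with the current plus a small fraction of the wind; stepping that velocity backwards from the detected position gives a rough origin. All values below are invented:

```python
# Toy backward-drift estimate: drift velocity = current + wind_factor * wind,
# integrated backwards in time from the position where the slick was detected.
import numpy as np

def hindcast_origin(pos, current, wind, hours, wind_factor=0.03, dt_h=0.1):
    """Step the drift velocity backwards from the detected position.

    pos: detected position (east, north) in km; current/wind: vectors in km/h.
    With constant fields this reduces to pos - drift * hours; the loop is kept so
    that time-varying current and wind fields could be plugged in at each step.
    """
    pos = np.asarray(pos, dtype=float)
    drift = np.asarray(current, dtype=float) + wind_factor * np.asarray(wind, dtype=float)
    for _ in range(int(hours / dt_h)):
        pos -= drift * dt_h   # step backwards in time
    return pos

# Slick detected at local coordinates (0, 0) km, one hour after a suspected dumping
origin = hindcast_origin(pos=[0.0, 0.0], current=[0.5, -0.2], wind=[20.0, 10.0], hours=1.0)
print(origin)  # rough estimate of where the slick was when it was created
```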

Is this research specific to oil spills, or could it be applied to other subjects?

RG: We would like to apply everything we have developed for oil to other types of pollution. At the moment we are interested in sargassum, a type of brown seaweed which is often found on the coast and whose proliferation increases with global warming. The sargassum invades the coastline and releases harmful gases when it decomposes. We want to know whether we can use radar imaging to detect it before it arrives on the beaches. Another issue we are working on involves microplastics. They cannot be detected by satellites directly, so we are trying to find out whether they modify the characteristics of the water in ways we can identify through secondary phenomena, such as a change in the roughness of the surface. We are also interested in monitoring and predicting the movement of large marine debris. The possibilities are endless!
