
Optics as a key to understanding rogue waves

Rogue waves are powerful waves that erupt suddenly. They are rare, but destructive. Above all, they are unpredictable. Surprisingly, researchers have been able to better understand these fascinating waves by studying similar phenomena in fiber optic lasers.

 

Before scientists began measuring and observing them, rogue waves had long been regarded as legends. They can reach a height of 30 meters, forming a wall of water in front of ships. French explorer Dumont d’Urville faced one such wave in the southern hemisphere. More recently, in 1995, the captain of the transatlantic liner Queen Elizabeth 2 described a wave as a “solid wall of water”, adding that he felt he was sailing the ship “straight into the cliffs of Dover”. These waves are also a major cause of containers lost at sea.

A genius idea

But the rare and unpredictable nature of these mysterious waves makes them difficult to study and nearly impossible to predict. Tests have been conducted in specially designed pools, but the resulting waves are much smaller and do not sufficiently reflect reality. Theoretical models, on the other hand, are not accurate enough.

However, in 2007, Daniel Solli and his team from the University of California had the ingenious idea of comparing the propagation of ocean waves with that of light pulses in optical fibers. Ocean waves and light pulses are both wave phenomena and are governed by the same laws of physics. And it is much easier to study light pulses, since all the parameters can be easily controlled: wavelength, intensity, the type of fiber used, etc. Furthermore, thousands of pulses can be studied every second, making it possible to observe rare events.
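The analogy has a precise mathematical basis: in both cases, the slowly varying envelope of the wave obeys an equation of nonlinear Schrödinger type. One commonly quoted fiber-optics form is:

```latex
% Nonlinear Schrödinger equation, fiber-optics form.
% A(z,T) is the pulse envelope, z the distance along the fiber,
% T the time in the frame moving with the pulse,
% beta_2 the group-velocity dispersion and gamma the Kerr nonlinearity.
\[
  i\,\frac{\partial A}{\partial z}
  \;-\; \frac{\beta_2}{2}\,\frac{\partial^{2} A}{\partial T^{2}}
  \;+\; \gamma\,|A|^{2}A \;=\; 0
\]
```

A deep-water wave group obeys an equation of the same form, with coefficients set by gravity and the carrier wavenumber, which is why extreme ocean events can be reproduced and studied in an optical fiber.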

Real time

Now, a group of researchers including Arnaud Mussot from IMT Lille Douai has published an article on this subject in the scientific journal Nature Physics, reviewing research on the analogies between oceanography and optics that is helping to better understand rogue waves.

“Many experiments have been conducted in optics,” Arnaud Mussot explains. “For these experiments, we sent laser pulses into optical fibers and analyzed the speed of these pulses at the output of the fiber. These observations were made in real time, over extremely short periods of time: a few tens of femtoseconds, which is much less than a billionth of a millisecond.”

These experiments showed that there are several ways to create rogue waves. One of the most effective methods is to make several waves crash into each other. But only wave collisions from certain angles, directions and amplitudes generate rogue wave phenomena. However, these experiments do not provide all the answers, since some of the more powerful rogue waves predicted by theorists have not yet been observed.

Predicting rogue waves

In the long term, a better understanding of rogue waves should make it possible to predict them more reliably and prevent certain accidents. “Certain companies are currently developing shipboard radar that can map the state of the sea,” Arnaud Mussot explains. “This data is fed into a computer program that predicts how the sea will evolve over the next few minutes. The ship can then modify its course to avoid a rogue wave or mitigate its effects. The more we improve our knowledge and calculations, the further in advance we will be able to predict these waves.”

This research also benefits optics itself. It offers a better understanding of how high-power lasers start up and of certain laser processes whose characteristics change as the laser power increases.

Article written in French by Cécile Michaut for I’MTech.

 


Fine against Facebook: How the American FTC transformed itself into a “super CNIL”

Article written in partnership with The Conversation
Winston Maxwell, Télécom Paris – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he US consumer protection regulator has issued a record $5 billion fine to Facebook for personal data violations. This fine is by far the largest ever issued for a personal data violation. Despite some members of the US Congress saying that this is not enough, the sanction has allowed the FTC to become the most powerful personal data protection regulator in the world. And yet, the USA does not have a general data protection law.

To transform itself into a “super CNIL”, the American FTC relied on a 1914 text on consumer protection that forbids any unfair or deceptive practice in business. France has a similar law in its Consumer Code. In the USA, there is no equivalent of the CNIL at the federal level, so the FTC has taken on this role.

It was not easy for the FTC to turn a general text on consumer protection into a body of law on personal data protection. The agency faced two obstacles. First, it had to create a legal doctrine that was clear enough for businesses to understand what constitutes an unfair or deceptive practice in terms of personal data. Second, it had to find a way to impose financial sanctions, since the 1914 FTC Act did not provide for any.

Proceedings against Facebook

A Facebook office. Earthworm/Flickr, CC BY-NC-SA

To clearly define a deceptive personal data practice, the FTC created a legal doctrine that punishes any business that “fails to keep its promises” regarding personal data. The FTC had brought a first lawsuit against Facebook in 2011, accusing it of deceptive practices because of the discrepancy between what Facebook told consumers and how the company acted. To spot a deceptive practice, the FTC examines each sentence of a company’s privacy policy to identify a promise, even an implied promise, that is not being kept.

An unfair practice is more difficult to prove, which explains why the FTC prefers to invoke deceptive rather than unfair practices. The FTC considers an unfair practice to be any practice that would be both surprising and not easily avoidable for the average consumer.

The FTC Act does not allow the FTC to directly impose a financial penalty. To do so, it has to ask the US Department of Justice to file a lawsuit. To work around this limitation, the FTC encourages settlement agreements: the FTC Act allows the regulator to impose sanctions directly in the event of a breach of such an agreement. The most important thing for the FTC is therefore to get an agreement signed at the time of a company’s first violation, which puts it in a position of strength in the case of a second violation. The Facebook case follows this pattern. Facebook signed a settlement agreement with the FTC in 2012. The FTC has now found that Facebook violated this agreement by sharing personal data with Cambridge Analytica. The violation of the 2012 agreement allowed the FTC to hit back strongly and negotiate a new 20-year agreement, this time with a $5 billion fine.

Settlement agreements

If settlement agreements allow the FTC to increase its powers, why do companies sign them? By signing, a company puts itself in a weaker position, since the agreement gives the FTC leverage in the event of a second violation. Even so, most companies prefer to negotiate an agreement with the FTC rather than go to trial. Beyond the high cost of a lawsuit and the damage it does to a company’s image, losing a case against the US government opens the door for other parties to sue, in particular through consumer class actions. Companies fear this snowball effect. In addition, a settlement agreement with the FTC does not set a precedent, since the company does not admit guilt in the agreement and can therefore continue to claim its innocence in other lawsuits.

As well as increasing the FTC’s sanctioning powers, settlement agreements allow the regulator to establish detailed requirements for personal data protection. A settlement agreement with the FTC can become a mini-GDPR, binding the company for 20 years, which is the usual duration of these agreements.

The new agreement states that Facebook must obtain users’ explicit consent before using their facial recognition data for any purpose or sharing their mobile phone numbers with third parties. The 2012 agreement already required Facebook to carry out impact assessments, and this obligation was reinforced in the 2019 agreement. The new agreement requires Facebook to put in place a committee of independent directors to oversee the implementation of the agreement within the company. Facebook’s corporate bylaws will also have to be amended so that Mark Zuckerberg is not the only person able to dismiss those responsible for overseeing these obligations. The new agreement requires Mark Zuckerberg to sign a personal declaration stating that the company will comply with the commitments made in the agreement; a false declaration would expose him to criminal penalties, including imprisonment. Most importantly, the agreement obliges Facebook to document all its risk reduction measures and to have an independent auditor carry out an audit every two years.

The 2012 agreement already included an audit every two years. Following the Cambridge Analytica investigation, the EPIC association obtained a copy of an audit covering the 2015-2017 period. The audit did not identify any anomalies relating to data sharing with Cambridge Analytica and other Facebook business partners. This led the FTC to question the effectiveness of the audits and to strengthen the audit requirements in the new 2019 agreement.

Although the 2012 settlement agreement did not prevent Facebook from crossing the line in the Cambridge Analytica scandal, it did allow the FTC to strongly intervene and sanction this second violation. As well as the $5 billion fine, the new 2019 agreement contains several accountability measures. These measures ensure that the commitments agreed by Facebook are applied at every level of the company and that any violation will be detected quickly. Facebook’s management will not be able to say that they were not made aware and Facebook will have to adhere to these governance commitments for the next 20 years.

In the USA, it is common for companies to negotiate agreements with the government. This process is sometimes criticized as a form of forced negotiation. The $8.9 billion fine against BNP Paribas was a “negotiated” agreement, although whether the negotiation between the French bank and the US government was balanced is questionable. In Europe, there are no settlement agreements for personal data violations, but they are common in competition law.

[divider style=”dotted” top=”20″ bottom=”20″]

Winston Maxwell, Télécom Paris – Institut Mines-Télécom

The original version of this article (in French) was published in The Conversation. Read the original article


An autonomous contact lens to improve human vision

Two teams from IMT Atlantique and Mines Saint-Étienne have developed an autonomous contact lens powered by an integrated flexible micro-battery. This world first opens up new prospects for healthcare and paves the way for other human-machine interfaces.

 

Human augmentation, a field of research that aims to enhance human abilities through technological progress, has always fascinated authors of science fiction. It is a recurring theme in TV series such as Black Mirror and The Six Million Dollar Man. But beyond dystopia and action adventure, it also interests science. The recent findings from teams at IMT Atlantique and Mines Saint-Étienne are one of the latest examples of this.

Jean-Louis de Bougrenet is the head of the Optics Department at IMT Atlantique. Thierry Djenizian is the head of the Flexible Electronics Department at the Centre Microélectronique de Provence at Mines Saint-Étienne. Together, the two scientists have achieved a world first: they have developed the cyborg lens, an autonomous contact lens with a built-in flexible micro-battery.

The origins of the cyborg lens

In the medical division of his department, Jean-Louis de Bougrenet was working on devices to improve people’s vision. In doing so, the researcher and his team used oculometers, instruments that analyze how the eyes behave, measure an individual’s fatigue or stress, and determine the direction of their gaze. These devices are used in technology such as VR headsets, which can interpret the direction of someone’s gaze or a blink as a command.

However, to be truly efficient, the oculometer must be able to do two things. Firstly, it has to be extremely precise. Secondly, to make sure the VR headset does not contain any additional components that weigh the user down, the oculometer has to be as light as possible. These factors made it clear to the scientists that the oculometer had to be placed directly in the user’s eye. “The contact lens very quickly emerged as a way to make human augmentation possible, since it allowed humans to carry a smart device directly on them,” explains the optical researcher. This system is made possible by advances in nanotechnology.

Thierry Djenizian has spent the last four years working on integrating electric components onto flexible and stretchable surfaces. His research led to the patenting of a flexible micro-battery. However, this device wasn’t originally made to be used on a contact lens.

Read on I’MTech: Towards a new generation of lithium batteries?

After becoming interested in Thierry Djenizian’s work in flexible electronics, Jean-Louis de Bougrenet got in touch with his colleague at Mines Saint-Étienne. During a visit to the Centre Microélectronique de Provence in Gardanne (Bouches-du-Rhône), he saw the advances made in flexible micro-batteries. This led to the idea of integrating this small device directly into the cyborg contact lenses developed at IMT Atlantique, an innovation for which the two scientists were awarded a joint patent.

A flexible micro-battery directly placed in a contact lens

The device is a world first, as an energy storage source is incorporated directly into the small ocular device. “Whenever functions are performed locally by an autonomous system, the system must have energy autonomy,” explains Jean-Louis de Bougrenet. Until now, ‘smart’ contact lenses have been powered by an external energy source, such as a magnetic induction system using a coil placed in the device. The problem with this approach is that the device stops working as soon as the energy source is cut off. In the device developed by the two scientists, the lens is instead powered by a micro-battery, which can also be paired with an external source for recharging or when more energy is needed.

Thierry Djenizian’s aim was to adapt the results he had already obtained in his previous studies to the design and performance constraints of an ocular device, building on earlier work that had focused mainly on innovation and design.

“Normally, flexible batteries are made up of electrodes connected to each other by ductile coils which carry the current. Our device uses the entire surface area occupied by the coils by placing microelectrodes directly on these wires,” explains the researcher at Mines Saint-Étienne. In practice, during the manufacture of flexible batteries, electrodes made from several composite materials are deposited on an aluminum foil and shaped by laser ablation into regularly spaced vertical ‘micropillars’. For the lenses, the same technique is used to manufacture the coils that support the microelectrodes, giving the battery great flexibility.

 

3D illustration of the flexible coils which carry the microelectrodes.

 

The scientists also aim to use innovative materials to improve the performance of the device. One example is the electrolyte, which separates the battery’s two electrodes: the polymers currently used will eventually be replaced by self-healing materials, which will offset the strain put on the battery when the device is being charged.

So far, the scientists have succeeded in making a 0.75 cm² battery integrated into a scleral contact lens. This lens rests on the ‘white’ of the eye (the sclera) and is both bigger and more stable than a standard contact lens. To make the device, the area of the lens in front of the iris is removed and replaced with microelectronic components, including the energy storage device. This method has the distinct advantage of not obstructing the wearer’s field of vision, as light can still enter the eye through the pupil unimpeded. The micro-battery has already proved its worth, as it can power an LED for several hours. “An LED is a textbook example since it is generally the most energy-intensive microelectronic device,” says Thierry Djenizian.

Already, there are many ways to improve this technology

While the current device is already ground-breaking, the two researchers continue to try to perfect it. The priority for the Flexible Electronics Department at Mines Saint-Étienne is optimizing the system, as well as improving its reliability. “From a technological perspective, we still have a lot of work to do,” states the head of department. “We have the concept, but improving the device is similar to taking a CRT television and turning it into a modern flat screen TV.”

The next step has already been decided. The scientists want to develop miniature antennae in order to charge the battery and make the lens a communicative device, which will allow it to transmit information from the oculometer. Another option that is currently being studied uses an infrared laser to follow the user’s eyes. This laser would be activated by blinking and would show the direction that the user is looking.

Assistance for people with bad eyesight and professional uses

According to Jean-Louis de Bougrenet, the innovation will allow them to “take localized engineering to the next level.” The project has a wide range of potential uses, including helping visually impaired people. The scientists have teamed up with the Institut de la Vision with the aim of developing a device that can improve the sensory abilities of visually impaired people. The cyborg lens could also be used in VR headsets as a way of making visual commands. Discussions have already started with key players in this industry.

In the future, the lenses could have several other applications. In the automotive industry, for example, they could be used to monitor a driver’s attention or level of fatigue. However, “at the moment we are only discussing how the lenses could be used professionally, or for people with disabilities. If we make the lenses available to the general public, for example to be used when driving, then we raise issues that go far beyond the technical aspects of the device, because it involves people’s consent to wear the device, which is not a trivial matter,” states Jean-Louis de Bougrenet.

Even if the cyborg lens can help humans, there is still some way to go before the device can be seen in an entirely positive light.

This article was written (in French) by Bastien Contreras for I’MTech

 


20 terms for understanding quantum technology

Quantum mechanics is central to much of the technology we use every day. But what exactly is it? The 11th Fondation Mines-Télécom booklet explores the origins of quantum technology and sheds light on its practical applications and the issues at stake. To clarify the concepts addressed, the booklet includes a glossary, from which this list is taken.

 

Black-body radiation – Thermal radiation of an ideal object absorbing all the electromagnetic energy it receives.

Bra-ket notation (from the word bracket) – Formalism that facilitates the writing of equations in quantum mechanics.

Coherent detectors – Equipment used to detect photons based on amplitude and the phase of the electromagnetic signal rather than interactions with other particles.

Decoherence – Phenomenon by which each possibility of a quantum superposition interacts with its environment in such a complex way that the coherence between the different possibilities is lost, making the superposition unobservable.

Entanglement – Phenomenon in which two quantum systems present quantum states that are dependent on one another, regardless of the distance separating them.

Locality (principle of) – The idea that two distant objects cannot directly influence each other.

Momentum – Product of an object’s mass and its velocity vector at a given time.

NISQ (Noisy Intermediate-Scale Quantum) – The current class of quantum computers, which have a limited number of qubits and are prone to noise.

Observable (noun) – Concept in the quantum world comparable to a physical value (position, momentum, etc.) in the classical world.

Quanta – The smallest indivisible unit (of energy, momentum, etc.)

Quantum Hall effect – The classical Hall effect is the voltage created across a material carrying an electric current when the material is immersed in a magnetic field. Under certain conditions, this voltage increases in discrete steps rather than continuously: this is the quantum Hall effect.

Quantum state – Unlike the state of a classical physical system, which is fully defined by measured physical values such as position and speed, a quantum state provides a probability distribution for each observable of the quantum system to which it refers.

Quantum system – Refers to an object studied in a context in which its quantum properties are interesting, such as a photon, mass of particles, etc.

Qubit – Refers to a quantum system in which a given observable (the spin for example) is the superposition of two independent quantum states.
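As an illustration, using the bra-ket notation defined above, the general state of a qubit can be written as a weighted superposition of two basis states:

```latex
% General qubit state: a superposition of the basis states |0> and |1>,
% weighted by complex amplitudes alpha and beta.
\[
  |\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]
% |alpha|^2 and |beta|^2 give the probabilities of measuring 0 or 1.
```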

Spin – Like the electric charge, one of the properties of particles.

Superposition principle – Principle according to which a quantum state can simultaneously hold several values for a given observable.

The Schrödinger wave function – A fundamental concept of quantum mechanics, a mathematical function representing the quantum state of a quantum system.

Uncertainty Principle – Mathematical inequality that expresses a fundamental limit to the precision with which two physical properties of the same particle can be simultaneously known.
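For the position and momentum of a particle, this inequality takes its best-known form:

```latex
% Heisenberg uncertainty relation: sigma_x and sigma_p are the standard
% deviations of position and momentum, hbar the reduced Planck constant.
\[
  \sigma_x \,\sigma_p \;\geq\; \frac{\hbar}{2}
\]
```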

Wave function collapse – Fundamental concept of quantum mechanics that states that after a measurement, a quantum system’s state is reduced to what was measured.

Wave-particle duality (or wave-corpuscle duality) – The principle that a physical object sometimes has wave properties and sometimes corpuscular properties.



CloudButton: Big Data in one click

Launched in January 2019 for a three-year period, the European H2020 project CloudButton seeks to democratize Big Data by drastically simplifying its programming model. To achieve this, the project relies on a new cloud service that frees the final customer from having to physically manage servers. Pierre Sutra, a researcher at Télécom SudParis, one of the CloudButton partners, shares his perspective on the project.

 

What is the purpose of the project?

Pierre Sutra: Modern computer architectures are massively distributed across machines, and a single click can require computations from tens to hundreds of servers. However, it is very difficult to build this type of system, since it requires linking together many heterogeneous components. The key objective of CloudButton is to radically simplify this approach to programming.

How do you intend to do this? 

PS: To accomplish this feat, the project builds on a recent concept that is profoundly changing computer architectures: Function-as-a-Service (FaaS). FaaS makes it possible to invoke a function in the cloud on demand, as if it were a local computation. Since it uses the cloud, a huge number of functions can be invoked concurrently, and only actual usage is charged, with millisecond precision. It is a little like having your own supercomputer on demand.
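To give a feel for this programming model, here is a minimal sketch in Python. The executor below runs the functions locally and merely stands in for a FaaS backend (the function and data are invented for the example); in a real serverless setting, each call would be dispatched on demand to a short-lived cloud function, but the calling code would look just as ordinary:

```python
# Minimal sketch of the FaaS-style "map" programming model.
# A local process pool stands in for the cloud backend: the caller writes an
# ordinary map(), and the executor decides where and how the calls run.
from concurrent.futures import ProcessPoolExecutor

def count_words(document: str) -> int:
    """A small, stateless task: the kind of function FaaS runs on demand."""
    return len(document.split())

documents = [
    "rogue waves in optical fibers",
    "function as a service for big data",
    "storage layers for serverless computing",
]

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(count_words, documents))
    print(results)  # [5, 7, 5]
```

In CloudButton’s vision, the same pattern scales to thousands of concurrent invocations, billed only for the milliseconds actually used.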

Where did the idea for the CloudButton project come from?

PS: The idea came from a discussion with colleagues from the Spanish university Rovira i Virgili (URV) during ICDCS 2017 in Atlanta (International Conference on Distributed Computing Systems). We had just presented a new storage layer for programming distributed systems. This layer was attractive, yet it lacked an application that would make it a true technological novelty. At the time, the University of California, Berkeley had proposed an approach for writing parallel applications on top of FaaS. We agreed that this was what we needed to move forward. It would allow us to use our storage system with the ultimate goal of moving single-computer applications to the cloud with minimal effort. The button metaphor illustrates this concept.

Who are your partners in this project?

PS: The consortium brings together five academic partners: URV (Tarragona, Spain), Imperial College (London, UK), EMBL (European Molecular Biology Laboratory, Heidelberg, Germany), The Pirbright Institute (Surrey, UK) and IMT, as well as several industrial partners, including IBM and Red Hat. The institutes specializing in genomics (The Pirbright Institute) and molecular biology (EMBL) will be the end users of the software. They also provide us with new use cases and issues.

Can you give us an example of a use case?

PS: EMBL offers its associate researchers access to a large bank of images from around the world. These images are annotated with information on the subject’s chemical composition, obtained by combining artificial intelligence and the expertise of EMBL researchers. For now, the system must compute these annotations in advance. A use case for CloudButton would be to perform these computations on demand, for example to tailor them to user requests.

How are Télécom SudParis researchers contributing to this project?

PS: Télécom SudParis is working on the storage layer for CloudButton. The goal is to design programming abstractions that are as close as possible to those of standard programming languages, while remaining efficient under the FaaS delivery model. This research is being conducted in collaboration with IBM and Red Hat.

What technological and scientific challenges are you facing?

PS: In their current state, storage systems are not designed to handle massively parallel computations over a short period of time. The first challenge is therefore to adapt storage to the FaaS model. The second challenge is to reduce the synchronization between parallel tasks to a strict minimum in order to maximize performance. The third challenge is fault tolerance: since the computations run on large-scale infrastructure, errors occur regularly, yet these faults must be hidden in order to present a simple programming interface.

What are the expected benefits of this project?

PS: The success of a project like CloudButton can take several forms. Our first goal is to allow the institutes and companies involved in the project to resolve their computing and big data issues. On the other hand, the software we are developing could also meet with success among the open source community. Finally, we hope that this project will produce new design principles for computer system architectures that will be useful in the long run.

What are the important next steps in this project?

PS: We will meet with the European Commission one year from now for a mid-term assessment. So far, the prototypes and applications we have developed are encouraging. By then, I hope we will be able to present an ambitious computing platform based on an innovative use case.

 

[divider style=”normal” top=”20″ bottom=”20″]

The CloudButton consortium partners


Data brokers: the middlemen running the markets

Over the past five years, major digital technology stakeholders have boosted the data broker business. These middlemen collect and combine the masses of traces that consumers leave online, then offer them to the companies of their choice in exchange for income. Above all, they use this capital to manipulate markets around the world. These powerful new stakeholders remain largely misunderstood. Patrick Waelbroeck, an economist at Télécom Paris, studies this phenomenon in the context of the Values and Policies of Personal Information Chair, which he cofounded.

 

Data brokers have existed since the 1970s and the dawn of direct marketing. These data middlemen collect, sort and prepare data from consumers for companies in need of market analysis. But since the advent of the Web, data brokers like Acxiom, Epsilon and Quantum have professionalized this activity. Unlike their predecessors, they are the ones who choose the partners to whom they will sell the information. They employ tens of thousands of individuals, with turnover sometimes exceeding 1 billion dollars.

As early as 2015, in his book The Black Box Society, Frank Pasquale, a law professor at the University of Maryland, identified over 4,000 data brokers in a 156-billion-dollar market. In 2014, according to the American Federal Trade Commission (FTC), one of these companies held information on 1.4 billion transactions carried out by American consumers, and over 700 billion aggregated data elements!

Yet these staggering figures are already dated, since technology giants have joined the data broker game over the past five years. Still, “economists are taking no notice of the issue and do not understand it,” says Patrick Waelbroeck, professor of industrial economics and econometrics at Télécom Paris. In the context of the IMT Chair Values and Policies of Personal Information, he specifically studies the effect of data brokers on fair competition and the overall economy.

Opaque activities

“There are supply and demand dynamics: companies that buy, collect, modulate and build databases and sell them in the form of targeted market segments based on the customer’s needs,” the researcher adds. Technology giants have long understood that personal data is of little value on its own. A data broker’s activity entails not only finding and collecting data, online or offline. More importantly, it must combine that data to describe increasingly targeted market segments.

Five years ago, the FTC already estimated that some data brokers held over 3,000 categories of information on each American, from first and last names, addresses, occupations and family situations to intentions to purchase a car and wedding plans. But unlike “native” data brokers, technology giants do not sell this high value-added information directly: they exchange it for services and compensation. We know nothing about these transactions and activities, and it is impossible to measure their significance.

A market manipulation tool

“One of the key messages from our research has been that these data brokers, and digital technology giants especially, do not only collect data to sell or exchange,” says Patrick Waelbroeck. “They use it to alter market competition.” They are able to finely identify market potential for a company or a product anywhere in the world, giving them extraordinary leverage.

“Imagine, for example, a small stakeholder who has a monopoly on a market in China,” says the economist. “A data broker whose data analysis indicates an interest in this company’s market segment for a Microsoft or Oracle product, for example, has the power to disrupt this competitive arena. For a variety of reasons (the interest of a customer, the disruption of a competitor, etc.), it can sell the information to one of the major software companies to support them or, on the contrary, decide to support a Chinese company instead.”

As a practical example of this power, in 2018, the British Parliament revealed internal emails from Facebook. The conversations suggest that the Californian company may have favored third-party applications such as Netflix by sharing certain market data, while limiting access for smaller applications like Vine. “In economics, this is called a spillover effect on other markets,” Patrick Waelbroeck explains. “By selling more or less data to certain market competitors, data brokers can make the market more or less competitive and choose to favor or disadvantage a given stakeholder.”

In a traditional market, the interaction between supply and demand introduces a natural form of self-regulation. In choosing one brand rather than another, the consumer exercises countervailing power. Internet users could do the same, but the mechanics of digital markets are so difficult to understand that in practice they do not. Although users regularly leave Facebook to prevent it from invading their privacy, it is unlikely they will do the same to prevent the social network from distorting competition by selling their data.

Data neutrality?

“One of our Chair’s key messages is that the influence of data brokers is almost completely overlooked,” Patrick Waelbroeck continues. “No one is pondering this issue of data brokers manipulating market competition. Not even regulators. Yet existing mechanisms could be used as a source of inspiration in countering this phenomenon.” The concept of net neutrality, for example, which in theory enables everyone to have the same access to all online services, could inspire a form of data neutrality. It would prevent certain data brokers or digital stakeholders from deciding to favor certain companies over others by providing them with their data analysis.

Read more on I’MTech: What is Net Neutrality?

Another source of inspiration for regulation is the market for natural resources. Some resources are considered common goods: if only a limited number of people have access to a natural resource, competition is distorted, and refusing a commercial transaction can be sanctioned. Finally, an equivalent of certain intellectual property rules could be applied to data. Patents that are necessary to comply with a standard are regarded as a kind of raw material: the companies holding these “essential patents” are required by regulation to grant a license to all who want to use them, at a reasonable and non-discriminatory rate.

The value of the data involved in digital mergers and acquisitions

In the meantime, pending regulation, the lack of knowledge about data brokers among competition authorities is leading to dangerous collateral damage. Unaware of the true value of certain mergers and acquisitions, like those between Google and DoubleClick, WhatsApp and Facebook, or Microsoft and LinkedIn, competition authorities use a traditional market analysis approach.

They see the two companies as belonging to different markets (for example, WhatsApp as an instant messaging service and Facebook as a social network) and generally conclude that they would not gain any more market power by joining forces than they had individually. “That is entirely false!” Patrick Waelbroeck protests. “They are absolutely in the same sector, that of data brokerage. After the union of these duos, they all merged their user databases and increased the number of their users.”

“We must view the digital world through a new lens,” the researcher concludes. “All of us, economists, regulators, politicians and citizens, must understand this new data economy and its significant influence on markets and competition. In fact, in the long term, all companies, even the most traditional ones, will be data brokers. Those unable to follow suit may well disappear.”

Article by Emmanuelle Bouyeux for I’MTech

 


Interactions Materials-Microorganisms

This book is devoted to biocolonization, the biodeterioration of materials and possible improvements in their performance. Many materials age according to their use and their environment, and the presence of microorganisms can then lead to biodeterioration. However, microorganisms can also help protect structures, provided their properties are used wisely. Christine Lors, a researcher at IMT Lille Douai, is co-author of this book, published in English. Here is the publisher’s presentation.

Read on I’MTech When microorganisms attack or repair materials

[box type=”shadow” align=”” class=”” width=””]This multidisciplinary book is the result of a collective work synthesizing presentations made by various specialists during the CNRS «BIODEMAT» school, which took place in October 2014 in La Rochelle (France). It is designed for readers of a range of scientific specialties (chemistry, biology, physics, etc.) and examines various industrial problems (e.g., water, sewerage and maintaining building materials).

Metallic, cementitious, polymeric and composite materials age depending on their service and operational environments. In such cases, the presence of microorganisms can lead to biodeterioration. However, microorganisms can also help protect structures, provided their immense possibilities are mastered and put to good use.

This book is divided into five themes related to biocolonization, material biodeterioration, and potential improvements to such materials resulting in better performance levels with respect to biodeterioration:
• physical chemistry of surfaces;
• biofilm implication in biodeterioration;
• biocorrosion of metallic materials;
• biodeterioration of non-metallic materials;
• design and modification of materials.

The affiliations of the authors of the various chapters illustrate the synergy between academic research and its transfer to industry. This demonstrates the essential interaction between the various actors in this complex field: analysing, understanding, and responding to the scientific issues related to biodeterioration.[/box]

[divider style=”normal” top=”20″ bottom=”20″]

Interactions Materials – Microorganisms
Concretes and Metals more Resistant to Biodeterioration
Christine Lors, Françoise Feugeas, Bernard Tribollet
EDP Sciences, 2019
416 pages
75,00 € (Paperback) – 51,99 € (PDF)

Order the book


Servitization of products: towards a value-creating economy

Businesses are increasingly turning towards selling the use of their products. This shift in business models affects SMEs and major corporations alike. In practice, this has an impact on all aspects of a company’s organization, from its design chain to developing collaborations, to rolling out new offerings for customers. Xavier Boucher and his colleagues, researchers in industrial systems design and optimization at Mines Saint-Étienne, help companies navigate this transformation.  

 

Selling uses instead of products. This shift in the business world towards a service economy has been emerging since the early 2010s. It is based on new offerings in which the product is integrated within a service, with the aim of increasing value creation. Leading manufacturers, such as Michelin, are at the forefront of this movement: with its Michelin Fleet Solutions, the company has transitioned from selling tires to selling kilometers to European commercial trucking fleets. But the trend also increasingly affects SMEs, especially as it is recognized as having many benefits: new opportunities to create value and drive growth, positive environmental impacts, stronger customer loyalty, and greater employee motivation and involvement.

However, such a transition is not easy to implement and requires a long-term strategy. What innovation strategies are necessary? What services should be rolled out, and how? What company structures and expertise must be put in place? The answers depend on market developments, on the economic impact of such a transformation on the company, and on the means available to implement it sustainably, whether alone or with partners. More generally, shifting a company’s focus to a product-service system means changing its business model. With his team, Xavier Boucher, a researcher at Mines Saint-Étienne, supports companies in this shift.

In the Rhône-Alpes region where he carries out his research, the majority of manufacturers are developing a service dimension to varying degrees through logistics or maintenance activities. “But out of the 150,000 companies in the region, only a few hundred have truly shifted their focus to selling services and to product life-cycle management,” explains the researcher. Nevertheless, his team is facing increasing demand from manufacturers.

Tailored support for companies

The transition from a model of selling products to a product-service system involves a number of issues of company organization, reconfiguration of the production chain and customer relationship management, which the researchers analyze using models. After a diagnostic phase, the goal is often to support a company with its transformation plan. The first step is changing how a product is designed. “When we design a product, we have to consider all the transformations that will make it possible to develop services throughout its use and throughout all the other phases of its life cycle,” explains Xavier Boucher. As such, it is often useful to equip a product with sensors so that its performance and life cycle can be traced when in customers’ possession. But production management is also impacted: this business strategy is part of a broader context of agility. The goal? Create value that is continually evolving through flexible and reconfigurable industrial processes in alignment with this purpose.

To this end, Xavier Boucher’s team develops tools ranging from strategic analysis to decision support for bringing a solution to market. “For example, we’ve created a business model that can be used while developing a new service offering to determine the right value creation chain to put in place and the best way for the company to sell the service,” says the researcher. Using a generic simulation platform and a customization approach, the researchers tailor these economic calculators to manufacturers’ specific circumstances.

This is important since each situation is unique and requires a tailored business model. Marketing a mobile phone and deploying a cleaning robot do not rely on the same channels of action: the latter calls for services including customized installation, maintenance and upgrades, management of consumables, and measuring and guaranteeing cleaning quality. Moreover, companies vary in how far they have progressed toward servitization. The researchers may collaborate with a start-up that has adopted a product-service model from the outset, or with companies with established business models looking for a tailored, long-term transformation.

What are the steps toward a product-service system?

Companies may call upon the expertise of the Mines Saint-Étienne researchers at various stages in their progress toward this transition. For example, a manufacturer may be considering in advance how selling a service would impact its economic balance. Would such a change be the right move based on its situation? In this case, the models establish a progressive trajectory for its transformation and break it down into steps.

Another type of collaboration may be developed with a company that is ready to move towards selling services and is debating how to design its initial offering. The researchers use their simulation tools to compare three possible business models: the first is to market the product and add the sale of services throughout its lifecycle; the second is to shift the company’s business to selling services and integrate the product within a service; the third is to sell performance to customers.
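As a purely hypothetical illustration of how these three models differ, the sketch below compares their revenue profiles over a product’s service life. All prices, durations and rates are invented for the example and are not taken from the researchers’ tools:

```python
# Hypothetical comparison of the three business models described above.
# Every number here is invented for illustration only.

YEARS = 8  # assumed service life of the product

def model_product_plus_services(price=50_000, service_per_year=4_000):
    """Model 1: sell the product, then sell services over its lifecycle."""
    return [price + service_per_year] + [service_per_year] * (YEARS - 1)

def model_use_oriented(subscription_per_year=12_000):
    """Model 2: sell the use of the product, integrated within a service."""
    return [subscription_per_year] * YEARS

def model_performance_based(fee_per_unit=0.5, units_per_year=30_000):
    """Model 3: sell performance (e.g. per square meter cleaned)."""
    return [fee_per_unit * units_per_year] * YEARS

for name, revenue in [
    ("product + services", model_product_plus_services()),
    ("use-oriented", model_use_oriented()),
    ("performance-based", model_performance_based()),
]:
    print(f"{name:20s} year 1: {revenue[0]:>8,.0f}   total: {sum(revenue):>9,.0f}")
```

No single profile is best in general; as noted above, each situation calls for a tailored model.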

The researchers helped the SME Innovtec develop an autonomous robot offering for industrial cleaning. “We developed computer-aided design tools: modeling, organizational scenarios, simulations. The goal was to expand the traditional product-oriented tools by adding a service package dimension,” explains Xavier Boucher. The company thus benefitted from different scenarios: identifying technologies to ensure its robots’ performance, determining which services were appropriate for this new product etc. But the projections also address topics beyond the production chain, such as whether it should integrate the new services within the current company or create a new legal structure to deploy them.

A final possibility is a company that has already made the transition to servitization but is seeking to go further, as is the case for Clextral, an SME that produces extrusion machines used by the food processing industry, which was supported through the European DiGiFoF project (Digital Skills for Factories of the Future). Its machines have a long service life and provide an opportunity to create value through maintenance and upgrading operations. The researchers have therefore identified a service development focus based on retrofitting, a sort of technical upgrade: exchanging obsolete parts while maintaining a machine’s configuration, or modifying the configuration of a piece of equipment to allow for a different industrial use than originally intended.

Digitization and risk management in a multi-stakeholder context

The current trend towards servitization has been made possible by the digitization of companies. The Internet of Things has enabled companies to monitor their machines’ performance. In several years’ time, it may even be possible to fully automate the monitoring process, from ordering spare parts to scheduling on-site intervention. Smart product-service systems, which combine digitization and servitization, are a research focus and a central part of the work carried out with elm.leblanc, a company seeking to put in place real-time information processing to respond to customers more quickly.

However, this change in business models affects not only the company itself, but its entire ecosystem. elm.leblanc, for example, is considering sharing costs and risks between various stakeholders, one option being to bring in partner companies to implement this service. But how would the economic value or brand image be distributed between the partners without them taking over the company’s market? Research on managing risk and uncertainty is of key importance for Xavier Boucher’s team. “One of the challenges of our work is the number of potential failures for companies, due to the difficulties of effectively managing the transition. Although servitization has clearly been shown to be a path to the future, it is not associated with immediate, consistent economic success. It’s essential to anticipate challenges.”

Article written (in French) by Anaïs Culot for I’MTech


Data sharing: an important issue for the agricultural sector

Agriculture is among the sectors most affected by digital transition, given the amount of data it possesses. But for the industry to benefit from its full potential, it must be able to find a sound business model for sharing this data. Anne-Sophie Taillandier, the Director of TeraLab — IMT’s big data and AI platform — outlines the digital challenges facing this sector in five answers.

 

How important of an issue is data in the agricultural sector?

Anne-Sophie Taillandier: It’s one of the most data-intensive sectors and has been for a long time. This data comes from the tools used by farmers, agricultural cooperatives and distribution operations, right up to the boundary with the agrifood industry downstream. Data is therefore found at every step. It’s an extremely competitive industry, so the economic stakes for using data are huge.

How can this great quantity of data in the sector be explained?

AST: Agriculture has used sensors for a long time. The earliest IoT (Internet of Things) systems were dedicated to collecting weather data, and were therefore quickly used in farming to make forecasts. Tractors are state-of-the-art vehicles in terms of intelligence; they were among the earliest autonomous vehicles. Farms also use drones to survey land, and precision agriculture relies on satellite imagery to optimize harvests while using as few resources as possible. On the livestock farming side, infrastructures also hold a wealth of data about the quality and health of animals. And all of these examples only concern the production side.

What challenges does agriculture face in relation to data?

AST: The tricky question is determining who has access to which data and in what context. These data sharing issues arise in other sectors too, but some scientific hurdles are specific to agriculture. The data is heterogeneous: it comes from satellites, ground-based sensors, information about markets and so on, in the form of texts, images and measurements. We must find a way to make these data sources communicate with each other. And once we’ve done so, we have to make sure that all the stakeholders in the industry can benefit from it, by accessing a level of data aggregation that does not exceed what the other stakeholders wish to make available.
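To make that last point concrete, here is a schematic sketch in which only regionally aggregated yields are shared, and only when enough farms contribute, so that no individual farm can be singled out. The farm names, figures and disclosure threshold are all hypothetical:

```python
# Hypothetical sketch: share regionally aggregated yields only, and only when
# enough farms contribute, so no individual farm's data is exposed.
from collections import defaultdict
from statistics import mean

MIN_FARMS_PER_REGION = 3  # assumed disclosure threshold

farm_yields = [  # (farm id, region, wheat yield in t/ha) -- invented data
    ("farm-01", "north", 7.2), ("farm-02", "north", 6.8),
    ("farm-03", "north", 7.5), ("farm-04", "south", 5.9),
    ("farm-05", "south", 6.1),
]

by_region = defaultdict(list)
for _, region, y in farm_yields:
    by_region[region].append(y)

shared = {
    region: round(mean(ys), 2)
    for region, ys in by_region.items()
    if len(ys) >= MIN_FARMS_PER_REGION  # withhold under-represented regions
}
print(shared)  # {'north': 7.17} -- 'south' is withheld (only 2 farms)
```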

How can the owners of the data be convinced to share it?

AST: Everyone has to find a business model they find satisfactory. For example, a supermarket already knows its sales volumes; it has its own processing plants and different qualities of products. What it’s interested in is obtaining data from slaughterhouses about product quality. Similarly, livestock farmers are interested in sales forecasts for different qualities of meat in order to optimize their prices. So we have to find these kinds of virtuous business models to motivate the various stakeholders. At the same time, we have to work on spreading the word that data sharing is not just a cost. Farmers must not spend hours every day entering data without understanding its value.

What role can research play in all this? What can a platform like TeraLab contribute?

AST: We help highlight this value, by demonstrating proof of concept for business models and considering potential returns on investment. This makes it possible to overcome the natural hurdles to sharing data in this sector. When we carry out tests, we see where the value lies for each party and which tools build trust between stakeholders — which is important if we want things to go well after the research stage. And with IMT, we provide all the necessary digital expertise in terms of infrastructure and data processing.

Learn more about TeraLab, IMT’s big data and AI platform.


Marine oil pollution detected from space

Whether it is due to oil spills or cleaning out of tanks at sea, radar satellites can detect any oil slick on the ocean’s surface. Over 15 years ago, René Garello and his team from IMT Atlantique worked on the first proof of concept for this idea to monitor oil pollution from space. Today, they are continuing to work on this technology, which they now use in partnership with maritime law enforcement. René Garello explains to us how this technology works, and what is being done to continue improving it.

Most people think of oil pollution as oils spills, but is this the most serious form of marine oil pollution?

René Garello: The accidents which cause oil spills are spectacular, but rare. If we look at the amount of oil dumped into the seas and oceans, the main source of pollution is deliberate dumping, the washing of tanks at sea. Over a year or a decade, this dumping releases 10 to 100 times more oil than oil spills. Although it does not get as much media coverage, the oil released by tank washing reaches our coastlines in exactly the same way.

Are there ways of finding out which boats are washing out their tanks?

RG: By using sensors placed on satellites, we have a large-scale surveillance capability: the sensors allow us to monitor areas of approximately 100 km². The maritime areas close to the coast are our priority, since this is where the tankers stay, as they cannot sail on the high seas. Satellite detection methods have improved a lot over the past decade. Fifteen years ago, detecting tank dumping from space was a fundamental research issue; today, the technology is used by state authorities to fight against this practice.

How does the satellite detection process work?

RG: The process uses imaging radar technology, which has been available in large quantities for research purposes since the 2000s. This is why IMT Atlantique [at the time called Télécom Bretagne] took part in the first fundamental work on large quantities of data around 20 years ago. The satellites emit a radar wave towards the ocean’s surface, which is reflected back towards the satellite. The reflection of the wave differs depending on the roughness of the water’s surface. Roughness is increased by the wind, currents or waves, and decreased by cold water, algae masses, or oil released by tank dumping. When the satellite receives the radar wave, it reconstructs an image of the water showing the roughness of the surface. Natural, accidental or deliberate events which reduce the roughness appear as a black mark on the image. A European research project carried out in partnership with the European Space Agency and Boost Technology, a startup based in our laboratories which has since been acquired by CLS, demonstrated the value of this technique for detecting oil slicks.
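As a schematic illustration of that dark-mark principle (a toy example, not the operational processing chain, which involves speckle filtering, adaptive thresholds and a human photo-interpreter), low-backscatter regions can be flagged in a radar-like image by simple thresholding and grouping of connected pixels:

```python
# Toy illustration of dark-spot detection in a SAR-like backscatter image:
# pixels well below the image mean are grouped into candidate slick regions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.gamma(shape=4.0, scale=1.0, size=(200, 200))  # speckled "sea"
image[80:90, 30:150] *= 0.2  # an elongated low-backscatter streak (the "slick")

threshold = 0.5 * image.mean()                         # pixels darker than half the mean
labels, n_regions = ndimage.label(image < threshold)   # group connected dark pixels

# Keep only large regions: isolated dark specks are usually just noise.
for region_id in range(1, n_regions + 1):
    ys, xs = np.nonzero(labels == region_id)
    if ys.size > 500:
        height, width = np.ptp(ys) + 1, np.ptp(xs) + 1
        print(f"candidate slick: {ys.size} px, bounding box {height} x {width}")
```

A long, thin bounding box is exactly the kind of shape that leads a photo-interpreter to suspect a slick rather than a natural phenomenon.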

If several things can alter the roughness of the ocean’s surface, how do you differentiate an oil slick from an upwelling of cold water or algae?

RG: It is all about investigation. You can tell whether it is an oil slick from the size and shape of the black mark. Usually, the specialist photo-interpreter behind the screen has no doubt about the source of the mark, as an oil slick has a long, regular shape unlike any natural phenomenon. But this is not enough. We have to carry out rigorous checks before raising an alert. We cross-reference our observations with datasets to which we have direct access: the weather, the temperature and state of the sea, the wind, algae cycles and so on. All of this has to be done within 30 minutes of the slick being discovered in order to alert the maritime police quickly enough for them to take action. This operational task is carried out by CLS, using the VIGISAT satellite radar data reception station that it operates in Brest and in which IMT Atlantique is also involved. We also work with Ifremer, IRD and Météo France to make the investigation faster and more efficient for the operators.

Detecting an oil spill is one thing, but how easy is it to then find the boat responsible for the pollution?

RG: Radar technology and data cross-referencing allow us to identify an oil spill accurately. However, the radar does not tell us which boat is responsible for the pollution. The speed at which the information is transmitted sometimes allows the authorities to find the boat directly responsible, but sometimes we find the spill several hours after it was created. To solve this problem, we cross-reference radar data with the Automatic Identification System for vessels, or AIS. Every boat has an AIS transponder which reports GPS information about its position at sea. By identifying where the slick started and when it was created, we can identify which boats were in the area at the time and could have carried out a tank dumping.

It is possible to identify a boat suspected of dumping (in green) amongst several vessels in the area (in red) using satellites.
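To give a schematic idea of this AIS cross-referencing (the positions, times and thresholds below are invented, and a simple great-circle distance stands in for the operational drift analysis), one can keep only the vessels whose AIS reports place them close to the slick’s estimated origin within the relevant time window:

```python
# Toy cross-referencing of AIS reports with a detected slick's estimated
# origin: keep vessels that were within a given radius of that point during
# the time window in which the dumping could have occurred.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

slick_origin = (48.2, -5.6)   # estimated starting point of the slick
window = (10.0, 13.0)         # hours (UTC) during which it could have been created
ais_reports = [               # (vessel, hour UTC, latitude, longitude) -- invented
    ("vessel-A", 11.5, 48.25, -5.55),
    ("vessel-B", 11.0, 49.80, -4.20),   # too far away
    ("vessel-C", 12.2, 48.10, -5.70),
    ("vessel-D", 15.0, 48.22, -5.58),   # too late to be a suspect
]

suspects = sorted(
    name
    for name, hour, lat, lon in ais_reports
    if window[0] <= hour <= window[1]
    and haversine_km(lat, lon, *slick_origin) < 15.0  # assumed search radius
)
print(suspects)  # ['vessel-A', 'vessel-C']
```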


 

This requires being able to date when the slick was created and to measure how it has changed at sea.

RG: We also work in partnership with oceanographers and geophysicists. How does a slick drift? How does its shape change over time? To answer these questions, we again use data about the currents and the wind. From this data, physicists use fluid mechanics models to predict how the sea would impact an oil slick. We are very good at retracing the evolution of the slick in the hour before it is detected. When we combine this with AIS data, we can eliminate vessels whose position at the time was incompatible with the behavior of the oil on the surface. We are currently trying to do this going further back in time.
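Here is a schematic back-tracking step in the same spirit (real hindcasts rely on full ocean-circulation and weather models; the 3% wind factor is only a common rule of thumb for surface oil drift, and every value below is invented): the observed slick position is stepped backwards in time under the combined effect of the current and a small fraction of the wind.

```python
# Toy backward drift estimate: step the observed slick position back in time
# under the combined effect of the surface current and a wind-drift term.
# The 3% wind factor is a common rule of thumb; all values are illustrative.

WIND_DRIFT_FACTOR = 0.03   # assumed fraction of the wind speed felt by the oil
STEP_HOURS = 0.25          # backward time step (15 minutes)

def backtrack(position_km, current_kmh, wind_kmh, hours):
    """Return estimated past positions (x, y in km) of the slick."""
    x, y = position_km
    trajectory = [(x, y)]
    # Constant drift here; in reality current and wind vary in time and space.
    drift_x = current_kmh[0] + WIND_DRIFT_FACTOR * wind_kmh[0]
    drift_y = current_kmh[1] + WIND_DRIFT_FACTOR * wind_kmh[1]
    for _ in range(int(hours / STEP_HOURS)):
        x -= drift_x * STEP_HOURS   # move against the drift to go back in time
        y -= drift_y * STEP_HOURS
        trajectory.append((round(x, 2), round(y, 2)))
    return trajectory

# Slick observed at the local origin, with a weak north-eastward current and
# a 20 km/h westerly wind: where was it one hour earlier?
print(backtrack((0.0, 0.0), current_kmh=(0.7, 0.7), wind_kmh=(20.0, 0.0), hours=1.0))
```

The last position in the list is the estimated origin of the slick an hour earlier, which can then be compared with the AIS tracks as described above.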

Is this research specific to oil spills, or could it be applied to other subjects?

RG: We would like to use everything that we have developed for oil for other types of pollution. At the moment we are interested in sargassum, a type of brown seaweed which is often found on the coast. Its production increases with global warming. The sargassum invades the coastline and releases harmful gases when it decomposes. We want to know whether we can use radar imaging to detect it before it arrives on the beaches. Another issue that we’re working on involves micro-plastics. They cannot be detected by satellites. We are trying to find out whether they modify the characteristics of water in a way that we can identify using secondary phenomena, such as a change in the roughness of the surface. We are also interested in monitoring and predicting the movement of large marine debris…. The possibilities are endless!

Also read on I’MTech