What is net neutrality?

Net neutrality is a legislative shield for preventing digital discrimination. Regularly defended in the media, it ensures equal access to the internet for both citizens and companies. The topic features prominently in a report on the state of the internet published on May 30 by Arcep (the French telecommunications and postal regulatory body). Marc Bourreau, a researcher at Télécom ParisTech, outlines the basics of net neutrality. He explains what it encompasses, the major threats it is facing, and the underlying economic issues.

 

How is net neutrality defined?

Marc Bourreau: Net neutrality refers to the rules requiring internet service providers (ISPs) to treat all data packets in the same manner. It aims to prevent the network from discriminating on the basis of content, service, application or the identity of the traffic’s source. By content, I mean all the packets carried over IP. This includes the web, along with social media, news sites, streaming or gaming platforms, as well as other services such as e-mail.

If it is not respected, in what way may discrimination take place?

MB: An iconic example of discrimination occurred in the United States. An operator was offering a VoIP service similar to Skype. It had decided to block all competing services, including Skype in particular. In concrete terms, this means that customers could not make calls with an application other than the one offered by their operator. To give an equivalent, this would be like a French operator with an on-demand video service, such as Orange, deciding to block Netflix for its customers.

Net neutrality is often discussed in the news. Why is that?

MB: It has been the subject of a long legal and political battle in the United States. In 2005, the American Federal Communications Commission (FCC), the regulator for telecoms, the internet and the media, determined that internet access no longer fell within the field of telecommunications from a legal standpoint. The FCC determined that it belonged instead to information services, which had major consequences. Under this classification, made in the name of freedom of information, the FCC had little power to regulate, which gave the operators greater flexibility. In 2015, the Obama administration decided to place internet access in the telecommunications category once again. The regulatory power of the FCC was therefore restored, and it established three major rules in February 2015.

What are these rules?

MB: An ISP cannot block traffic except for objective traffic management reasons — such as ensuring network security. An ISP cannot degrade a competitor’s service. And an ISP cannot offer paid fast lanes providing better traffic delivery. In other words, they cannot create an internet highway. The new Trump administration has put net neutrality back in the public eye with the president’s announcement that he intends to roll back these rules. The new director of the FCC, Ajit Pai, appointed by Trump in January, has announced that he intends to reclassify internet service as belonging to the information services category.

Is net neutrality debated to the same extent in Europe?

MB: The European regulation for an open internet, adopted in November 2015, has been in force since April 30, 2016. This regulation establishes the same three rules, with a slight difference in the third rule, the one concerning internet highways. The European wording is stricter, prohibiting any sort of discrimination. In other words, even if an operator wanted to set up a free internet highway, it could not do so.

Could the threats to net neutrality in the United States have repercussions in Europe?

MB: Not from a legislative point of view. But there could be some indirect consequences. Let’s take a hypothetical case: if American operators introduced internet highways, charging for access to these fast lanes, the major platforms could subscribe to such services. If Netflix had to pay for better access to networks within the United States, it could also raise its subscription prices to offset this cost. And that could indirectly affect European consumers.

The issue of net neutrality seems to be more widely debated in the United States than in Europe. How can this be explained?

MB: Here in Europe, net neutrality is less of a problem than it is in the United States, and it is often said that this is because there is greater competition in the internet access market in Europe. I have worked on this topic with a colleague, Romain Lestage. We analyzed the impact of competition on telecoms operators’ temptation to charge content producers. The findings revealed that as market competition increases, operators obviously earn less from consumers and are therefore more likely to try to charge content producers. The greater the competition, the stronger the temptation to deviate from net neutrality.

Do certain digital technologies pose a threat to net neutrality in Europe? 

MB: 5G will raise questions about the relevance of the European legislation, especially in terms of net neutrality. It was designed as a technology capable of providing services with very different levels of quality. Some services will be very sensitive to server response time, and others to speed. Between communications for connected devices and ultra-HD streaming, the needs are very different. This calls for creating different qualities of network service, which is, in theory, contradictory to net neutrality. Telecoms operators in Europe are using this as an argument for reviewing the regulation, arguing as well that this would lead to increased investment in the sector.

Does net neutrality block investments?

MB: We studied this question with colleagues from the Innovation and Regulation of Digital Services Chair. Our research showed that without net neutrality regulation, fast lanes — internet highways — would lead to an increase in revenue for operators, which they would reinvest in network traffic management to improve service quality. Content providers who subscribe to fast lanes would benefit by offering users higher-quality content. However, these studies also showed that deregulation would lead to the degradation of free traffic lanes in order to push content providers to subscribe to the paid lanes. Net neutrality legislation is therefore crucial to limiting discrimination against content providers, and consequently, against consumers as well.

 


What is Digital Labor?

Are we all working for digital platforms? This is the question posed by a new field of research: digital labor. Web companies use personal data to create considerable value — but what do we get in return? Antonio Casilli, a researcher at Télécom ParisTech and a specialist in digital labor, will give a conference on this topic on June 10 at Futur en Seine in Paris. In the following interview he outlines the reasons for the unbalanced relationship between platforms and users and explains its consequences.

 

What’s hiding behind the term “digital labor?”

Antonio Casilli: First of all, digital labor is a relatively new field of research. It appeared in the late 2000s and explores new ways of creating value on digital platforms. It focuses on the emergence of a new way of working, which is “taskified” and “datafied.” We must define these words in order to understand them better. “Datafied,” because it involves producing data so that digital platforms can derive value. “Taskified,” because in order to produce data effectively, human activity must be standardized and reduced to its smallest unit. This leads to the fragmentation of complex knowledge as well as to the risks of deskilling and the breakdown of traditional jobs.

 

And who exactly performs this work?

AC: Micro-workers who are recruited via digital platforms. They are unskilled laborers, the click workers behind the API. But, since this is becoming a widespread practice, we could say anyone who works performs digital labor. And even anyone who is a consumer. Seen from this perspective, anyone who has a Facebook, Twitter, Amazon or YouTube account is a click worker. You and I produce content — videos, photos, comments — and the platforms are interested in the metadata hiding behind this content. Facebook isn’t interested in the content of the photos you take, for example. Instead, it is interested in where and when the photo was taken, and what brand of camera was used. And you produce this data in a taskified manner since all it requires is clicking on a digital interface. This is a form of unpaid digital labor since you do not receive any financial compensation for your work. But it is work nonetheless: it is a source of value which is tracked, measured, evaluated and contractually defined by the terms of use of the platforms.
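To make the point about metadata concrete, here is a minimal sketch, in Python, of the kind of information that travels with an ordinary photo independently of what the image shows. It assumes the Pillow imaging library is installed and that a hypothetical file named photo.jpg exists; the exact tags present depend on the device that took the picture.

# Minimal sketch: listing the EXIF metadata embedded in a photo file.
# Assumes the Pillow library ("pip install Pillow") and a hypothetical file photo.jpg.
from PIL import Image, ExifTags

image = Image.open("photo.jpg")
exif = image.getexif()  # mapping of numeric tag ids to values written by the camera or phone

for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)  # translate tag ids to readable names
    # Typical entries: Make / Model (camera brand), DateTime (when), GPSInfo (where)
    print(f"{name}: {value}")

Even if the picture itself is never analyzed, fields like these tell a platform where, when and with which device it was produced.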

 

Is there digital labor which is not done for free?

AC: Yes, that is the other category included in digital labor: micro-paid work. People who are paid to click on interfaces all day long and perform very simple tasks. These crowdworkers are mainly located in India, the Philippines, and other developing countries where average wages are low. They receive a handful of cents for each click.

 

How do platforms benefit from this labor?

AC: It helps them make their algorithms perform better. Amazon, for example, has a micro-work service called Amazon Mechanical Turk, which is almost certainly the best-known micro-work platform in the world. Their algorithms for recommending purchases, for example, need to practice on large, high-quality databases in order to be effective. Crowdworkers are paid to sort, annotate and label images of products proposed by Amazon. They also extract textual information for customers, translate comments to improve additional purchase recommendations in other languages, write product descriptions etc.

I’ve cited Amazon but it is not the only example. All the digital giants have micro-work services. Microsoft uses UHRS, Google has its EWOQ service, etc. IBM’s artificial intelligence, Watson, which has been presented as one of its greatest successes in this field, relies on MightyAI. This company pays micro-workers to train Watson, and its motto is “Training data as a service.” Micro-workers help train visual recognition algorithms by indicating elements in images, such as the sky, clouds, mountains, etc. This is a very widespread practice. We must not forget that behind all artificial intelligence, there are, first and foremost, human beings. And these human beings are, above all, workers whose rights and working conditions must be respected.

Workers are paid a few cents for tasks proposed on Amazon Mechanical Turk, which include such repetitive tasks as “answer a questionnaire about a film script.”

 

This form of digital labor is a little different from the kind I carry out because it involves more technical tasks.   

AC:  No, quite the contrary. They perform simple tasks that do not require expert knowledge. Let’s be clear: work carried out by micro-workers and crowds of anonymous users via platforms is not the ‘noble’ work of IT experts, engineers, and hackers. Rather, this labor puts downward pressure on wages and working conditions for this portion of the workforce. The risk for digital engineers today is not being replaced by robots, but rather having their jobs outsourced to Kenya or Nigeria where they will be done by code micro-workers recruited by new companies like Andela, a start-up backed by Mark Zuckerberg. It must be understood that micro-work does not rely on a complex set of knowledge. Instead it can be described as: write a line, transcribe a word, enter a number, label an image. And above all, keep clicking away.

 

Can I detect the influence of these clicks as a user?

AC: Crowdworkers hired by genuine “click farms” can also be mobilized to watch videos, make comments or “like” something. This is often what happens during big advertising or political campaigns. Companies or parties have a budget and they delegate the digital campaign to a company, which in turn outsources it to a service provider. And the end result is two people in an office somewhere, stuck with the unattainable goal of getting one million users to engage with a tweet. Because this is impossible, they use their budget to pay crowdworkers to generate fake clicks. This is also how fake news spreads to such a great extent, backed by ill-intentioned firms who pay a fake audience. Incidentally, this is the focus of the Profane research project I am leading at Télécom ParisTech with Benjamin Loveluck and other French and Belgian colleagues.

 

But don’t the platforms fight against these kinds of practices?

AC: Not only do they not fight against these practices, but they have incorporated them in their business models. Social media messages with a large number of likes or comments make other users more likely to interact and generate organic traffic, thereby consolidating the platform’s user base. On top of that, platforms also make use of these practices through subcontractor chains. When you carry out a sponsored campaign on Facebook or Twitter, you can define your target as clearly as you like, but you will always end up with clicks generated by micro-workers.

 

But if these crowdworkers are paid to like posts or make comments, doesn’t that raise questions about tasks carried out by traditional users?

AC: That is the crux of the issue. From the platform’s perspective, there is no difference between me and a click-worker paid by the micro-task. Both of our likes have the same financial significance. This is why we use the term digital labor to describe these two different scenarios. And it’s also the reason why Facebook is facing a class-action lawsuit filed with the Court of Justice of the European Union representing 25,000 users. They demand €500 per person for all the data they have produced. Google has also faced a claim for its Recaptcha, from users who sought to be re-classified as employees of the Mountain View firm. Recaptcha was a service which required users to confirm that they were not robots by identifying difficult-to-read words. The data collected was used to improve Google Books’ text recognition algorithms in order to digitize books. The claim was not successful, but it raised public awareness of the notion of digital labor. And most importantly, it was a wake-up call for Google, who quickly abandoned the Recaptcha system.

 

Could traditional users be paid for the data they provide?

AC: Since both micro-workers, who are paid a few cents for every click, and ordinary users perform the same sort of productive activity, this is a legitimate question to ask. On June 1, Microsoft decided to reward Bing users with vouchers in order to convince them to use its search engine instead of Google. It is possible for a platform to have a negative price, meaning that it pays users to use the platform. The question is how to determine at what point this sort of practice is akin to a wage, and whether the wage approach is both the best solution from a political viewpoint and the most socially viable. This is where we get into the classic questions posed by the sociology of labor. They can also relate to Uber drivers, who make a living from the application and whose data is used to train driverless cars. Intermediary bodies and public authorities have an important role to play in this context. There are initiatives, such as one led by the IG Metall union in Germany, which strive to gain recognition for micro-work and establish collective negotiations to assert the rights of clickworkers, and more generally, all platform workers.

 

On a broader level, we could ask what a digital platform really is.

AC: In my opinion, it would be better if we acknowledged the contractual nature of the relationship between a platform and its users. The general terms of use should be renamed “Contracts to use data and tasks provided by humans for commercial purposes,” if the aim is commercial. Because this is what all platforms have in common: extracting value from data and deciding who has the right to use it.

 

 


Even without dark matter Xenon1T is a success

Xenon1T is the largest detector of dark matter in the world. Unveiled in 2015, it searches for this invisible material — which is five times more abundant in the universe than ordinary matter — from the Gran Sasso laboratory in Italy, buried under a mountain. In May 2017, an international collaboration of 130 scientists published the first observations made by the instrument. Dominique Thers, the coordinator of the experiment in France and a researcher at Subatech*, explains the importance of these initial results from Xenon1T. He gives us an overview of this cutting-edge research, which could unlock the secrets of the universe. 

[divider style=”dashed” top=”20″ bottom=”20″]
Learn more about Xenon1T by reading our article about the experiment.
[divider style=”dashed” top=”20″ bottom=”20″]

What did the Xenon1T collaboration work on between the inauguration a year and a half ago and the first results published last month?

Dominique Thers: We spent the better part of a year organizing the validation of the instruments to make sure that they worked properly. The entire collaboration worked on this qualification and calibration phase between fall 2015 and fall 2016. This phase can be quite long and it’s difficult to predict in advance how long it will take. We were very satisfied to finish it in a year — a short time for such a large-scale experiment as Xenon1T.

 

So you had to wait one year before launching the first real experiment?

DT: That’s right. The first observations were launched in early December 2016. We exposed the ton of xenon to potential dark matter particles for exactly 34.2 days. In reality, the actual time was a bit longer, since we have to recalibrate the instruments regularly and no data is recorded during these times. This period of exposure ended on January 18, when three high-magnitude earthquakes were recorded near Gran Sasso. Due to the mechanical disturbances, the instruments had to be serviced over the course of a week, and we decided at that time to proceed to what we call “unblinding.”

 

Does that mean you only discovered what you had been recording with the experiment once it was finished, rather than in real time?

DT: Yes, this is in line with the logic of our community. We perform data analysis independently from data acquisition. This allows us to minimize bias in our analysis, bias that could arise if we stopped an observation period to check whether or not there had been an interaction between the xenon and dark matter. Once we have reached an exposure time we consider satisfactory, we stop the experiment and look at the data. The analysis is prepared in advance, and in general everyone is ready for this moment. The earthquake occurred very near the scheduled end date, so we preferred to stop the measurements then.

 

The results did not reveal interactions between the xenon and dark matter particles, which would have represented a first direct observation of dark matter. Does this mean that the collaboration has been a failure?

DT: Not at all! It’s important to understand that there is fierce competition around the globe to increase the volume of ordinary material exposed to dark matter. With a ton of xenon, Xenon1T is the world’s largest experiment, and potentially, the most likely to observe dark matter. It was out of the question to continue over a long period of time without first confirming that the experiment had reached an unprecedented level of sensitivity. With this first publication, we have proven that Xenon1T is up to the challenge. However, Xenon1T will only reach its maximum sensitivity in sessions lasting 18 to 24 months, so it holds great promise.

 

How does this sensitivity work? Is Xenon1T really more sensitive than other competing experiments?

DT: A very simple but illustrative way to put it is that the more the detector is exposed to dark matter, the more likely it is to record an interaction between it and ordinary matter. Sensitivity therefore follows a law proportional to exposure time. So it’s clear to see why, after obtaining this world record in just one month, we are optimistic about the capacities of Xenon1T over an 18 to 24-month period. But we cannot go much beyond that, since we would encounter an excessive level of background noise, which would hide potential observations of dark matter with Xenon1T.
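As a rough back-of-the-envelope illustration (a simplification, not the collaboration’s actual sensitivity analysis), the expected number of interactions grows with the exposure, that is, the product of the target mass and the observation time:

N_expected ∝ M × T

With the same ton of xenon, going from the 34.2 days of this first run to a run of 18 to 24 months multiplies the exposure, and hence the expected signal, by a factor of roughly 16 to 21, which is why a longer run is so much more promising as long as the background noise remains under control.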

 


The Xenon1T experiment in the Gran Sasso laboratory in Italy. On the left, the xenon reservoir enclosed in protective casing. On the right, rooms housing instruments used for analysis and control in the experiment.

 

So it was more important for the Xenon1T partnership to confirm the superiority of its experiment than to directly carry out an 18 to 24-month period of exposure which may have been more conclusive?  

DT: This enabled us to confirm the quality of Xenon1T, both for the scientific community and for the governments which support us and need to justify the investments they make. This was a way of responding to the financial and human resources provided by our partners, collaborators and ourselves. And we do not necessarily control observation time at our level. It also depends on results from competing experiments. The idea is not to keep our eyes closed for 18 months without concerning ourselves with what is happening elsewhere. If another experiment claims to have found traces of dark matter in an energy range where we have visibility with Xenon1T, we can stop the acquisitions in order to confirm or disprove these results. This first observation enables us to position ourselves as the best-placed authority to settle any scientific disagreements.

 

Your relationship with other experiments seems a bit unusual: you are all in competition with one another but you also need each other.

DT: It is very important to have several sites on Earth which can report a direct observation of dark matter. Naturally, we hope that Xenon1T will be the first to do so. But even if it is, we’ll still need other sites to demonstrate that the dark matter observed in Italy is the same as that observed elsewhere. But this does not mean that we cannot all improve the sensitivity of our individual experiments in order to maintain or recover the leading role in this research.

 

So Xenon1T is already looking to the future?

DT: We are already preparing the next experiment and determining what Xenon1T will be like in 2019 or 2020. The idea is to gain an order of magnitude in the mass of ordinary material exposed to potential dark matter particles with XENONnT. We are thinking of developing an instrument which will contain ten tons of xenon. In this respect we are competing with the American LZ experiment and the Chinese PandaX collaboration. They also hope to work with several tons of xenon in a few years’ time. By then, we may have already observed dark matter…

Subatech is a joint research unit between IMT Atlantique, CNRS and Université de Nantes.

 


Viruses and malware: are we protecting ourselves adequately?

Cybersecurity incidents are increasingly gaining public attention. They are frequently mentioned in the media and discussed by specialists, such as Guillaume Poupard, Director General of the French Information Security Agency (ANSSI). This attests to the fact that these digital incidents have an increasingly significant impact on our daily lives. Questions therefore arise about how we are protecting our digital activities, and whether this protection is adequate. The publicity surrounding security incidents may, at first glance, lead us to believe that we are not doing enough.

 

A look at the current situation

Let us first take a look at the progression of software vulnerabilities since 2001, as illustrated by the National Vulnerability Database (NVD), the reference site of the American National Institute of Standards and Technology (NIST).

 


Distribution of published vulnerabilities by severity level over time. CC BY

 

Analyzing this distribution of published vulnerabilities by severity, we observe that since 2005 there has not been a significant increase in the number of vulnerabilities published each year. The distribution of risk levels (high, medium, low) has also remained relatively steady. Nevertheless, it is possible that the situation may be different in 2017, since, just halfway through the year, we have already reached publication levels similar to those of 2012.
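As a rough illustration of the kind of tally behind such a chart, here is a minimal sketch in Python. The input format, a simple list of (year, severity) pairs, is a hypothetical simplification; real NVD entries would first have to be parsed into this shape.

# Minimal sketch: counting published vulnerabilities per year and per severity level.
# The (year, severity) record format below is a hypothetical, simplified export.
from collections import Counter

records = [
    (2012, "HIGH"), (2012, "MEDIUM"), (2013, "LOW"), (2013, "HIGH"),
    # ... one (year, severity) pair per published vulnerability entry
]

counts = Counter(records)  # maps (year, severity) -> number of entries

for year in sorted({y for y, _ in records}):
    per_severity = {sev: counts[(year, sev)] for sev in ("HIGH", "MEDIUM", "LOW")}
    print(year, per_severity)

Plotting these yearly totals, split by severity, reproduces the shape of the chart discussed above.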

It should be noted, however, that the growing number of vulnerabilities published compared with the period before 2005 is also partially due to greater exposure of systems and software to compromise attempts and external audits. For example, Google has implemented Google Project Zero, which specifically searches for vulnerabilities in programs and makes them public. It is therefore natural that more discoveries are made.

There is also an increasing number of objects, the much-discussed Internet of Things, which use embedded software and therefore present vulnerabilities. The recent example of the “Mirai” botnet demonstrates the vulnerability of these environments, which account for a growing portion of our digital activities. The rise in the number of vulnerabilities published therefore simply reflects the increase in our digital activities.

 

What about the attacks?

The publicity surrounding attacks is not directly connected to the number of vulnerabilities, even though the two are related. The notion of vulnerability does not, in itself, express the impact a vulnerability may have on our lives. The WannaCry malware, which affected the British health system by disabling certain hospitals and emergency services, can be viewed as a significant step in the harmfulness of malicious code. This attack led to deaths or delayed care on an unprecedented scale.

It is always easy to say, in hindsight, that an event was foreseeable. And yet, it must be acknowledged that the use of “old” tools (Windows XP, SMBv1) in these vital systems is problematic. In the digital world, fifteen years represents three or even four generations of operating systems, unlike in the physical world, where we can have equipment dating from 20 or 30 years ago, if not even longer. Who could imagine a car being obsolete (to the point of no longer being usable) after five years? This major difference in evaluating time, which is deeply engrained in our current way of life, is largely responsible for the success and impact of the attacks we are experiencing today.

It should also be noted that in terms of both scale and impact, digital attacks are not new. In the past, worms such as CodeRed in 2001 and Slammer in 2003 also infected a large number of machines, making the internet unusable for some time. The only difference was that at the time of these attacks, critical infrastructures were less dependent on a permanent internet connection, which limited the impact to the digital world alone.

The most critical attacks, however, are not those from which the attackers benefit the most. In the Canadian Bitcoin Hijack of 2014, for example, attackers hijacked this virtual currency for direct financial gain without disturbing the bitcoin network, while other, similar attacks on routing in 2008 made the network largely unavailable without any financial gain.

So, where does all this leave us in terms of the adequacy of our digital protection?

There is no question that outstanding progress has been made in protecting information systems over the past several years. The detection of an increasing number of vulnerabilities, combined with progressively shorter periods between updates, is continually strengthening the reliability of digital services. The automation of the update process for individuals, which concerns operating systems as well as browsers, applications, telephones and tablets, has helped limit exposure to vulnerabilities.

At the same time, in the business world we have witnessed a shift towards a real understanding of the risks involved in digital uses. This, along with the introduction of technical tools and resources for training and certification, could help increase all users’ general awareness of both the risks and opportunities presented by digital technology.

 

How can we continue to reduce the risks?

After working in this field for twenty-five years, and though we must remain humble in the face of the risks we face and will continue to face, I remain optimistic about the possibilities of strengthening our confidence in the digital world. Nevertheless, it appears necessary to support users in their digital activities in order to help them understand how these services work and the associated risks. ANSSI’s publication of measures for good digital hygiene for personal and business use is an important example of this need for information and training, which will help all individuals make conscious, appropriate choices when it comes to digital use.

Another aspect, which is more oriented towards developers and service providers, is increasing the modularity of our systems. This will allow us to control access to our digital systems, make them simple to configure, and easier to update. In this way, we will continue to reduce our exposure to the risk of a computer-related attack while using our digital tools to an ever-greater extent.

Hervé Debar, Head of the Telecommunications Networks and Services department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.

 


What is Big Data?

On the occasion of the Big Data Congress in Paris, which was held on 6 and 7 March at the Palais des Congrès, Anne-Sophie Taillandier, director of TeraLab, examines this digital concept which plays a leading role in research and industry.

 

Big Data is a key element in the history of data storage. It has driven an industrial revolution and is a concept inherent to 21st century research. The term first appeared in 1997, and initially described the problem of an amount of data that was too big to be processed by computer systems. These systems have greatly progressed since, and have transformed the problem into an opportunity. We talked with Anne-Sophie Taillandier, director of the Big Data platform TeraLab about what Big Data means today.

 

What is the definition of Big Data?

Anne-Sophie Taillandier: Big Data… it’s a big question. Our society, companies, and institutions have produced an enormous amount of data over the last few years. This growth has been driven by the multiplication of sources (sensors, the web, after-sales service, etc.). What is more, the capacities of computers have increased tenfold. We are now able to process these large volumes of data.

These data are very varied: they may be text, measurements, images, videos, or sound. They are multimodal, that is, able to be combined in several ways. They contain rich information and are worth using to optimize existing products and/or services, or to invent new approaches. In any case, it is not the quantity of the data that is important. However, Big Data enables us to cross-reference this information with open data, and can therefore provide us with relevant information. Finally, I prefer to speak of data innovation rather than Big Data – it is more appropriate.

 

Who are the main actors and beneficiaries of Big Data?

AST: Everyone is an actor, and everyone can benefit from Big Data. All industry sectors (mobility, transport, energy, geospatial data, insurance, etc.) but also the health sector. Citizens are especially concerned by the health sector. Research is a key factor in Big Data and an essential partner to industry. The capacities of machines now allow us to establish new algorithms for processing big quantities of data. The algorithms are progressing quickly, and we are constantly pushing the boundaries.

Data security and governance are also very important. Connected objects, for example, accumulate user data. This raises the question of securing this information. Where do the data go? But also, what am I allowed to use them for? Depending on the case, anonymization might be appropriate. These are the types of questions facing the Big Data stakeholders.

 

How can society and companies use Big Data?

AST: Innovation in data helps us to develop new products and services, and to optimize already existing ones. Take the example of the automobile. Vehicles generate data allowing us to optimize maintenance. The data accumulated from several vehicles can also be useful in manufacturing the next vehicle; they can assist in the design process. These same data may also enable us to offer new services to passengers, professionals, suppliers, etc. Another important field is health. E-health promotes better healthcare follow-up and may also improve practices, making them better adapted to the patient.

 

What technology is used to process Big Data?

AST: The technology allowing us to process data is highly varied. There are algorithmic approaches, such as Machine Learning and Deep Learning, and artificial intelligence more broadly. There are also open-source frameworks and paid solutions. It is a very broad field. With Big Data, companies can open up their data in an aggregated form to develop new services. Finally, technology is advancing very quickly, and is constantly influencing companies’ strategic decisions.


What is a Quantum Computer?

The use of quantum logic for computing promises a radical change in the way we process information. The calculating power of a quantum computer could surpass that of today’s biggest supercomputers within ten years. Romain Alléaume, a researcher in quantum information at Télécom ParisTech, helps us to understand how they work.

 

Is the nature of the information in a quantum computer different?

Romain Alléaume: In classical computer science, information is encoded in bits: 1 or 0. It is not quite the same in quantum computing, where information is encoded on what we refer to as “quantum bits”, or qubits. And there is a big difference between the two. A standard bit exists in one of two states, either 0 or 1. A qubit can exist in any superposition of these two states, and can therefore have many more than two values.

There is a stark difference between using several bits or qubits. While n standard bits can only take a value among 2^n possibilities, n qubits can take on any combination of these 2^n states. For example, 5 bits take a value among 32 possibilities: 00000, 00001… right up to 11111. 5 qubits can take on any linear superposition of the previous 32 states, which is more than one billion states. This phenomenal expansion in the size of the space of accessible states is what explains the quantum computer’s greater computing capacity.
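To make the size of this state space concrete, here is a minimal sketch in Python (using numpy, with arbitrary example amplitudes): describing 5 qubits classically takes not 5 values but 2^5 = 32 complex amplitudes, one per basis state, normalized so that the squared magnitudes sum to 1.

# Minimal sketch: the classical description of an n-qubit state is a vector
# of 2**n complex amplitudes, one per basis state (00000 ... 11111 for n = 5).
import numpy as np

n = 5
raw = np.random.randn(2**n) + 1j * np.random.randn(2**n)  # arbitrary example amplitudes
state = raw / np.linalg.norm(raw)  # normalize: squared magnitudes sum to 1

print(len(state))                 # 32 amplitudes for 5 qubits
print(np.sum(np.abs(state)**2))   # ~1.0, the total probability over all outcomes
print(abs(state[0])**2)           # probability of measuring the basis state 00000

Each additional qubit doubles the number of amplitudes needed, which is the exponential growth described above.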

 

Concretely, what does a qubit look like?

RA: Concretely, we can encode a qubit on any quantum system with two states. The most favourable experimental systems are the ones we know how to manipulate precisely. This is for instance the case with the energy levels of electrons in an atom. In quantum mechanics, the energy of an electron “trapped” around an atomic nucleus may take different values, and these energy levels take on specific “quantized” values, hence the name, quantum mechanics. We can call the first two energy levels of the atom 0 and 1: 0 corresponding to the lowest level of energy and 1 to a higher level of energy, known as the “excited state”. We can then encode a quantum bit by putting the atom in the 0 or in the 1 state, but also in any superposition (linear combination) of the 0 state and the 1 state.
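In the standard notation of quantum mechanics (a general convention, not specific to any particular experiment), such a superposition of the two levels is written:

|ψ⟩ = α|0⟩ + β|1⟩,  with |α|² + |β|² = 1

where the complex coefficients α and β determine the probabilities |α|² and |β|² of finding the atom in the lower or the excited level when it is measured.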

To create good qubits, we have to find systems such that the quantum information remains stable over time. In practice, creating very good qubits is an experimental feat: atoms tend to interact with their surroundings and lose their information. We call this phenomenon decoherence. To avoid decoherence, we have to carefully protect the qubits, for example by putting them in very low temperature conditions.

 

What type of problems does the quantum computer solve efficiently?

RA: It exponentially increases the speed with which we can solve “promise problems”, that is, problems with a defined structure, where we know the shape of the solutions we are looking for. However, for an unstructured search, such as a reverse directory lookup, the quantum computer has only been proven to speed up the process by a square-root factor compared with a regular computer. There is an increase, but not a spectacular one.
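The square-root speedup mentioned here for directory-style search is the one given by Grover’s algorithm: finding the right entry among N unsorted items requires on the order of N lookups classically, but only on the order of √N quantum steps. For a directory of one million entries, that means roughly a thousand quantum iterations instead of up to a million classical lookups: a real gain, but far smaller than the exponential speedups available for structured “promise problems”.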

It is important to understand that the quantum computer is not magic and cannot accelerate any computational problem. In particular, one should not expect quantum computers to replace classical computers. Their main scope will probably be simulating quantum systems that cannot be simulated with standard computers. This will involve simulating chemical reactions, superconductivity, etc. While quantum simulators are likely to be the first concrete application of quantum computing, we also know of quantum algorithms that can be applied to solve complex optimization problems, or to accelerate computations in machine learning. We can expect to see quantum processors used as co-processors, to accelerate specific computational tasks.

 

What can be the impact of the advent of large quantum computers?

RA: The construction of large quantum computers would also enable us to break most of the cryptography that is used today on the internet. The advent of large quantum computers is unlikely to occur in the next 10 years. Yet, as encrypted data are stored for years, even tens of years, we need to start thinking now about new cryptographic techniques that will be resistant to the quantum computer.

Read the blog post: Confidential communications and quantum physics

 

When will the quantum computer compete with classical supercomputers?

RA: Even more than in classical computing, quantum computing requires error-correcting codes to improve the quality of the information coded on qubits, and to be scaled up. We can currently build a quantum computer with just over a dozen qubits, and we are beginning to develop small quantum computers which work with error-correcting codes. We estimate that a quantum computer must have 50 qubits in order to outperform a supercomputer, and solve problems which are currently beyond reach. In terms of time, we are not far away. Probably five years for this important step, often referred to as “quantum supremacy”.
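A rough way to see why around 50 qubits is the commonly cited threshold (a back-of-the-envelope estimate, not a precise criterion): simulating n qubits on a classical machine means storing 2^n complex amplitudes. For n = 50 that is about 10^15 amplitudes, or roughly 18 petabytes at 16 bytes per complex number, already at the edge of what the largest supercomputers can hold in memory, and every additional qubit doubles that requirement.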


Our exposure to electromagnetic waves: beware of popular belief

Joe Wiart, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay

This article is published in partnership with “La Tête au carré”, the daily radio show on France Inter dedicated to the popularization of science, presented and produced by Mathieu Vidard. The author of this text, Joe Wiart, discussed his research on the show broadcast on April 28, 2017 accompanied by Aline Richard, Science and Technology Editor for The Conversation France.

 

For over ten years, controlling exposure to electromagnetic waves and to radio frequencies in particular has fueled many debates, which have often been quite heated. An analysis of reports and scientific publications devoted to this topic shows that researchers are mainly studying the possible impact of mobile phones on our health. At the same time, according to what has been published in the media, the public is mainly concerned about base stations. Nevertheless, mobile phones and wireless communication systems in general are widely used and have dramatically changed how people around the world communicate and work.

Globally, the number of mobile phone users now exceeds 5 billion. And according to the findings of an Insee study, the percentage of individuals aged 18-25 in France who own a mobile phone is 100%! It must be noted that the use of this method of communication is far from being limited to simple phone calls — by 2020 global mobile data traffic is expected to represent four times the overall internet traffic of 2005.  In France, according to the French regulatory authority for electronic and postal communications (ARCEP), over 7% of the population connected to the internet exclusively via smartphones in 2016. And the skyrocketing use of connected devices will undoubtedly accentuate this trend.

 


Smartphone Zombies. Ccmsharma2/Wikimedia

 

The differences in perceptions of the risks associated with mobile phones and base stations can be explained in part by the fact that the two are not seen as being related. Moreover, while exposure to electromagnetic waves is considered to be “voluntary” for mobile phones, individuals are often said to be “subjected” to waves emitted by base stations. This helps explain why, despite the widespread use of mobiles and connected devices, the deployment of base stations remains a hotly debated issue, often focusing on health impacts.

In practice, national standards for limiting exposure to electromagnetic waves are based on the recommendations of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and on scientific expertise. A number of studies have been carried out on the potential effects of electromagnetic waves on our health. Of course, research is still being conducted in order to keep pace with the constant advancements in wireless technology and its many uses. This research is even more important since radio frequencies from mobile telephones have now been classified as “possibly carcinogenic for humans” (group 2B) following a review conducted by the International Agency for Research on Cancer.

Given the great and ever-growing number of young people who use smartphones and other mobile devices, this heightened vigilance is essential. In France, the National Environmental and Occupational Health Research Programme (PNREST) of the National Agency for Food, Environmental and Occupational Health Safety (Anses) is responsible for monitoring the situation. And to address public concerns about base stations (of which there are 50,000 located throughout France), many municipalities have discussed charters to regulate where they may be located. Cities such as Paris, striving to set an example for France and for major European cities, signed such a charter as early as 2003, officially limiting exposure from base stations through an agreement with France’s three major operators.


Hillside in Miramont, Hautes-Pyrénées, France. Florent Pécassou/Wikimedia

This charter was updated in 2012 and was further discussed at the Paris Council in March, in keeping with the Abeille law on limiting exposure to electromagnetic fields, which was proposed to the National Assembly in 2013 and passed in February 2015. Yet it is important to note that this initiative, like so many others, concerns only base stations, despite the fact that exposure to electromagnetic waves and radio frequencies comes from many other sources. By focusing exclusively on these base stations, the problem is only partially resolved. Exposure from mobile phones for users or their neighbors must also be taken into consideration, along with other sources.

In practice, the portion of exposure to electromagnetic waves which is linked to base stations is far from representing the majority of overall exposure. As many studies have demonstrated, exposure from mobile phones is much more significant.  Fortunately, the deployment of 4G, followed by 5G, will not only improve speed but will also contribute to significantly reducing the power radiated by mobile phones. Small cell network architecture with small antennas supplementing larger ones will also help limit radiated power.  It is important to study solutions resulting in lower exposure to radio frequencies at different levels, from radio devices to network architecture or management and provision of services. This is precisely what the partners in the LEXNET European project set about doing in 2012, with the goal of cutting public exposure to electromagnetic fields and radio frequency in half.

In the near future, fifth-generation networks will use several frequency bands and various architectures in a dynamic fashion, enabling them to handle both increased speed and the proliferation of connected devices. There will be no choice but to effectively consider the network-terminal relationship as a duo, rather than treating the two as separate elements. This new paradigm has become a key priority for researchers, industry players and public authorities alike. And from this perspective, the latest discussions about the location of base stations and renewing the Paris charter prove to be emblematic.

 

Joe Wiart, Chairholder in research on Modeling, Characterization and Control of Exposition to Electromagnetic Waves at Institut Mines Telecom, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay

This article was originally published in French in The Conversation France.


IMT awards the title of Doctor Honoris Causa to Jay Humphrey, Professor at Yale University

This prestigious honor was awarded on 29 June at Mines Saint-Etienne by Philippe Jamet, President of IMT, in the presence of many important scientific, academic and institutional figures. IMT’s aim was to honor one of the inventors and pioneers of a new field of science – mechanobiology – which studies the effects of mechanical stress (stretches, compressions, shearing, etc.) on cells and living tissue.

 

A world specialist in cardiovascular biomechanics, Jay D. Humphrey has worked tirelessly throughout his career to galvanize the biomechanical engineering community and draw attention to the benefits that this science can offer to improve medicine.

Jay D. Humphrey works closely with the Engineering & Health Center (CIS) of Mines Saint-Etienne. In 2014, he invited Stéphane Avril, Director of the CIS, to Yale University to work on biomechanics applied to soft tissues and the prevention of ruptured aneurysms, which notably led to the award of two grants from the prestigious European Research Council:

 Biomechanics serving Healthcare

For Christian Roux, Executive Vice President for Research and Innovation at IMT, “With this award the institute wanted to recognize this important scientist, known throughout the world for the quality of his work, his commitment to the scientific community and his strong human and ethical values. Professor Humphrey also leads an exemplary partnership with one of IMT’s most cutting-edge laboratories, offering very significant development opportunities.”

[author title=”biography of Jay D. Humphrey” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/07/PortraitJHumphrey.jpg”]

Jay Humphrey is Professor and Chair of the Biomedical Engineering Department of the prestigious Yale University in the United States. He holds a PhD in mechanical engineering from the Georgia Institute of Technology (Atlanta, United States) and completed a post-doctorate in cardiovascular medicine at Johns Hopkins University (Baltimore, United States).

He chaired the scientific committee of the World Congress of Biomechanics in 2014, held in Boston and attended by more than 4,000 people.

He co-founded the journal Biomechanics and Modeling in Mechanobiology in 2002, which today plays a leading role in the field of biomechanics.

Jay D. Humphrey has written a large number of papers (245+) which have been universally praised and cited countless times (25,000+). His works are considered essential references and engineering students throughout the world rely on his introductions to biomechanics and works on cardiovascular biomechanics.

He is heavily involved in the training and support for students – from Master’s degrees to PhDs – and more than a hundred students previously under his supervision now hold posts in top American universities and major international businesses, such as Medtronic.

Jay D. Humphrey has already received a number of prestigious awards. He plays an influential role in numerous learned societies, and in the assessment committees of the National Institute of Health (NIH) in the United States.[/author]

 


Energy Transitions: The challenge is a global one, but the solutions are also local

For Bernard Bourges, there is no doubt: there are multiple energy transitions. A researcher at IMT Atlantique studying changes in the energy sector, he takes a multi-faceted view of the transformations happening in this field. Each situation and each territory has its own specificities which, rather than pointing to a single overall solution, produce a multitude of responses to today’s great energy challenges. This is one of the central points in the “Energy Transitions: mechanisms and levers” MOOC which he is running from 15 May until 17 July 2017. On this occasion, he gives us his view of the current metamorphosis in the field of energy.

 

You prefer to talk about energy transitions in the plural, rather than the energy transition. Why is the plural form justified?

Bernard Bourges: There is a lot of talk about global challenges, the question of climate change, and energy resources to face the growing population and economic development. This is the general framework, and it is absolutely undeniable. But, on the scale of a country, a territory, a household, or a company, these big challenges occur in extremely different ways. The available energy resources, the level of development, public policy, economic stakes, or the dynamics of those involved, are parameters which change between two given situations, and which have an impact on the solutions put in place.

 

Is energy transition different from one country to another?

BB: It can be. The need to switch energy models in order to reduce global warming is absolutely imperative. In vast regions, such as the Pacific Islands, global warming is a matter of life or death for populations. On the contrary, in some cases, rising temperatures may even be seen as an opportunity for economic development: countries like Russia or Canada will gain new cultivable land. There are also contradictions in terms of resources. The development of renewable energies means that countries with a climate suited to solar or wind power production will gain greater energy independence. At the same time, technical advances and melting ice caps are making accessible some fossil fuel deposits which had previously been too costly to exploit. This implies a multitude of opportunities, some of which are dangerously tempting, and contradictory interests, often within the same country or the same company.

 

You highlight the importance of economic stakes. What about political decisions?

BB: Of course, there is an important political dimension, as there is a wide range of possibilities. To make the system more complex, energy is an element which overlaps with other environmental challenges, as well as social ones like employment. This results in a specific alchemy. Contradictory choices will arise, according to the importance politicians place on these great problems of society. In France as in other countries, there is a law on energy transition. But this does not mean that this apparent, inferred unanimity is real. It is important to realize that behind the scenes, there may be strong antagonism. This conditions political, social and even technological choices.

 

“Behind the question of energy, there are physical laws, and we cannot just do what we want with them.”

 

On the question of technology, there is a kind of optimism which consists in believing that science and innovation will solve the problem. Is it reasonable to believe this?

BB: This feeling is held by part of the population. We need to be careful about this point, as it is also marketing speak used to sell solutions. It is very clear that technology will contribute greatly to the solutions put in place, but for now there is no miracle cure. Technology will probably never be capable of satisfying all needs for growth, at a reasonable cost, and without a major impact on the climate or the environment. I often see inventors pop up promising perpetual motion or efficiencies of 100%, or even more. It’s absurd! Behind the question of energy, there are physical laws, and we cannot just do what we want with them.

 

What about the current technologies for harvesting renewable resources? They seem satisfactory on a large scale.

BB: The enthusiasm needs to be tempered. For example, there is currently a lot of enthusiasm surrounding solar power, to the point where some people imagine all households on the planet becoming energy independent thanks to solar panels. However, this utopia has a technological limit. The sun is an intermittent resource: it is only available for half the day, and only in fine weather. This energy must therefore be stored in batteries. But batteries use rare resources such as lithium, which are not limitless. Extracting these resources has environmental impacts. What could be a solution for several tens of millions of Europeans can therefore become a problem for billions of other people. This is one of the illustrations of the multifaceted nature of energy transitions, which we highlight in our MOOC.

 

Does this mean we should be pessimistic about the potential solutions provided by natural resources?

BB: The ADEME carried out a study on a 100% renewable electricity mix by 2050. One of its most symbolic conclusions was that this is possible, but that we will have to manage demand. This implies being sure that new types of usage will not appear. But this is difficult, as innovations will result in a drop in energy prices. If costs decrease, new types of use are made possible, which will increase demand. The realistic solution is to use a combination of solutions that draw on renewable resources (locally or on a large scale), intelligent management of energy networks, and innovative technologies. Managing demand is not only based on technological solutions, but also on changes in organization and behavior. Each combination will therefore be specific to a given territory or situation.

 

Doesn’t this type of solution make energy management more complex for consumers, whether individuals or companies? 

BB: This question is typical of the mistake people often make, that of limiting the question of energy to electricity. Energy is certainly a question of electricity usage, but also thermal needs, heating, and mobility. The goal for mobility will be to switch to partially electric transport modes, but we are not there yet, as this requires a colossal amount of investment. For thermal needs, the goal is to reduce demand by increasing the energy efficiency of buildings. Electricity is really only a third of the problem. Local solutions must also provide answers to other uses of energy, with completely different types of action. Having said this, electricity does take center-stage, as there are great changes underway. These changes are not only technological but also institutional (liberalization for example), difficult to understand, and sometimes even misleading for consumers.

 

What do you mean by that?

BB: For the moment, we cannot differentiate between the electrons in the network. No provider can tell you at a given moment whether you are receiving electricity produced by a wind farm, or generated by a nuclear power plant. We therefore must be wary of energy providers who tell us the opposite. This is another physical constraint. There are also legal and economic constraints. But we have understood that in this time of great change, there are many actors who are trying to win, or at least trying not to lose.

This is also why we are running this MOOC. The consumer needs to be helped in understanding the energy chain: where does energy come from? What are the basic physical laws involved? We have to try and decipher these points. But, in order to understand energy transitions, we also have to identify the constraints linked specifically to human societies and organizations. This is another point we present in the MOOC, and we make use of the diverse range of skills of people at IMT’s schools and external partners.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 


The MOOC “Energy Transitions: mechanisms and levers” in brief

The MOOC “Energy Transitions: mechanisms and levers” at IMT is available (in French) on the “Fun” platform. It will take place from 15 May to 17 July 2017. It is aimed at both consumers wanting to gain a better understanding of energy, and professionals who want to identify specific levers for their companies.


 


4D Imaging for Evaluating Facial Paralysis Treatment

Mohamed Daoudi is a researcher at IMT Lille Douai, and is currently working on an advanced system of 4-dimensional imaging to measure the after-effects of peripheral facial paralysis. This tool could prove especially useful to practitioners in measuring the severity of the damage and in their assessment of the efficacy of treatment.

 

“Paralysis began with my tongue, followed by my mouth, and eventually the whole side of my face.” There are many accounts of facial paralysis on forums. Whatever the origin may be, if the facial muscles no longer respond, it is because the facial nerve stimulating them has been affected. Depending on the part of the nerve affected, the paralysis may be peripheral, in which case it affects one side of the face (hemifacial paralysis), or central, affecting the lower part of the face.

In the case of peripheral paralysis, so many internet users ask about the origin of the problem precisely because in 80% of cases the paralysis occurs without apparent cause. However, there is total recovery in 85 to 90% of cases. The other common origins of facial paralysis are facial trauma and vascular or infectious causes.

During follow-up treatment, doctors try to re-establish facial symmetry and balance, both at rest and during facial expressions. This requires treating the healthy side of the face as well as the affected side. The healthy side often presents hyperactivity, which makes it look as if the person is grimacing and creates paradoxical movements. Many medical, surgical, and physiotherapy procedures are used in the process. One of the treatments used is the injection of botulinum toxin, which partially blocks certain muscles, restoring balance to facial movements.

Nonetheless, there is no analysis tool that can quantify the facial damage and give an objective assessment of the effects of treatment before and after injection. This is where IMT Lille Douai researcher Mohamed Daoudi[1] comes in. His specialty is the statistical analysis of 3D shapes, faces in particular. He studies facial dynamics and has developed an algorithm for analyzing facial expressions that makes it possible to quantify the deformations of a moving face.
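To give an idea of what such a quantification can look like, here is a minimal sketch in Python. It is not the researchers’ actual algorithm: it assumes the successive 3D scans have already been put into point-to-point correspondence, and simply measures how far each facial point moves from one frame to the next.

```python
# Minimal sketch (illustrative only, not the published method): given a temporal
# sequence of 3D face scans whose points are already in correspondence,
# quantify how much each point of the face deforms over time.
import numpy as np

def deformation_magnitudes(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (n_frames, n_points, 3) holding the 3D coordinates
    of the same facial points across the sequence.
    Returns an (n_frames - 1, n_points) array of per-point displacement
    magnitudes between consecutive frames."""
    displacements = np.diff(frames, axis=0)          # frame-to-frame motion vectors
    return np.linalg.norm(displacements, axis=-1)    # Euclidean magnitude per point

# Example: a synthetic 2-second capture at 15 frames per second, ~20,000 points
frames = np.random.rand(30, 20000, 3)
magnitudes = deformation_magnitudes(frames)
print(magnitudes.mean(axis=1))  # average facial deformation per frame
```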

 

Smile, you’re being scanned

Two years ago, a partnership was created between Mohamed Daoudi, Pierre Guerreschi, Yasmine Bennis and Véronique Martinot from the reconstructive and aesthetic plastic surgery department at the University Hospital of Lille. Together they are creating a tool which makes a 3D scan of a moving face. An experimental protocol was soon set up.[2]

The patients are asked to attend a 3D scan appointment before and after the injection of botulinum toxin. “Firstly, we ask them to make stereotypical facial expressions: a smile, or raising their eyebrows. We then ask them to pronounce a sentence which triggers a maximum number of facial muscles and also tests their spontaneous movement”, explains Mohamed Daoudi.

The 4D results pre- and post-injection are then compared. The impact of the peripheral facial paralysis can be evaluated, but also quantified and compared. In this sense, the act of smiling is far from trivial. “When we smile, our muscles contract and the face undergoes many distortions. It is the facial expression which gives us the clearest image of the asymmetry caused by the paralysis”, the researcher specifies.

The ultimate goal is to re-establish a patient’s facial symmetry when they smile. Of course, it is not a matter of perfect symmetry, as no face is truly symmetrical. We are talking about socially accepted symmetry: the zones stimulated in a facial expression must roughly follow the same muscular animation as those on the other side of the face.

Scans of a smiling face: a) pre-operation, b) post-operation, c) control face.

 

Time: an essential fourth dimension in analysis

This technology is particularly well-suited to studying facial paralysis, as it takes time into account, and therefore the face’s dynamics. Dynamic analysis provides additional information. “When we look at a photo, it is sometimes impossible to detect facial paralysis. The face moves in three dimensions, and the paralysis is revealed with movement”, explains Mohamed Daoudi.

The researcher uses a non-invasive technology to model these dynamics: a structured-light scanner. How does it work? A grid of light stripes is projected onto the face, which yields a 3D model of the face represented by a cloud of around 20,000 points. Next, a sequence of images of the face making facial expressions is recorded at 15 images per second. The frames are then processed by an algorithm which calculates the deformation observed at each point. The two sides of the face are then superimposed for comparison.
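As an illustration of this last step, the sketch below (again hypothetical, not the clinical tool itself) mirrors a single scanned frame across an assumed sagittal mid-plane at x = 0 and measures how far each point lies from its reflected counterpart, giving a rough asymmetry score.

```python
# Illustrative sketch under two assumptions: the scan is expressed in a
# coordinate frame where x = 0 is the sagittal mid-plane, and left/right
# points can be matched by nearest neighbour after mirroring.
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_score(points: np.ndarray) -> float:
    """points: (n_points, 3) cloud of one scanned frame.
    Mirrors the cloud across the x = 0 plane and returns the mean distance
    between each original point and its nearest mirrored neighbour (in the
    same units as the scan); 0 would mean a perfectly symmetrical face."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect across the mid-plane
    distances, _ = cKDTree(mirrored).query(points)   # nearest mirrored counterpart
    return float(distances.mean())

# Comparing scans made before and after the botulinum toxin injection
# would then amount to comparing their asymmetry scores frame by frame.
```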

 

Series of facial expressions made during the scan.

 

Making 4D technology more readily available

To date, this 4D imaging technique has been tested on a small number of patients aged between 16 and 70, all of whom have tolerated it well. Doctors have also been satisfied with the results. The team is now looking to have the technology statistically validated, in order to deploy it on a larger scale. However, the equipment required is expensive, and acquiring the images and analyzing them requires substantial human resources.

For Mohamed Daoudi, the project’s future lies in simplifying the technology with low-cost 3D capture systems, but other prospects could also prove interesting. “Only one medical department in the Hauts-de-France region offers this approach, and many people come from afar to use it. In the future, we could imagine remote treatment, where all you would need is a computer and a tool like the Kinect. Another interesting market would be smartphones. Depth cameras which provide 3D images are beginning to appear on these devices, as well as on tablets. Although the image quality is not yet optimal, I am sure it will improve quickly. This type of device would be a good way of making the technology we developed more accessible”.

 

[1] Mohamed Daoudi is head of the 3D SAM team at the CRIStAL laboratory (UMR 9189). CRIStAL (Research center in Computer Science, Signal and Automatic Control of Lille) is a laboratory (UMR 9189) of the National Center for Scientific Research, University Lille 1 and Centrale Lille in partnership with University Lille 3, Inria and Institut Mines-Télécom (IMT).

[2] This project was supported by Fondation de l’avenir.