New generation of lithium batteries

Towards a new generation of lithium batteries?

The development of connected devices requires the miniaturization and integration of electronic components. Thierry Djenizian, a researcher at Mines Saint-Étienne, is working on new micrometric architectures for lithium batteries, which appear to be a very promising solution for powering smart devices. Following his talk at the IMT symposium devoted to energy in the context of the digital transition, he gives us an overview of his work and the challenges involved in his research.

Why is it necessary to develop a new generation of batteries?

Thierry Djenizian: First of all, it’s a matter of miniaturization. Since the 1970s, Moore’s law, which predicts an increase in the performance of microelectronic devices as they are miniaturized, has been upheld. But in the meantime, the energy aspect has not kept pace. We are now facing a problem: we can manufacture very sophisticated sub-micrometric components, but the energy sources we have to power them are not integrated in the circuits because they take up too much space. We are therefore trying to design micro-batteries which can be integrated within the circuits like other technological building blocks. They are highly anticipated for the development of connected devices, including a large number of wearable applications (smart textiles for example), medical devices, etc.

 

What difficulties have you encountered in miniaturizing these batteries?

TD: A battery is composed of three elements: two electrodes and an electrolyte separating them. In the case of micro-batteries, it is essentially the contact surface between the electrodes and the electrolyte that determines storage performance: the greater the surface, the better the performance. But as we decrease the size of batteries, and therefore of the electrodes and the electrolyte, there comes a point where the contact surface is too small and battery performance drops.

 

How do you go beyond this critical size without compromising performance?

TD: One solution is to transition from 2D geometry in which the two electrodes are thin layers separated by a third thin electrolyte layer, to a 3D structure. By using an architecture consisting of columns or tubes which are smaller than a micrometer, covered by the three components of the battery, we can significantly increase contact surfaces (see illustration below). We are currently able to produce this type of structure on the micrometric scale and we are working on reaching the nanometric scale by using titanium nanotubes.
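As a rough, back-of-the-envelope illustration (not taken from the interview), the sketch below estimates how much electrode/electrolyte contact area a 3D array of sub-micrometric pillars adds compared with a flat 2D film. All dimensions are hypothetical and chosen only to show the order of magnitude of the gain.

```python
import math

def area_gain_3d(pillar_radius_um, pillar_height_um, pitch_um):
    """Ratio of electrode/electrolyte contact area for a 3D pillar array
    versus a flat 2D film covering the same footprint.

    Each unit cell of a square array (pitch x pitch) contributes the flat
    footprint plus the lateral surface of one cylindrical pillar."""
    footprint = pitch_um ** 2
    lateral = 2 * math.pi * pillar_radius_um * pillar_height_um
    return (footprint + lateral) / footprint

# Illustrative (hypothetical) dimensions: 0.5 µm radius, 20 µm tall pillars
# on a 2 µm pitch -- about a 17x gain over a planar electrode.
print(f"Area gain: {area_gain_3d(0.5, 20, 2):.1f}x")
```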

 

On the left, a battery with a 2D structure. On the right, a battery with a 3D structure: the contact surface between the electrodes and the electrolyte has been significantly increased.

 

How do these new battery prototypes based on titanium nanotubes work?

TD: Let’s take the example of a battery that is low on charge. One of the electrodes is composed of lithium, nickel, manganese and oxygen. When you charge this battery by plugging it in, for example, the additional electrons set off an electrochemical reaction which frees the lithium from this electrode in the form of ions. The lithium ions migrate through the electrolyte and insert themselves into the nanotubes which make up the other electrode. When all the nanotube sites which can hold lithium have been filled, the battery is charged. During the discharging phase, a spontaneous electrochemical reaction is produced, freeing the lithium ions from the nanotubes toward the nickel-manganese-oxygen electrode, thereby generating the desired current.

 

What is the lifetime of these batteries?

TD: When a battery is working, great structural modifications take place; the materials swell and shrink in size due to the reversible insertion of lithium ions. And I’m not talking about small variations in size: the size of an electrode can become eight times larger in the case of 2D batteries which use silicon! Nanotubes provide a way to reduce this phenomenon and therefore help prolong the lifetime of these batteries. In addition, we are also carrying out research on electrolytes based on self-repairing polymers. One of the consequences of this swelling is that the contact interfaces between the electrodes and the electrolyte are altered. With an electrolyte that repairs itself, the damage will be limited.

 

Do you have other ideas for improving these 3D-architecture batteries?  

TD: One of the key issues for microelectronic components is flexibility. Batteries are no exception to this rule, and we would like to make them stretchable in order to meet certain requirements. However, the new lithium batteries we are discussing here are not yet stretchable: they fracture when subjected to mechanical stress. We are working on making the structure stretchable by modifying the geometry of the electrodes. The idea is to have a spring-like behavior: coupled with a self-repairing electrolyte, after deformation, batteries return to their initial position without suffering irreversible damage. We have a patent pending for this type of innovation. This could represent a real solution for making autonomous electronic circuits both flexible and stretchable, in order to satisfy a number of applications, such as smart electronic textiles.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 

cloud computing, Maurice Gagnaire

Cloud computing for longer smartphone battery life

How can we make our smartphone batteries last longer? For Maurice Gagnaire, a researcher at Télécom ParisTech, the solution could come through mobile cloud computing. If computations currently performed by our devices could be offloaded to local servers, their batteries would have to work less. This could extend the battery life for one charge by several hours. This solution was presented at the IMT symposium on April 28, which examined the role of the digital transition in the energy sector.

 

“Woe is me! My battery is dead!” So goes the thinking of dismayed users everywhere when they see the battery icon on their mobile phones turn red. Admittedly, smartphone battery life is a rather sensitive subject. Rare are those who use their smartphones for the normal range of purposes — as a telephone, for web browsing, social media, streaming videos, etc. — and whose batteries last longer than 24 hours. Extending battery life is a real challenge, especially in light of the emergence of 5G, which will open the door to new energy-intensive uses such as ultra HD streaming or virtual reality, not to mention the use of devices as data aggregators for the Internet of Things (IoT). The Next Generation Mobile Networks Alliance (NGMN) has issued a recommendation to extend mobile phone battery life to three days between charges.

There are two major approaches possible in order to achieve this objective: develop a new generation of batteries, or find a way for smartphones to consume less battery power. In the laboratories of Télécom ParisTech, Maurice Gagnaire, a researcher in the field of cloud computing and energy-efficient data networks, is exploring the second option. “Mobile devices consume a great amount of energy,” he says. “In addition to having to carry out all the computations for the applications being used, they are also constantly working on connectivity in the background, in order to determine which base station to connect to and the optimal speed for communication.” The solution being explored by Maurice Gagnaire and his team is based on reducing smartphones’ energy consumption for computations related to applications. The scientists started out by establishing a hierarchy of applications according to their energy demands as well as to their requirements in terms of response time. A tool used to convert an audio sequence into a written text, for example, does not present the same constraints as a virus detection tool or an online game.

Once they had carried out this first step, the researchers were ready to tackle the real issue — saving energy. To do so, they developed a mobile cloud computing solution in which the most frequently-used and energy-consuming software tools are supposed to be available in nearby servers, called cloudlets. Then, when a telephone has to carry out a computation for one of its applications, it offloads it to the cloudlet in order to conserve battery power. Two major tests determine whether to offload the computation. The first one is based on an energy assessment: how much battery life will be gained? This depends on the effective capacity of the radio interface at a particular location and time. The second test involves quality expectations for user experience: will use of the application be significantly impacted or not?

Together, these two tests form the basis of the MAO (Mobile Applications Offloading) algorithm developed at Télécom ParisTech. The difficulty in its development arose from its interdependence with such diverse aspects as the hardware architecture of mobile phone circuitry, the protocols used for the radio interface, and user mobility. In the end, “the principle is similar to what you find in airports where you connect to a local server located at the Wi-Fi spot,” explains Maurice Gagnaire. But in the case of energy savings, the service is intended to be “universal” and not linked to a precise geographic area as it is for airports. In addition to offering browsing or tourism services, cloudlets would host a duplicate of the most widely-used applications. When a telephone has low battery power, or when it is responding to heavy user demand for several applications, the MAO algorithm makes it possible to autonomously offload computations from the mobile device to cloudlets.
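The exact MAO algorithm is not described in the article, but the two offloading tests it combines can be sketched as follows. The task parameters, radio model and thresholds below are purely illustrative assumptions, not the real implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    local_energy_j: float      # energy to compute the task on the phone
    payload_bits: float        # data to send to the cloudlet
    result_bits: float         # data returned by the cloudlet
    max_latency_s: float       # response-time budget for the application

def should_offload(task, radio_rate_bps, radio_power_w, cloudlet_latency_s):
    """Minimal sketch of the two tests described above (not the real MAO code).

    Test 1 (energy): offloading pays off only if transmitting the payload and
    receiving the result costs less energy than computing locally.
    Test 2 (quality of experience): the round trip plus server time must fit
    within the application's response-time budget."""
    tx_time = (task.payload_bits + task.result_bits) / radio_rate_bps
    offload_energy = tx_time * radio_power_w

    energy_ok = offload_energy < task.local_energy_j
    latency_ok = tx_time + cloudlet_latency_s <= task.max_latency_s
    return energy_ok and latency_ok

# Example: a speech-to-text request over a good radio link (hypothetical values)
speech = Task(local_energy_j=8.0, payload_bits=2e6, result_bits=1e4,
              max_latency_s=1.5)
print(should_offload(speech, radio_rate_bps=20e6, radio_power_w=1.2,
                     cloudlet_latency_s=0.3))   # True: offloading is worthwhile
```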

 

Extending battery life by several hours?

Through a collaboration with researchers from Arizona State University in Tempe (USA), the theoretical scenarios studied in Paris were implemented in real-life situations. The preliminary results show that for the most demanding applications, such as voice recognition, a 90% reduction in energy consumption can be obtained when the cloudlet is located at the base of the antenna in the center of the cell. The experiments underscored the great impact of the self-learning function of the database linked to the voice recognition tool on the performances of the MAO algorithm.

Extending the use of the MAO algorithm to a broad range of applications could expand the scale of the solution. In the future, Maurice Gagnaire plans to explore the externalization of certain tasks carried out by graphics processers (or GPUs) in charge of managing smartphones’ high-definition touchscreens. Mobile game developers should be particularly interested in this approach.

More generally, Maurice Gagnaire’s team now hopes to collaborate with a network operator or equipment manufacturer. A partnership would provide an opportunity to test the MAO algorithm and cloudlets on a real use case and therefore determine the large-scale benefits for users. It would also offer operators new perspectives for next-generation base stations, which will have to be created to accompany the development of 5G planned for 2020.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 

 

telecommunications, energy transition, Loutfi Nuaymi

Energy and telecommunications: brought together by algorithms

It is now widely accepted that algorithms can have a transformative effect on a particular sector. In the field of telecommunications, they may indeed greatly impact how energy is produced and consumed. Between reducing the energy consumption of network facilities and making better use of renewable energy, operators have embarked on a number of large-scale projects. And each time, algorithms have been central to these changes. The following is an overview of the transformations currently taking place and findings from research by Loutfi Nuaymi, a researcher in telecommunications at IMT Atlantique. On April 28 he gave a talk about this subject at the IMT symposium dedicated to energy and the digital revolution.

 

20,000: the average number of relay antennae owned by a mobile operator in France. Also called “base stations,” they represent 70% of the energy bill for telecommunications operators. Since each station transmits with a power of approximately 1 kW, reducing their demand for electricity is a crucial issue for operators seeking to improve the energy efficiency of their networks. To achieve this objective, the sector is currently focusing more on technological advances in hardware than on software. Thanks to the latest advances, a recent base station consumes significantly less energy while handling data throughput that is nearly a hundred times higher. But new algorithms that promise energy savings are being developed, including some which simply involve… switching off base stations at certain scheduled times!

This solution may seem radical since switching off a base station in a cellular network means potentially preventing users within a cell from accessing the service. Loutfi Nuaymi, a researcher in telecommunications at IMT Atlantique, is studying this topic, in collaboration with Orange. He explains that, “base stations would only be switched off during low-load times, and in urban areas where there is greater overlap between cells.” In large cities, switching off a base station from 3 to 5am would have almost no consequence, since users are likely to be located in areas covered by at least one other base station, if not more.

Here, the role of algorithms is twofold. First of all, they would manage the switching off of antennas when user demand is lowest (at night) while maintaining sufficient network coverage. Secondly, they would gradually switch the base stations back on when users reconnect (in the morning) and up to peak hours during which all cells must be activated. This technique could prove to be particularly effective in saving energy since base stations currently remain switched on at all times, even during off-peak hours.
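As a toy illustration of the switch-off logic described above (not the operators’ actual algorithm), the sketch below marks a station as eligible for standby only if its nighttime load is low and at least one overlapping neighbour stays on to keep the area covered. Loads, names and the threshold are hypothetical.

```python
def stations_to_switch_off(stations, load_threshold=0.1):
    """Toy version of the nighttime switch-off rule described above.

    A station is a dict with its current 'load' (0..1) and the list of
    'neighbours' whose cells overlap its coverage area.  A station is only
    eligible if its load is low AND at least one overlapping neighbour is
    still on."""
    off = set()
    for name, s in stations.items():
        low_load = s["load"] < load_threshold
        covered = any(n not in off for n in s["neighbours"])
        if low_load and covered:
            off.add(name)
    return off

# Illustrative urban cluster at 3 a.m. (hypothetical loads)
cluster = {
    "BS1": {"load": 0.03, "neighbours": ["BS2", "BS3"]},
    "BS2": {"load": 0.25, "neighbours": ["BS1", "BS3"]},
    "BS3": {"load": 0.05, "neighbours": ["BS1", "BS2"]},
}
print(stations_to_switch_off(cluster))   # {'BS1', 'BS3'}: BS2 keeps the area covered
```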

Loutfi Nuaymi points out that, “the use of such algorithms for putting base stations on standby mode is taking time to reach operators.” Their reluctance is understandable, since interruptions in service are by their very definition the greatest fear of telecom companies. Today, certain operators could put one base station out of ten on standby mode in dense urban areas in the middle of the night. But the IMT Atlantique researcher is confident in the quality of his work and asserts that it is possible “to go even further, while still ensuring high quality service.”

 


Gradually switching base stations on or off in the morning and at night according to user demand is an effective energy-saving solution for operators.

 

While energy management algorithms already allow for significant energy savings in 4G networks, their contributions will be even greater over the next five years, with 5G technology leading to the creation of even more cells to manage. The new generation will be based on a large number of femtocells covering areas as small as ten meters — in addition to traditional macrocells with a one-kilometer area of coverage.

Femtocells consume significantly less energy, but given the large number of these cells, it may be advantageous to switch them off when not in use, especially since they are not used as the primary source of data transmission, but rather to support macrocells. Switching them off would not in any way prevent users from accessing the service. Loutfi Nuaymi describes one way this could work. “It could be based on a system in which a user’s device will be detected by the operator when it enters a femtocell. The operator’s energy management algorithm could then calculate whether it is advantageous to switch on the femtocell, by factoring in, for example, the cost of start-up or the availability of the macrocell. If it is not overloaded, there is no reason to switch on the femtocell.”
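The wake-up rule described here can be sketched roughly as follows; the function, thresholds and energy figures are illustrative assumptions, not the operator’s real decision model.

```python
def switch_on_femtocell(macro_load, femto_startup_cost_j,
                        expected_offload_j, macro_load_limit=0.8):
    """Sketch of the femtocell wake-up rule described above (illustrative only).

    If the macrocell still has headroom, keep the femtocell asleep and let the
    macrocell serve the user.  Otherwise wake the femtocell, but only if the
    energy it is expected to offload exceeds its start-up cost."""
    if macro_load < macro_load_limit:
        return False                     # macrocell not overloaded: stay off
    return expected_offload_j > femto_startup_cost_j

print(switch_on_femtocell(macro_load=0.55, femto_startup_cost_j=50,
                          expected_offload_j=120))   # False: macrocell copes
print(switch_on_femtocell(macro_load=0.92, femto_startup_cost_j=50,
                          expected_offload_j=120))   # True: worth waking up
```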

 

What is the right energy mix to power mobile networks?

The value of these algorithms lies in their capacity to calculate cost/benefit ratios according to a model which takes account of a maximum number of parameters. They can therefore provide autonomy, flexibility, and quick performance in base station management. Researchers at IMT Atlantique are building on this decision-making support principle and going a step further than simply determining if base stations should be switched on or put on standby mode. In addition to limiting energy consumption, they are developing other algorithms for optimizing the energy mix used to power the network.

They begin with two observations: renewable sources of energy are less expensive, and if operators equip themselves with solar panels or wind turbines, they must also store the energy produced to make up for periodic variations in sunshine and the intermittent nature of wind. So, how can an operator decide between using stored energy, energy supplied by its own solar or wind facilities, or energy from the traditional grid, which may rely on a varying degree of smart technology? Loutfi Nuaymi and his team are also working on use cases related to this question and have joined forces with industrial partners to test and develop algorithms which could provide some answers.

“One of the very concrete questions operators ask is what size battery is best to use for storage,” says the researcher. “Huge batteries cost as much as what operators save by replacing fossil fuels with renewable energy sources. But if the batteries are too small, they will have storage problems. We’re developing algorithmic tools to help operators make these choices, and determine the right size according to their needs, type of battery used, and production capacity.”
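To make this battery-sizing trade-off concrete, here is a deliberately simplified cost model (our own illustration, not the team’s tool): an oversized battery’s purchase cost eats up the grid savings, while an undersized one cannot absorb the daily renewable surplus. All prices and energy profiles are hypothetical.

```python
def net_saving(battery_kwh, daily_surplus_kwh, daily_deficit_kwh,
               grid_price_per_kwh=0.15, battery_price_per_kwh=120,
               lifetime_days=3650):
    """Toy trade-off behind the battery-sizing question discussed above.

    Each day the site produces 'daily_surplus_kwh' more renewable energy than
    it consumes during the day, and misses 'daily_deficit_kwh' at night.  The
    battery can carry over at most min(surplus, deficit, capacity) kWh, which
    offsets grid purchases; its purchase cost is spread over its lifetime."""
    shifted = min(battery_kwh, daily_surplus_kwh, daily_deficit_kwh)
    daily_grid_saving = shifted * grid_price_per_kwh
    daily_battery_cost = battery_kwh * battery_price_per_kwh / lifetime_days
    return daily_grid_saving - daily_battery_cost

# Hypothetical site: 30 kWh of surplus solar per day, 25 kWh of night deficit.
# A mid-sized battery gives the best net saving; bigger is not better.
for size in (10, 25, 60):
    print(size, "kWh ->", round(net_saving(size, 30, 25), 2), "EUR/day")
```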

Another question: is it more profitable to equip each base station with its own solar panel or wind turbine, or rather to create an energy farm to supply power to several antennas? The question is still being explored, but preliminary findings suggest that for solar panels, neither arrangement is clearly preferable. Wind turbines, however, are imposing objects that are sometimes rejected by neighbors, which makes it preferable to group them together.

 

Renewable energies at the cost of diminishing quality of service?  

Once this type of constraint has been ruled out, operators must calculate the maximum proportion of renewable energies to include in the energy mix with the least possible consequences on quality of mobile service. Sunshine and wind speed are sporadic by nature. For an operator, a sudden drop in production at a wind or solar power farm could have direct consequences on network availability — no energy means no working base stations.

Loutfi Nuaymi admits that these limitations reveal the complexity of developing algorithms, “We cannot simply consider the cost of the operators’ energy bills. We must also take account of the minimum proportion of renewable energies they are willing to use so that their practices correspond to consumer expectations, the average speed of distribution to satisfy users, etc.”

Results from research in this field show that in most cases, the proportion of renewable energies used in the energy mix can be raised to 40%, with only an 8% drop in quality of service as a result. In off-peak hours, this represents only a slight deterioration and does not have a significant effect on network users’ ability to access the service.

And even if a drastic reduction in quality of service should occur, Loutfi Nuaymi has some solutions. “We have worked on a model for a mobile subscription that delays calls if the network is not available. The idea is based on the principle of overbooking planes. Voluntary subscribers — who, of course, do not have to choose this subscription — accept the risk of the network being temporarily unavailable and, in return, receive financial compensation if it affects their use.”

Although this new subscription format is only one possible solution for operators, and is still a long way from becoming reality, it shows how the field of telecommunications may be transformed in response to energy issues. Questions have arisen at times about the outlook for mobile operators. Given the energy consumed by their facilities and the rise of smart grids which make it possible to manage both self-production and the resale of electricity, these digital players could, over time, come to play a significant role in the energy sector.

“It is an open-ended question and there is a great deal of debate on the topic,” says Loutfi Nuaymi. “For some, energy is an entirely different line of business, while others see no reason why they should not sell the energy collected.” The controversy could be settled by new scientific studies in which the researcher is participating. “We are already performing technical-economic calculations in order to study operators’ prospects.” The highly-anticipated results could significantly transform the energy and telecommunications markets.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 

What is net neutrality?

Net neutrality is a legislative shield for preventing digital discrimination. Regularly defended in the media, it ensures equal access to the internet for both citizens and companies. The topic features prominently in a report on the state of the internet published on May 30 by Arcep (the French telecommunications and postal regulatory body). Marc Bourreau, a researcher at Télécom ParisTech, outlines the basics of net neutrality. He explains what it encompasses, the major threats it is facing, and the underlying economic issues.

 

How is net neutrality defined?

Marc Bourreau: Net neutrality refers to the rules requiring internet service providers (ISPs) to treat all data packets in the same manner. It aims to prevent discrimination by the network based on the content, the service or application, or the identity of the source of the traffic. By content, I mean everything carried in IP packets. This includes the web, along with social media, news sites, streaming or gaming platforms, as well as other services such as e-mail.

If it is not respected, in what way may discrimination take place?

MB: An iconic example of discrimination occurred in the United States. An operator was offering a VoIP service similar to Skype. It had decided to block all competing services, including Skype in particular. In concrete terms, this means that customers could not make calls with an application other than the one offered by their operator. To give an equivalent, this would be like a French operator with an on-demand video service, such as Orange, deciding to block Netflix for its customers.

Net neutrality is often discussed in the news. Why is that?

MB: It is the subject of a long legal and political battle in the United States. In 2005, the American Federal Communications Commission (FCC), the regulator for telecoms, the internet and the media, ruled that internet access no longer fell within the field of telecommunications from a legal standpoint, and that it belonged instead to information services, which had major consequences. In the name of freedom of information, the FCC had little power to regulate information services, which gave operators greater flexibility. In 2015, the Obama administration decided to place internet access in the telecommunications category once again. The FCC’s regulatory power was therefore restored, and it established three major rules in February 2015.

What are these rules?

MB: An ISP cannot block traffic except in the case of objective traffic management reasons — such as ensuring network security. An ISP cannot degrade a competitor’s service. And an ISP cannot offer pay-to-use fast lanes to access better traffic. In other words, they cannot create an internet highway. The new Trump administration has put net neutrality back in the public eye with the president’s announcement that he intends to roll back these rules. The new director of the FCC, Ajit Pai, appointed by Trump in January, has announced that he intends to reclassify internet service as belonging to the information services category.

Is net neutrality debated to the same extent in Europe?

MB: The European regulation for an open internet, adopted in November 2015, has been in force since April 30, 2016. This regulation establishes the same three rules, with a minor difference in the third rule focusing on internet highways. It uses stricter wording, thereby prohibiting any sort of discrimination. In other words, even if an operator wanted to set up a free internet highway, it could not do so.

Could the threats to net neutrality in the United States have repercussions in Europe?

MB: Not from a legislative point of view. But there could be some indirect consequences. Let’s take a hypothetical case: if American operators introduced internet highways, or charged for access to fast lanes, the major platforms could subscribe to these services. If Netflix had to pay for better access to networks within the United States, it could also raise its subscription prices to offset this cost. And that could indirectly affect European consumers.

The issue of net neutrality seems to be more widely debated in the United States than in Europe. How can this be explained? 

MB: Here in Europe, net neutrality is less of a problem than it is in the United States and it is often said that it is because there is greater competition in the internet access market in Europe. I have worked on this topic with a colleague, Romain Lestage. We analyzed the impact of competition on telecoms operators’ temptation to charge content producers. The findings revealed that as market competition increases, operators obviously earn less from consumers and are therefore more likely to make attempts to charge content producers. The greater the competition, the stronger the temptation to deviate from net neutrality.

Do certain digital technologies pose a threat to net neutrality in Europe? 

MB: 5G will raise questions about the relevance of the European legislation, especially in terms of net neutrality. It was designed as a technology which can provide services with very different levels of quality. Some services will be very sensitive to server response time, others to speed. Between communications for connected devices and ultra-HD streaming, the needs are very different. This calls for creating different qualities of network service, which is, in theory, contradictory to net neutrality. Telecoms operators in Europe are using this as an argument for reviewing the regulation, arguing as well that this would lead to increased investment in the sector.

Does net neutrality block investments?

MB: We studied this question with colleagues from the Innovation and Regulation of Digital Services Chair. Our research showed that without net neutrality regulation, fast lanes — internet highways— would lead to an increase in revenue for operators, which they would reinvest in network traffic management to improve service quality. Content providers who subscribe to fast lanes would benefit by offering users higher-quality content. However, these studies also showed that deregulation would lead to the degradation of free traffic lanes, to incite content providers to subscribe to the pay-to-use lanes. Net neutrality legislation is therefore crucial to limiting discrimination against content providers, and consequently, against consumers as well.

 

digital labor

What is Digital Labor?

Are we all working for digital platforms? This is the question posed by a new field of research: digital labor. Web companies use personal data to create considerable value —but what do we get in return? Antonio Casilli, a researcher at Télécom ParisTech and a specialist in digital labor, will give a conference on this topic on June 10 at Futur en Seine in Paris. In the following interview he outlines the reasons for the unbalanced relationship between platforms and users and explains its consequences.

 

What’s hiding behind the term “digital labor?”

Antonio Casilli: First of all, digital labor is a relatively new field of research. It appeared in the late 2000s and explores new ways of creating value on digital platforms. It focuses on the emergence of a new way of working, which is “taskified” and “datafied.” We must define these words in order to understand them better. “Datafied,” because it involves producing data so that digital platforms can derive value. “Taskified,” because in order to produce data effectively, human activity must be standardized and reduced to its smallest unit. This leads to the fragmentation of complex knowledge as well as to the risks of deskilling and the breakdown of traditional jobs.

 

And who exactly performs this work in question?

AC: Micro-workers who are recruited via digital platforms. They are unskilled laborers, the click workers behind the API. But, since this is becoming a widespread practice, we could say anyone who works performs digital labor. And even anyone who is a consumer. Seen from this perspective, anyone who has a Facebook, Twitter, Amazon or YouTube account is a click worker. You and I produce content —videos, photos, comments —and the platforms are interested in the metadata hiding behind this content. Facebook isn’t interested in the content of the photos you take, for example. Instead, it is interested in where and when the photo was taken, what brand of camera was used. And you produce this data in a taskified manner since all it requires is clicking on a digital interface. This is a form of unpaid digital labor since you do not receive any financial compensation for your work. But it is work nonetheless: it is a source of value which is tracked, measured, evaluated and contractually defined by the terms of use of the platforms.

 

Is there digital labor which is not done for free?

AC: Yes, that is the other category included in digital labor: micro-paid work. People who are paid to click on interfaces all day long and perform very simple tasks. These crowdworkers are mainly located in India, the Philippines, or in developing countries where average wages are low. They receive a handful of cents for each click.

 

How do platforms benefit from this labor?

AC: It helps them make their algorithms perform better. Amazon, for example, has a micro-work service called Amazon Mechanical Turk, which is almost certainly the best-known micro-work platform in the world. Their algorithms for recommending purchases, for example, need to practice on large, high-quality databases in order to be effective. Crowdworkers are paid to sort, annotate and label images of products proposed by Amazon. They also extract textual information for customers, translate comments to improve additional purchase recommendations in other languages, write product descriptions etc.

I’ve cited Amazon but it is not the only example.  All the digital giants have micro-work services. Microsoft uses UHRS, Google has its EWOQ service etc. IBM’s artificial intelligence, Watson, which has been presented as one of its greatest successes in this field, relies on MightyAI. This company pays micro-workers to train Watson, and its motto is “Train data as a service.” Micro-workers help train visual recognition algorithms by indicating elements in images, such as the sky, clouds, mountains etc. This is a very widespread practice. We must not forget that behind all artificial intelligence, there are, first and foremost, human beings. And these human beings are, above all, workers whose rights and working conditions must be respected.

Workers are paid a few cents for tasks proposed on Amazon Mechanical Turk, which includes such repetitive tasks as “answer a questionnaire about a film script.”

 

This form of digital labor is a little different from the kind I carry out because it involves more technical tasks.   

AC:  No, quite the contrary. They perform simple tasks that do not require expert knowledge. Let’s be clear: work carried out by micro-workers and crowds of anonymous users via platforms is not the ‘noble’ work of IT experts, engineers, and hackers. Rather, this labor puts downward pressure on wages and working conditions for this portion of the workforce. The risk for digital engineers today is not being replaced by robots, but rather having their jobs outsourced to Kenya or Nigeria where they will be done by code micro-workers recruited by new companies like Andela, a start-up backed by Mark Zuckerberg. It must be understood that micro-work does not rely on a complex set of knowledge. Instead it can be described as: write a line, transcribe a word, enter a number, label an image. And above all, keep clicking away.

 

Can I detect the influence of these clicks as a user?

AC: Crowdworkers hired by genuine “click farms” can also be mobilized to watch videos, make comments or “like” something. This is often what happens during big advertising or political campaigns. Companies or parties have a budget and they delegate the digital campaign to a company, which in turn outsources it to a service provider. And the end result is two people in an office somewhere, stuck with the unattainable goal of getting one million users to engage with a tweet. Because this is impossible, they use their budget to pay crowdworkers to generate fake clicks. This is also how fake news spreads to such a great extent, backed by ill-intentioned firms who pay a fake audience. Incidentally, this is the focus of the Profane research project I am leading at Télécom ParisTech with Benjamin Loveluck and other French and Belgian colleagues.

 

But don’t the platforms fight against these kinds of practices?

AC: Not only do they not fight against these practices, but they have incorporated them in their business models. Social media messages with a large number of likes or comments make other users more likely to interact and generate organic traffic, thereby consolidating the platform’s user base. On top of that, platforms also make use of these practices through subcontractor chains. When you carry out a sponsored campaign on Facebook or Twitter, you can define your target as clearly as you like, but you will always end up with clicks generated by micro-workers.

 

But if these crowdworkers are paid to like posts or make comments, doesn’t that raise questions about tasks carried out by traditional users?

AC: That is the crux of the issue. From the platform’s perspective, there is no difference between me and a click-worker paid by the micro-task. Both of our likes have the same financial significance. This is why we use the term digital labor to describe these two different scenarios. And it’s also the reason why Facebook is facing a class-action lawsuit filed with the Court of Justice of the European Union representing 25,000 users. They demand €500 per person for all the data they have produced. Google has also faced a claim for its Recaptcha, from users who sought to be re-classified as employees of the Mountain View firm. Recaptcha was a service which required users to confirm that they were not robots by identifying difficult-to-read words. The data collected was used to improve Google Books’ text recognition algorithms in order to digitize books. The claim was not successful, but it raised public awareness of the notion of digital labor. And most importantly, it was a wake-up call for Google, who quickly abandoned the Recaptcha system.

 

Could traditional users be paid for the data they provide?

AC: Since both micro-workers, who are paid a few cents for every click, and ordinary users perform the same sort of productive activity, this is a legitimate question to ask. On June 1, Microsoft decided to reward Bing users with vouchers in order to convince them to use its search engine instead of Google. It is possible for a platform to have a negative price, meaning that it pays users to use the platform. The question is how to determine at what point this sort of practice is akin to a wage, and whether the wage approach is both the best solution from a political viewpoint and the most socially viable. This is where we get into the classic questions posed by the sociology of labor. They also apply to Uber drivers, who make a living from the application and whose data is used to train driverless cars. Intermediary bodies and public authorities have an important role to play in this context. There are initiatives, such as one led by the IG Metall union in Germany, which strive to gain recognition for micro-work and establish collective negotiations to assert the rights of clickworkers, and more generally, all platform workers.

 

On a broader level, we could ask what a digital platform really is.

AC: In my opinion, it would be better if we acknowledged the contractual nature of the relationship between a platform and its users. The general terms of use should be renamed “Contracts to use data and tasks provided by humans for commercial purposes,” if the aim is commercial. Because this is what all platforms have in common: extracting value from data and deciding who has the right to use it.

 

 

Attacks, computer viruses, malicious software, malware, cyberattack, Hervé Debar, Télécom SudParis, Cybersecurity

Viruses and malware: are we protecting ourselves adequately?

Cybersecurity incidents are increasingly gaining public attention. They are frequently mentioned in the media and discussed by specialists, such as Guillaume Poupard, Director General of the French Information Security Agency (ANSSI). This attests to the fact that these digital incidents have an increasingly significant impact on our daily lives. Questions therefore arise about how we are protecting our digital activities, and whether this protection is adequate. The publicity surrounding security incidents may, at first glance, lead us to believe that we are not doing enough.

 

A look at the current situation

Let us first take a look at the progression of software vulnerabilities since 2001, as illustrated by the National Vulnerability Database (NVD), the reference site of the American National Institute of Standards and Technology (NIST).

 

malware, virus, cyberattack, cybersecurity

Distribution of vulnerabilities to attacks, rated by severity of vulnerability over a period of time. CC BY

 

Upon an analysis of the distribution of vulnerabilities to computer-related attacks, as published by the American National Institute of Standards and Technology (NIST) in visualizations on the National Vulnerability Database, we observe that since 2005, there has not been a significant increase in the number of vulnerabilities published each year. The distribution of risk levels (high, medium, low) has also remained relatively steady. Nevertheless, it is possible that the situation may be different in 2017, since, just halfway through the year, we have already reached publication levels similar to those of 2012.

It should be noted, however, that the growing number of vulnerabilities published in comparison to before 2005 is also partially due to a greater exposure of systems and software to attempts to compromise and external audits. For example, Google has implemented Google Project Zero, which specifically searches for vulnerabilities in programs and makes them public. It is therefore natural that more discoveries are made.

There is also an increasing number of objects, the much-discussed Internet of Things, which use embedded software, and therefore present vulnerabilities. The recent example of the “Mirai” network demonstrates the vulnerability of these environments which account for a growing portion of our digital activities. Therefore, the rise in the number of vulnerabilities published simply represents the increase in our digital activities.

 

What about the attacks?

The publicity surrounding attacks is not directly tied to the number of vulnerabilities, even if the two are related. The notion of vulnerability does not in itself express the impact a vulnerability may have on our lives. Indeed, the malicious code WannaCry, which affected the British health system by disabling certain hospitals and emergency services, can be viewed as a significant step up in the harmfulness of malicious code. This attack led to deaths or delayed care on an unprecedented scale.

It is always easy to say, in hindsight, that an event was foreseeable. And yet, it must be acknowledged that the use of “old” tools (Windows XP, SMBv1) in these vital systems is problematic. In the digital world, fifteen years represents three or even four generations of operating systems, unlike in the physical world, where we can have equipment dating from 20 or 30 years ago, if not even longer. Who could imagine a car being obsolete (to the point of no longer being usable) after five years? This major difference in evaluating time, which is deeply engrained in our current way of life, is largely responsible for the success and impact of the attacks we are experiencing today.

It should also be noted that in terms of both scale and impact, digital attacks are not new. In the past, worms such as CodeRed in 2001 and Slammer in 2003, also infected a number of important machines, making the internet unusable for some time. The only difference was that at the time of these attacks, critical infrastructures were less dependent on a permanent internet connection, therefore limiting the impact to the digital world alone.

The most critical attacks, however, are not those from which the attackers benefit the most. In the Canadian Bitcoin Hijack of 2014, for example, attackers hijacked this virtual currency for direct financial gain without disturbing the bitcoin network, while other similar attacks on routing in 2008 made the network largely unavailable without any financial gain.

So, where does all this leave us in terms of the adequacy of our digital protection?

There is no question that outstanding progress has been made in protecting information systems over the past several years. The detection of an increasing number of vulnerabilities, combined with progressively shorter periods between updates, is continually strengthening the reliability of digital services. The automation of the update process for individuals, which concerns operating systems as well as browsers, applications, telephones and tablets, has helped limit exposure to vulnerabilities.

At the same time, in the business world we have witnessed a shift towards a real understanding of the risks involved in digital uses. This, along with the introduction of technical tools and resources for training and certification, could help increase all users’ general awareness of both the risks and opportunities presented by digital technology.

 

How can we continue to reduce the risks?

After working in this field for twenty-five years, and though we must remain humble in response to the risks we face and will continue to face, I remain optimistic about the possibilities of strengthening our confidence in the digital world. Nevertheless, it appears necessary to support users in their digital activities in order to help them understand how these services work and the associated risks. ANSSI’s publication of measures for a healthy network for personal and business use is an important example of this need for information and training which will help all individuals make conscious, appropriate choices when it comes to digital use.

Another aspect, which is more oriented towards developers and service providers, is increasing the modularity of our systems. This will allow us to control access to our digital systems, make them simple to configure, and easier to update. In this way, we will continue to reduce our exposure to the risk of a computer-related attack while using our digital tools to an ever-greater extent.

Hervé Debar, Head of the Telecommunications Networks and Services department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.

 

Big Data, TeraLab, Anne-Sophie Taillandier

What is Big Data?

On the occasion of the Big Data Congress in Paris, which was held on 6 and 7 March at the Palais des Congrès, Anne-Sophie Taillandier, director of TeraLab, examines this digital concept which plays a leading role in research and industry.

 

Big Data is a key element in the history of data storage. It has driven an industrial revolution and is a concept inherent to 21st century research. The term first appeared in 1997, and initially described the problem of an amount of data that was too big to be processed by computer systems. These systems have greatly progressed since, and have transformed the problem into an opportunity. We talked with Anne-Sophie Taillandier, director of the Big Data platform TeraLab about what Big Data means today.

 

What is the definition of Big Data?

Anne-Sophie Taillandier: Big Data… it’s a big question. Our society, companies, and institutions have produced an enormous amount of data over the last few years. This growth has been favored by the growing number of sources (sensors, web, after-sales service, etc.). What is more, the capacities of computers have increased tenfold. We are now able to process these large volumes of data.

These data are very varied: they may be text, measurements, images, videos, or sound. They are multimodal, that is, able to be combined in several ways. They contain rich information and are worth using to optimize existing products and/or services, or to invent new approaches. In any case, it is not the quantity of data alone that is important. What matters is that Big Data enables us to cross-reference this information with open data, and can therefore provide us with relevant insights. Finally, I prefer to speak of data innovation rather than Big Data – it is more appropriate.

 

Who are the main actors and beneficiaries of Big Data?

AST: Everyone is an actor, and everyone can benefit from Big Data. All industry sectors (mobility, transport, energy, geospatial data, insurance, etc.) but also the health sector. Citizens are especially concerned by the health sector. Research is a key factor in Big Data and an essential partner to industry. The capacities of machines now allow us to establish new algorithms for processing big quantities of data. The algorithms are progressing quickly, and we are constantly pushing the boundaries.

Data security and governance are also very important. Connected objects, for example, accumulate user data. This raises the question of securing this information. Where do the data go? But also, what am I allowed to use them for? Depending on the case, anonymization might be appropriate. These are the types of questions facing the Big Data stakeholders.

 

How can society and companies use Big Data?

AST: Innovation in data helps us to develop new products and services, and to optimize already existing ones. Take the example of the automobile. Vehicles generate data allowing us to optimize maintenance. The data accumulated from several vehicles can also be useful in manufacturing the next vehicle, they can assist in the design process. These same data may also enable us to offer new services to passengers, professionals, suppliers, etc. Another important field is health. E-health promotes better healthcare follow-up and may also improve practices, making them better-adapted to the patient.

 

What technology is used to process Big Data?

AST: The technology allowing us to process data is highly varied. There are algorithmic systems, such as Machine Learning and Deep Learning. There is also artificial intelligence. Then, there are also the frameworks of open source software, or paid solutions. It is a very broad field. With Big Data, companies can open up their data in an aggregated form to develop new services. Finally, technology is advancing very quickly, and is constantly influencing companies’ strategic decisions.

Quantum computer, Romain Alléaume

What is a Quantum Computer?

The use of quantum logic for computing promises a radical change in the way we process information. The calculating power of a quantum computer could surpass that of today’s biggest supercomputers within ten years. Romain Alléaume, a researcher in quantum information at Télécom ParisTech, helps us to understand how they work.

 

Is the nature of the information in a quantum computer different?

Romain Alléaume: In classical computer science, information is encoded in bits: 1 or 0. It is not quite the same in quantum computing, where information is encoded on what we refer to as “quantum bits,” or qubits. And there is a big difference between the two. A standard bit exists in one of two states, either 0 or 1. A qubit can exist in any superposition of these two states, and can therefore have many more than two values.

There is a stark difference between using several bits or qubits. While n standard bits can only take a value among 2^n possibilities, n qubits can take on any combination of these 2^n states. For example, 5 bits take a value among 32 possibilities: 00000, 00001… right up to 11111. 5 qubits can take on any linear superposition of the previous 32 states, which is more than one billion states. This phenomenal expansion in the size of the space of accessible states is what explains the quantum computer’s greater computing capacity.
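In standard quantum notation (a textbook formulation, not a quote from the interview), the states described above can be written as follows:

```latex
% General state of a single qubit and of an n-qubit register
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
\[
  \lvert \Psi \rangle = \sum_{x \in \{0,1\}^n} c_x \, \lvert x \rangle,
  \qquad \sum_x |c_x|^2 = 1
\]
% For n = 5, the sum runs over the 2^5 = 32 basis states 00000, 00001, ..., 11111.
```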

 

Concretely, what does a qubit look like?

RA: Concretely, we can encode a qubit on any quantum system with two states. The most favourable experimental systems are the ones we know how to manipulate precisely. This is for instance the case with the energy levels of electrons in an atom. In quantum mechanics, the energy of an electron “trapped” around an atomic nucleus may take different values, and these energy levels take on specific “quantized” values, hence the name, quantum mechanics. We can call the first two energy levels of the atom 0 and 1: 0 corresponding to the lowest level of energy and 1 to a higher level of energy, known as the “excited state”. We can then encode a quantum bit by putting the atom in the 0 or the 1 state, but also in any superposition (linear combination) of the 0 state and the 1 state.

To create good qubits, we have to find systems such that the quantum information remains stable over time. In practice, creating very good qubits is an experimental feat: atoms tend to interact with their surroundings and lose their information. We call this phenomenon decoherence. To avoid decoherence, we have to carefully protect the qubits, for example by putting them in very low temperature conditions.

 

What type of problems does the quantum computer solve efficiently?

RA: It exponentially increases the speed with which we can solve “promise problems”, that is, problems with a defined structure, where we know the shape of the solutions we are looking for. However, for an unstructured search such as a reverse directory lookup, the quantum computer has only been proven to speed up the process by a square-root factor compared with a regular computer. There is an increase, but not a spectacular one.
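To put that square-root factor into numbers (standard Grover scaling; the example figures are ours, not from the interview):

```latex
% Unstructured search over N entries (Grover scaling)
\[
  \text{classical: } O(N) \text{ queries}, \qquad
  \text{quantum: } O(\sqrt{N}) \text{ queries}
\]
% Example: for N = 10^6 entries, roughly 10^6 classical lookups versus roughly 10^3 quantum ones.
```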

It is important to understand that the quantum computer is not magic and cannot accelerate any computational problem. In particular, one should not expect quantum computers to replace classical computers. Their main scope will probably be related to simulating quantum systems that cannot be simulated with standard computers. This will involve simulating chemical reactions or superconductivity, etc. While quantum simulators are likely to be the first concrete application of quantum computing, we also know of quantum algorithms that can be applied to solve complex optimization problems, or to accelerate computations in machine learning. We can expect to see quantum processors used as co-processors, to accelerate specific computational tasks.

 

What can be the impact of the advent of large quantum computers?

RA: The construction of large quantum computers would also enable us to break most of the cryptography that is used today on the Internet. The advent of large quantum computers is unlikely to occur in the next 10 years. Yet, as encrypted data are stored for years, even tens of years, we need to start thinking now about new cryptographic techniques that will resist the quantum computer.

Read the blog post: Confidential communications and quantum physics

 

When will the quantum computer compete with classical supercomputers?

RA: Even more than in classical computing, quantum computing requires error-correcting codes to improve the quality of the information coded on qubits, and to be scaled up. We can currently build a quantum computer with just over a dozen qubits, and we are beginning to develop small quantum computers which work with error-correcting codes. We estimate that a quantum computer must have 50 qubits in order to outperform a supercomputer, and solve problems which are currently beyond reach. In terms of time, we are not far away. Probably five years for this important step, often referred to as “quantum supremacy”.

4D Imaging, Mohamed Daoudi

4D Imaging for Evaluating Facial Paralysis Treatment

Mohamed Daoudi is a researcher at IMT Lille Douai, and is currently working on an advanced system of 4-dimensional imaging to measure the after-effects of peripheral facial paralysis. This tool could prove especially useful to practitioners in measuring the severity of the damage and in their assessment of the efficacy of treatment.

 

“Paralysis began with my tongue, followed by my mouth, and eventually the whole side of my face”. There are many accounts of facial paralysis on forums. Whatever the origin may be, if the facial muscles are no longer responding, it is because the facial nerve stimulating them has been affected. Depending on the part of the nerve affected, the paralysis may be peripheral, in which case it affects one of the lateral parts of the face (hemifacial paralysis), or central, affecting the lower part of the face.

In the case of peripheral paralysis, there are so many internet users enquiring about the origin of this problem precisely because in 80% of cases the paralysis occurs without an apparent cause. However, there is total recovery in 85 to 90% of cases. The other common causes of facial paralysis are facial trauma and vascular or infectious causes.

During follow-up treatment, doctors try to re-establish facial symmetry and balance, both at rest and during facial expressions. This requires treating the healthy side of the face as well as the affected side. The healthy side often presents hyperactivity, which makes it look as if the person is grimacing and creates paradoxical movements. Many medical, surgical, and physiotherapy procedures are used in the process. One of the treatments used is to inject botulinum toxin. This partially blocks certain muscles, restoring balance and facial movement.

Nonetheless, there is no analysis tool that can quantify the facial damage and give an objective observation of the effects of treatment before and after injection. This is where IMT Lille Douai researcher Mohamed Daoudi[1] comes in. His specialty is the 3D statistical analysis of shapes, in particular faces. He studies facial dynamics and has created an algorithm for analyzing facial expressions, making it possible to quantify the deformations of a moving face.

 

Smile, you’re being scanned

Two years ago, a partnership was created between Mohamed Daoudi, Pierre Guerreschi, Yasmine Bennis and Véronique Martinot from the reconstructive and aesthetic plastic surgery department at the University Hospital of Lille. Together they are creating a tool which makes a 3D scan of a moving face. An experimental protocol was soon set up.[2]

“The patients are asked to attend a 3D scan appointment before and after the botulinum toxin injection. Firstly, we ask them to make stereotypical facial expressions, such as a smile or raising their eyebrows. We then ask them to pronounce a sentence which triggers a maximum number of facial muscles and also tests their spontaneous movement”, explains Mohamed Daoudi.

The 4D results pre- and post-injection are then compared. The impact of the peripheral facial paralysis can be evaluated, but also quantified and compared. In this sense, the act of smiling is far from trivial. “When we smile, our muscles contract and the face undergoes many distortions. It is the facial expression which gives us the clearest image of the asymmetry caused by the paralysis”, the researcher specifies.

The ultimate goal is to re-establish a patient’s facial symmetry when they smile. Of course, this is not a matter of perfect symmetry, as no face is truly symmetrical; we are talking about socially accepted symmetry. The zones stimulated in a facial expression must roughly follow the same muscular animation as those on the other side of the face.


Scans of a smiling face: a) pre-operation, b) post-operation, c) control face.

 

Time: an essential fourth dimension in analysis

This technology is particularly well-suited to studying facial paralysis, as it takes time into account, and therefore the face’s dynamics. Dynamic analysis provides additional information. “When we look at a photo, it is sometimes impossible to detect facial paralysis. The face moves in three dimensions, and the paralysis is revealed with movement”, explains Mohamed Daoudi.

The researcher uses a non-invasive technology to model these dynamics: a structured-light scanner. How does it work? A grid of light stripes is projected onto the face, producing a 3D reconstruction of the face represented by a cloud of around 20,000 points. A sequence of the face making facial expressions is then recorded at 15 frames per second. The frames are analyzed with an algorithm which calculates the deformation observed at each point, and the two sides of the face are superimposed for comparison.
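
To give a rough idea of the kind of computation involved, here is a minimal sketch in Python with NumPy (the article does not say which tools the team uses, so this is an assumption). It measures the displacement of each point between two registered frames and compares the two halves of the face after a simple left/right split along the x-axis; the function names and this split are illustrative, not the team’s actual pipeline.

import numpy as np

def per_point_deformation(frame_a, frame_b):
    # Euclidean displacement of each 3D point between two registered frames
    # (e.g. a cloud of ~20,000 points captured at 15 frames per second).
    return np.linalg.norm(frame_b - frame_a, axis=1)

def asymmetry_score(rest, expression):
    # Compare how much each half of the face moves between rest and expression.
    # Illustrative assumption: the cloud is centred so that x < 0 is one side
    # of the face and x > 0 the other; a real pipeline would rely on dense
    # left/right point correspondences rather than this crude split.
    deformation = per_point_deformation(rest, expression)
    left = deformation[rest[:, 0] < 0].mean()
    right = deformation[rest[:, 0] > 0].mean()
    # 0 means both sides move equally; larger values mean greater asymmetry.
    return abs(left - right)

# Toy usage with synthetic data standing in for two scanner frames.
rng = np.random.default_rng(0)
rest = rng.normal(size=(20000, 3))
smile = rest + rng.normal(scale=0.01, size=rest.shape)
print(asymmetry_score(rest, smile))

In practice, such per-point measures would be tracked across the whole recorded sequence to follow how asymmetry evolves during an expression, for example before and after an injection.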

 


Series of facial expressions made during the scan.

 

Making 4D technology more readily available

To date, this 4D imaging technique has been tested on a small number of patients aged between 16 and 70, all of whom tolerated it well, and doctors have been satisfied with the results. The next step is to have the technique statistically validated in order to deploy it on a larger scale. However, the equipment required is expensive, and acquiring the images and carrying out the resulting analyses demands substantial human resources.

For Mohamed Daoudi, the project’s future lies in simplifying the technology with low-cost 3D capture systems, but other avenues could also prove interesting. “Only one medical department in the Hauts-de-France region offers this approach, and many people come from afar to use it. In the future, we could imagine remote treatment, where all you would need is a computer and a tool like the Kinect. Another interesting market would be smartphones. Depth cameras which provide 3D images are beginning to appear on these devices, as well as on tablets. Although the image quality is not yet optimal, I am sure it will improve quickly. This type of device would be a good way of making the technology we developed more accessible.”

 

[1] Mohamed Daoudi is head of the 3D SAM team at the CRIStAL laboratory (UMR 9189). CRIStAL (Research Center in Computer Science, Signal and Automatic Control of Lille) is a joint laboratory of the National Center for Scientific Research (CNRS), University Lille 1 and Centrale Lille, in partnership with University Lille 3, Inria and Institut Mines-Télécom (IMT).

[2] This project was supported by the Fondation de l’avenir.

 

 

 

EUROP platform, Carnot TSN, Carnot Télécom & Société numérique, Télécom Saint-Étienne

EUROP: Testing an Operator Network in a Single Room

The next article in our series on the platforms of the Télécom & Société numérique Carnot institute features EUROP (Exchanges and Usages for Operator Networks), at Télécom Saint-Étienne. This platform offers physical testing of network configurations to meet service providers’ needs. We discussed the platform with its director, Jacques Fayolle, and with assignment manager Maxime Joncoux.

 

What is EUROP?

Jacques Fayolle: The EUROP platform was created through a partnership between the Conseil Départemental de la Loire and LOTIM, a local telecommunications company and subsidiary of the Axione group. EUROP brings together researchers and engineers from Télécom Saint-Étienne specializing in networks and telecommunications as well as in computer science. We are seeing an increasing convergence between infrastructure and the services that use it, which is why these two skillsets are complementary.

The goal of the platform is to reproduce an operator network in a single room. To do so, we reconstructed a full network, from the production of services through to their consumption by a client company or an individual. This enables us to represent every step in a network’s distribution chain, right up to the home.

 

What technology is available on the platform?

JF: We are currently using wired technologies, which make up the operator part of a fiber network. We are particularly interested in comparing how a service is used depending on the protocols employed as the signal travels from the server to the end customer. For instance, we can study what happens in a housing estate when an ADSL connection is replaced by FTTH (Fiber to the Home) optical fiber.

The platform’s technology evolves, but the platform as a whole never changes: all we do is add new possibilities, because what we want to do is compare technologies with each other. A telecommunications system has a lifecycle of 5 to 10 years. At the beginning we mostly used copper technology, then we added point-to-point fiber, then point-to-multipoint. This means that we now have several dozen different technologies on the platform, which roughly corresponds to all the technology currently used by telecommunications operators.

Maxime Joncoux: And they are all operational. The goal is to test the technical configurations in order to understand how a particular type of technology works, according to the physical layout of the network we put in place.

 

How can a network be represented in one room?

MJ: An operator network covers a large area, but in fact its equipment fits into a small space. If we take the example of Saint-Étienne, the network fits into one large building, yet it carries all the city’s communications, representing around 100,000 copper cables. On the platform, everything is scaled down: instead of having 30 connections, we only have one or two, and the 80 kilometers of fiber in the network are simply wound onto a coil.

JF: We also have distance simulators, devices that we can configure according to the distance we want to represent. Thanks to this technology, we can reproduce a real high-speed broadband or ADSL network. This enables us to look at how a service will be consumed depending on whether we have access to a high-speed broadband network, for example in the center of Paris, or are in an isolated area in the countryside, where the speed might be slower. EUROP allows us to physically test these networks, rather than relying on IT models.

It is not a simulation, but a real laboratory reproduction. We can set up scenarios to analyze and compare a situation with other configurations. We can therefore directly assess the potential impact of a change in technology across the chain of a service proposed by an operator.

 

Who is the platform for?

JF: We are directly targeting companies that want to innovate with the platform, either by validating service configurations or by assessing the evolution of a particular piece of equipment in order to achieve better-quality service or faster speed. The platform is also used directly by the school as a learning and research tool. Finally, it allows us to raise awareness among local officials in rural areas about how increasing bandwidth can be a way of improving their local economy.

MJ: For local officials, we aim to provide a practical guide on standardized fiber deployment. The goal is not for Paris and Lyon to have fiber within five years while the rest of France still uses ADSL.

 


EUROP platform. Credits: Télécom Saint-Étienne

 

Could you provide some examples of partnerships?

JF: We carried out a study for Adista, a local telecommunications operator. They presented the network load they would need to handle for a national-scale event, and our role was to determine the configuration required to meet their needs.

We also have a partnership with IFOTEC, an SME near Grenoble that develops innovative network equipment. We worked together to provide high-speed broadband access in difficult geographical areas, that is, where the distance to the network node is greater than it is in cities. The SME has created DSL offset techniques (part of the connection uses copper, but there is fiber at the end) which provide at least 20 Mbit/s at 80 kilometers from the network node. These are the types of industrial companies we aim to innovate with: those looking for innovative protocols or hardware.

 

What does the Carnot accreditation bring you?

JF: The Carnot label gives us visibility. SMEs are always a little hesitant about collaborating with academics, and this label gives us credibility. In addition, the associated quality charter gives our contracts more substance.

 

What is the future of the platform?

JF: Our goal is to shift towards OpenStack[1] technology, which is used in large data centers, in order to move into Big Data and cloud computing. Many companies are wondering how to operate their services in the cloud. We are also looking into setting up configuration systems adapted to the Internet of Things, a technology which requires an efficient network. The EUROP platform enables us to validate the necessary configurations.

 

[1] OpenStack is an open-source software platform for cloud computing.


The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.