
Energy performance of buildings: modeling for better efficiency

Sanda Lefteriu, a researcher at IMT Lille-Douai, is working on developing predictive and control models designed for buildings with the aim of improving energy management. A look at the work presented on April 28 at the IMT “Energy in the digital revolution” symposium.

Good things come to those who wait. Seven years after the Grenelle 2 law, a decree published on May 10 requires buildings used by the private and public service sectors (see box below) to improve their energy performance. The text sets a requirement to reduce consumption by 25% by 2020 and by 40% by 2030.[1] To do so, reliable, easy-to-use models must be established in order to predict the energy behavior of buildings in near-real time. This is the goal of research being conducted by Balsam Ajib, a PhD student supervised by Sanda Lefteriu and Stéphane Lecoeuche of IMT Lille-Douai as well as by Antoine Caucheteux of Cerema.

 

A new approach for modeling thermal phenomena

State-of-the-art experimental approaches for evaluating the energy performance of buildings rely on models with what are referred to as “linear” structures. This means that the model’s input variables (weather, solar radiation, heating power, etc.) are linked to its output (the temperature of a room, for example) only through a linear equation. However, a number of phenomena occurring within a room, and therefore within the system, can temporarily disrupt its thermal equilibrium. For example, a large number of individuals inside a building will lead to a rise in temperature. The same is true when the sun shines on a building whose shutters are open.

Based on this observation, the researchers propose using what is called a “commutation” (switched) model, which takes into account discrete events occurring at a given moment that influence the continuous behavior of the system being studied (the change in temperature). “For a building, events like opening or closing windows or doors are commutations (0 or 1) which disrupt the dynamics of the system. But we can separate these actions from the linear behavior in order to identify their impacts more clearly,” explains the researcher. To do so, she has developed several models, each of which corresponds to a situation. “We estimate each configuration: for example, a situation in which the door and windows are closed and the heating is set to 20°C corresponds to one model. If we change the temperature to 22°C, we identify another, and so on,” adds Sanda Lefteriu.
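To make the idea of a switched model more concrete, here is a minimal Python sketch, not the team’s actual model: a single room follows one of two linear thermal equations, and a binary “commutation” signal, here the opening of a window, selects which one applies at each time step. All parameter values are illustrative assumptions.

```python
# Minimal sketch of the "commutation" idea (not the team's actual model):
# a first-order thermal model whose parameters switch when a discrete
# event (here, a window opening) changes the room's dynamics.
import numpy as np

def simulate(T0, T_out, heat_power, window_open, dt=300.0):
    """Simulate indoor temperature with a switched linear model.

    window_open is an array of 0/1 flags (the "commutation" signal);
    each value selects one of two linear sub-models.
    """
    # Hypothetical parameters per mode: (thermal resistance K/W, capacitance J/K)
    params = {0: (0.05, 2.0e6),   # window closed: well insulated
              1: (0.01, 2.0e6)}   # window open: strong heat loss
    T = np.empty(len(window_open) + 1)
    T[0] = T0
    for k, w in enumerate(window_open):
        R, C = params[int(w)]
        # Linear dynamics within each mode: dT/dt = (T_out - T)/(R*C) + P/C
        T[k + 1] = T[k] + dt * ((T_out - T[k]) / (R * C) + heat_power / C)
    return T

# Example: window opened halfway through a 6-hour period (5-minute steps)
flags = np.array([0] * 36 + [1] * 36)
temperatures = simulate(T0=20.0, T_out=5.0, heat_power=1000.0, window_open=flags)
print(temperatures[::12])  # one value per hour
```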

 

Objective: use these models for all types of buildings

To create these scenarios, the researchers use real data collected inside buildings through measurement campaigns. Sensors were placed on the IMT Lille-Douai campus and in passive houses which are part of the INCAS platform in Chambéry. These uninhabited residences offer a completely controlled experimental site, since all the parameters related to the building (structure, materials) are known. Such rare facilities make it possible to set up physical models, meaning models built according to the specific characteristics of the buildings being studied. “This information is rarely available, which is why we are now working on mathematical modeling, which is easier to implement,” explains Sanda Lefteriu.

“We’re only at the feasibility phase, but these models could be used to estimate heating power, and therefore the energy performance of buildings, in real time,” adds the researcher. Applications will be put in place in social housing as part of the ShINE European project, in which IMT Lille-Douai is taking part. The goal of this project is to reduce carbon emissions from housing.

These tools will be used for existing buildings. Once the models are operational, control algorithms running on automated controllers will be installed in the buildings. Finally, another series of tools will be used to link physical conditions with observations in order to guide new research. “We still have to identify which physical parameters change when we observe a new dynamic,” says Sanda Lefteriu. These models remain to be built, just like the buildings they will directly serve.

 

[1] Buildings currently represent 40-45% of energy spending in France across all sectors. Find out more about key energy figures in France.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 


Energy performance of buildings:

The energy performance of a building includes its energy consumption and its impact in terms of greenhouse gas emissions. Consideration is given to the hot water supply system, heating, lighting and ventilation. Other building characteristics to be assessed include insulation, location and orientation. An energy performance certificate is a standardized way to measure how much energy is actually consumed, or estimated to be consumed, under standard use of the building.

 


Towards a new generation of lithium batteries?

The development of connected devices requires the miniaturization and integration of electronic components. Thierry Djenizian, a researcher at Mines Saint-Étienne, is working on new micrometric architectures for lithium batteries, which appear to be a very promising solution for powering smart devices. Following his talk at the IMT symposium devoted to energy in the context of the digital transition, he gives us an overview of his work and the challenges involved in his research.

Why is it necessary to develop a new generation of batteries?

Thierry Djenizian: First of all, it’s a matter of miniaturization. Since the 1970s, Moore’s law, which predicts an increase in the performance of microelectronic devices as they are miniaturized, has been upheld. But in the meantime, the energy aspect has not really kept up. We are now facing a problem: we can manufacture very sophisticated sub-micrometric components, but the energy sources we have to power them are not integrated in the circuits because they take up too much space. We are therefore trying to design micro-batteries which can be integrated within the circuits like other technological building blocks. They are highly anticipated for the development of connected devices, including a large number of wearable applications (smart textiles, for example), medical devices, etc.

 

What difficulties have you encountered in miniaturizing these batteries?

TD: A battery is composed of three elements: two electrodes and an electrolyte separating them. In the case of micro-batteries, it is essentially the contact surface between the electrodes and the electrolyte that determines storage performance: the greater the surface, the better the performance. But as batteries shrink, and with them the electrodes and the electrolyte, there comes a point when the contact surface is too small and battery performance drops.

 

How do you go beyond this critical size without compromising performance?

TD: One solution is to transition from 2D geometry in which the two electrodes are thin layers separated by a third thin electrolyte layer, to a 3D structure. By using an architecture consisting of columns or tubes which are smaller than a micrometer, covered by the three components of the battery, we can significantly increase contact surfaces (see illustration below). We are currently able to produce this type of structure on the micrometric scale and we are working on reaching the nanometric scale by using titanium nanotubes.
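A rough back-of-the-envelope calculation illustrates why this geometry helps. The dimensions below (tube diameter, height and spacing) are assumptions chosen only for illustration, not values from the interview; even so, they show how a forest of sub-micrometer tubes multiplies the contact surface available on the same footprint.

```python
# Back-of-the-envelope estimate of the surface gain offered by a 3D
# architecture: a flat 1 mm² footprint versus the same footprint covered
# with vertical nanotubes. Dimensions are illustrative assumptions only.
import math

footprint = 1e-3 ** 2            # 1 mm² expressed in m²
tube_diameter = 100e-9           # assumed tube diameter: 100 nm
tube_height = 1e-6               # assumed tube height: 1 µm
pitch = 150e-9                   # assumed center-to-center spacing

tubes_per_m2 = 1.0 / pitch ** 2
n_tubes = tubes_per_m2 * footprint

# Each tube adds its outer sidewall area to the flat surface
sidewall = math.pi * tube_diameter * tube_height
total_area = footprint + n_tubes * sidewall

print(f"Number of tubes on 1 mm²: {n_tubes:.2e}")
print(f"Surface gain factor: {total_area / footprint:.1f}x")
```

With these assumed dimensions the sketch yields a contact surface roughly fifteen times larger than the flat footprint; the real gain depends on the actual tube geometry.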

 

On the left, a battery with a 2D structure. On the right, a battery with a 3D structure: the contact surface between the electrodes and the electrolyte has been significantly increased.

 

How do these new battery prototypes based on titanium nanotubes work?

TD: Let’s take the example of a discharged battery. One of the electrodes is composed of lithium, nickel, manganese and oxygen. When you charge this battery, by plugging it in for example, the additional electrons set off an electrochemical reaction which frees the lithium from this electrode in the form of ions. The lithium ions migrate through the electrolyte and insert themselves into the nanotubes which make up the other electrode. When all the nanotube sites which can hold lithium have been filled, the battery is charged. During the discharging phase, a spontaneous electrochemical reaction occurs, freeing the lithium ions from the nanotubes toward the nickel-manganese-oxygen electrode and thereby generating the desired current.

 

What is the lifetime of these batteries?

TD: When a battery is working, great structural modifications take place; the materials swell and shrink in size due to the reversible insertion of lithium ions. And I’m not talking about small variations in size: the size of an electrode can become eight times larger in the case of 2D batteries which use silicon! Nanotubes provide a way to reduce this phenomenon and therefore help prolong the lifetime of these batteries. In addition, we are also carrying out research on electrolytes based on self-repairing polymers. One of the consequences of this swelling is that the contact interfaces between the electrodes and the electrolyte are altered. With an electrolyte that repairs itself, the damage will be limited.

 

Do you have other ideas for improving these 3D-architecture batteries?  

TD: One of the key issues for microelectronic components is flexibility. Batteries are no exception to this rule, and we would like to make them stretchable in order to meet certain requirements. However, the new lithium batteries we are discussing here are not yet stretchable: they fracture when subjected to mechanical stress. We are working on making the structure stretchable by modifying the geometry of the electrodes. The idea is to have a spring-like behavior: coupled with a self-repairing electrolyte, after deformation, batteries return to their initial position without suffering irreversible damage. We have a patent pending for this type of innovation. This could represent a real solution for making autonomous electronic circuits both flexible and stretchable, in order to satisfy a number of applications, such as smart electronic textiles.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 


Cloud computing for longer smartphone battery life

How can we make our smartphone batteries last longer? For Maurice Gagnaire, a researcher at Télécom ParisTech, the solution could come through mobile cloud computing. If computations currently performed by our devices could be offloaded to local servers, their batteries would have to work less. This could extend the battery life for one charge by several hours. This solution was presented at the IMT symposium on April 28, which examined the role of the digital transition in the energy sector.

 

“Woe is me! My battery is dead!” So goes the thinking of dismayed users everywhere when they see the battery icon on their mobile phones turn red. Admittedly, smartphone battery life is a rather sensitive subject. Rare are those who use their smartphones for the normal range of purposes — as a telephone, for web browsing, social media, streaming videos, etc. — and whose batteries last longer than 24 hours. Extending battery life is a real challenge, especially in light of the emergence of 5G, which will open the door to new energy-intensive uses such as ultra HD streaming and virtual reality, not to mention the use of devices as data aggregators for the Internet of Things (IoT). The Next Generation Mobile Networks Alliance (NGMN) has issued a recommendation to extend mobile phone battery life to three days between charges.

There are two major approaches possible in order to achieve this objective: develop a new generation of batteries, or find a way for smartphones to consume less battery power. In the laboratories of Télécom ParisTech, Maurice Gagnaire, a researcher in the field of cloud computing and energy-efficient data networks, is exploring the second option. “Mobile devices consume a great amount of energy,” he says. “In addition to having to carry out all the computations for the applications being used, they are also constantly working on connectivity in the background, in order to determine which base station to connect to and the optimal speed for communication.” The solution being explored by Maurice Gagnaire and his team is based on reducing smartphones’ energy consumption for computations related to applications. The scientists started out by establishing a hierarchy of applications according to their energy demands as well as to their requirements in terms of response time. A tool used to convert an audio sequence into a written text, for example, does not present the same constraints as a virus detection tool or an online game.

Once they had carried out this first step, the researchers were ready to tackle the real issue — saving energy. To do so, they developed a mobile cloud computing solution in which the most frequently-used and energy-consuming software tools are supposed to be available in nearby servers, called cloudlets. Then, when a telephone has to carry out a computation for one of its applications, it offloads it to the cloudlet in order to conserve battery power. Two major tests determine whether to offload the computation. The first one is based on an energy assessment: how much battery life will be gained? This depends on the effective capacity of the radio interface at a particular location and time. The second test involves quality expectations for user experience: will use of the application be significantly impacted or not?
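The following Python sketch illustrates the kind of two-test decision described above. It is not the MAO algorithm itself: the energy model, the parameter names and the thresholds are illustrative assumptions.

```python
# Simplified sketch of the two-test offloading decision described above.
# This is NOT the actual MAO algorithm: the energy model and thresholds
# are illustrative assumptions.

def should_offload(cpu_energy_j, data_bits, uplink_bps, radio_power_w,
                   cloudlet_latency_s, max_extra_latency_s):
    """Return True if offloading the computation to a cloudlet is worthwhile."""
    # Test 1: energy balance. The energy spent transmitting the input data
    # must be lower than the energy the local computation would cost.
    tx_time_s = data_bits / uplink_bps
    tx_energy_j = tx_time_s * radio_power_w
    saves_energy = tx_energy_j < cpu_energy_j

    # Test 2: quality of experience. The extra delay (transfer + cloudlet
    # processing) must stay below what the application tolerates.
    extra_latency_s = tx_time_s + cloudlet_latency_s
    acceptable_qoe = extra_latency_s <= max_extra_latency_s

    return saves_energy and acceptable_qoe

# Example: a speech-to-text request (all values purely illustrative)
print(should_offload(cpu_energy_j=8.0, data_bits=2e6, uplink_bps=20e6,
                     radio_power_w=1.5, cloudlet_latency_s=0.3,
                     max_extra_latency_s=1.0))
```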

Together, these two tests form the basis for the MAO (Mobile Applications Offloading) algorithm developed by Télécom ParisTech. The difficulty in its development arose from its interdependence with such diverse aspects as the hardware architecture of mobile phone circuitry, the protocols used for the radio interface, and factoring in user mobility. In the end, “the principle is similar to what you find in airports where you connect to a local server located at the Wi-Fi spot,” explains Maurice Gagnaire. But in the case of energy savings, the service is intended to be “universal” and not linked to a precise geographic area as is the case for airports. In addition to offering browsing or tourism services, cloudlets would host a duplicate of the most widely-used applications. When a telephone has low battery power, or when it is responding to great user demand for several applications, the MAO algorithm makes it possible to autonomously offload computations from the mobile device to cloudlets.

 

Extending battery life by several hours?

Through a collaboration with researchers from Arizona State University in Tempe (USA), the theoretical scenarios studied in Paris were implemented in real-life situations. The preliminary results show that for the most demanding applications, such as voice recognition, a 90% reduction in energy consumption can be obtained when the cloudlet is located at the base of the antenna in the center of the cell. The experiments also underscored how strongly the self-learning function of the database linked to the voice recognition tool affects the performance of the MAO algorithm.

Extending the use of the MAO algorithm to a broad range of applications could expand the scale of the solution. In the future, Maurice Gagnaire plans to explore the offloading of certain tasks carried out by the graphics processors (GPUs) in charge of managing smartphones’ high-definition touchscreens. Mobile game developers should be particularly interested in this approach.

More generally, Maurice Gagnaire’s team now hopes to collaborate with a network operator or equipment manufacturer. A partnership would provide an opportunity to test the MAO algorithm and cloudlets on a real use case and therefore determine the large-scale benefits for users. It would also offer operators new perspectives for next-generation base stations, which will have to be created to accompany the development of 5G planned for 2020.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 

 


Energy and telecommunications: brought together by algorithms

It is now widely accepted that algorithms can have a transformative effect on a particular sector. In the field of telecommunications, they may indeed greatly impact how energy is produced and consumed. Between reducing the energy consumption of network facilities and making better use of renewable energy, operators have embarked on a number of large-scale projects. And each time, algorithms have been central to these changes. The following is an overview of the transformations currently taking place and findings from research by Loutfi Nuaymi, a researcher in telecommunications at IMT Atlantique. On April 28 he gave a talk about this subject at the IMT symposium dedicated to energy and the digital revolution.

 

20,000: the average number of relay antennae owned by a mobile operator in France. Also called “base stations,” they represent 70% of the energy bill for telecommunications operators. Since each station transmits with a power of approximately 1 kW, reducing their demand for electricity is a crucial issue for operators seeking to improve the energy efficiency of their networks. To achieve this objective, the sector is currently focusing more on technological advances in hardware than on the software component. Thanks to the latest advances, a recent base station consumes significantly less energy for a data throughput that is nearly a hundred times higher. But new algorithms that promise energy savings are being developed, including some which simply involve… switching off base stations at certain scheduled times!

This solution may seem radical since switching off a base station in a cellular network means potentially preventing users within a cell from accessing the service. Loutfi Nuaymi, a researcher in telecommunications at IMT Atlantique, is studying this topic, in collaboration with Orange. He explains that, “base stations would only be switched off during low-load times, and in urban areas where there is greater overlap between cells.” In large cities, switching off a base station from 3 to 5am would have almost no consequence, since users are likely to be located in areas covered by at least one other base station, if not more.

Here, the role of algorithms is twofold. First of all, they would manage the switching off of antennas when user demand is lowest (at night) while maintaining sufficient network coverage. Secondly, they would gradually switch the base stations back on when users reconnect (in the morning) and up to peak hours during which all cells must be activated. This technique could prove to be particularly effective in saving energy since base stations currently remain switched on at all times, even during off-peak hours.
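As an illustration of this twofold role, here is a hedged Python sketch of a night-time switch-off policy: it greedily puts the least-loaded stations on standby, but only if every area they cover remains served by at least one other active station. It is not an operator’s actual algorithm; the coverage and load data are toy values.

```python
# Illustrative sketch (not an operator algorithm): greedily put base
# stations on standby during off-peak hours, provided every coverage
# area is still served by at least one active overlapping station.

def stations_to_switch_off(coverage, load, load_threshold=0.1):
    """coverage maps station -> set of area ids it covers;
    load maps station -> current load (0..1)."""
    active = set(coverage)
    standby = []
    # Consider the least-loaded stations first
    for station in sorted(coverage, key=lambda s: load[s]):
        if load[station] > load_threshold:
            break  # remaining stations are too busy to switch off
        remaining = active - {station}
        still_covered = all(
            any(area in coverage[other] for other in remaining)
            for area in coverage[station]
        )
        if still_covered:
            active = remaining
            standby.append(station)
    return standby

# Toy example: three overlapping urban cells at 4 a.m.
coverage = {"BS1": {"A", "B"}, "BS2": {"B", "C"}, "BS3": {"A", "C"}}
load = {"BS1": 0.02, "BS2": 0.05, "BS3": 0.03}
print(stations_to_switch_off(coverage, load))  # e.g. ['BS1']
```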

Loutfi Nuaymi points out that “the use of such algorithms for putting base stations on standby mode is taking time to reach operators.” Their reluctance is understandable, since service interruptions are, by definition, the greatest fear of telecom companies. Today, certain operators could put one base station in ten on standby mode in dense urban areas in the middle of the night. But the IMT Atlantique researcher is confident in the quality of his work and asserts that it is possible “to go even further, while still ensuring high quality service.”

 


Gradually switching base stations on or off in the morning and at night according to user demand is an effective energy-saving solution for operators.

 

While energy management algorithms already allow for significant energy savings in 4G networks, their contribution will be even greater over the next five years, with 5G technology leading to the creation of even more cells to manage. The new generation will be based on a large number of femtocells covering areas only around ten meters across — in addition to traditional macrocells with a range of around one kilometer.

Femtocells consume significantly less energy, but given their large number, it may be advantageous to switch them off when not in use, especially since they are not used as the primary means of data transmission, but rather to support the macrocells. Switching them off would not in any way prevent users from accessing the service. Loutfi Nuaymi describes one way this could work: “It could be based on a system in which a user’s device is detected by the operator when it enters a femtocell. The operator’s energy management algorithm could then calculate whether it is advantageous to switch on the femtocell, by factoring in, for example, the cost of start-up or the availability of the macrocell. If the macrocell is not overloaded, there is no reason to switch on the femtocell.”
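A minimal sketch of the kind of cost/benefit test described here might look as follows; the thresholds and energy figures are assumptions, not values from the research.

```python
# Sketch of a femtocell wake-up decision (illustrative assumptions only):
# wake the femtocell for an arriving user only if the macrocell is close
# to overload or the expected gain outweighs the start-up cost.

def wake_femtocell(macro_load, macro_capacity, femto_startup_cost_j,
                   expected_offload_energy_gain_j, overload_ratio=0.9):
    macro_overloaded = macro_load / macro_capacity >= overload_ratio
    worth_the_energy = expected_offload_energy_gain_j > femto_startup_cost_j
    return macro_overloaded or worth_the_energy

# The macrocell still has headroom and the expected gain is small: stay off.
print(wake_femtocell(macro_load=40, macro_capacity=100,
                     femto_startup_cost_j=50.0,
                     expected_offload_energy_gain_j=10.0))  # False
```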

 

What is the right energy mix to power mobile networks?

The value of these algorithms lies in their capacity to calculate cost/benefit ratios according to a model which takes account of a maximum number of parameters. They can therefore provide autonomy, flexibility, and quick performance in base station management. Researchers at IMT Atlantique are building on this decision-making support principle and going a step further than simply determining if base stations should be switched on or put on standby mode. In addition to limiting energy consumption, they are developing other algorithms for optimizing the energy mix used to power the network.

They begin with two observations: renewable sources of energy are less expensive, but if operators equip themselves with solar panels or wind turbines, they must also store the energy produced to make up for variations in sunshine and the intermittent nature of wind. So how can an operator decide between using stored energy, energy supplied by its own solar or wind facilities, or energy from the traditional grid, which may rely on a varying degree of smart technology? Loutfi Nuaymi and his team are also working on use cases related to this question and have joined forces with industrial partners to test and develop algorithms which could provide some answers.

“One of the very concrete questions operators ask is what size battery is best to use for storage,” says the researcher. “Huge batteries cost as much as what operators save by replacing fossil fuels with renewable energy sources. But if the batteries are too small, they will have storage problems. We’re developing algorithmic tools to help operators make these choices, and determine the right size according to their needs, the type of battery used, and their production capacity.”
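To illustrate the battery-sizing question, here is a toy Python sweep: for several assumed battery capacities, it simulates one day of solar production against a base station’s constant demand and reports how much grid energy is still required. All numbers are illustrative; the real tools mentioned above account for many more factors (battery cost, ageing, tariffs, production forecasts).

```python
# Toy battery-sizing sweep (illustrative numbers, not an operator tool):
# for several battery capacities, simulate a day of solar production
# against a base station's constant demand and count the grid energy
# still needed. Larger batteries reduce grid use, with diminishing returns.
import numpy as np

hours = np.arange(24)
solar_kw = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 2.0  # peaks at 2 kW
demand_kw = np.full(24, 1.0)                                         # ~1 kW base station

def grid_energy_needed(battery_kwh):
    stored, from_grid = 0.0, 0.0
    for prod, load in zip(solar_kw, demand_kw):
        balance = prod - load              # surplus (+) or deficit (-) over one hour
        if balance >= 0:
            stored = min(battery_kwh, stored + balance)   # charge, capped by capacity
        else:
            draw = min(stored, -balance)                  # discharge what we can
            stored -= draw
            from_grid += -balance - draw                  # the rest comes from the grid
    return from_grid

for size in (0, 2, 5, 10):
    print(f"battery {size:>2} kWh -> grid energy {grid_energy_needed(size):.1f} kWh/day")
```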

Another question: is it more profitable to equip each base station with its own solar panel or wind turbine, or rather to create an energy farm to supply power to several antennas? The question is still being explored, but preliminary findings suggest that neither arrangement is clearly preferable when it comes to solar panels. Wind turbines, however, are imposing structures that neighbors sometimes oppose, making it preferable to group them together.

 

Renewable energies at the cost of diminishing quality of service?  

Once this type of constraint has been ruled out, operators must calculate the maximum proportion of renewable energies to include in the energy mix with the least possible consequences on quality of mobile service. Sunshine and wind speed are sporadic by nature. For an operator, a sudden drop in production at a wind or solar power farm could have direct consequences on network availability — no energy means no working base stations.

Loutfi Nuaymi admits that these limitations reveal the complexity of developing algorithms, “We cannot simply consider the cost of the operators’ energy bills. We must also take account of the minimum proportion of renewable energies they are willing to use so that their practices correspond to consumer expectations, the average speed of distribution to satisfy users, etc.”

Results from research in this field show that in most cases, the proportion of renewable energies used in the energy mix can be raised to 40%, with only an 8% drop in quality of service as a result. In off-peak hours, this represents only a slight deterioration and does not have a significant effect on network users’ ability to access the service.

And even if a drastic reduction in quality of service should occur, Loutfi Nuaymi has some solutions. “We have worked on a model for a mobile subscription that delays calls if the network is not available. The idea is based on the principle airlines use when overbooking flights. Voluntary subscribers — who, of course, do not have to choose this subscription — accept the risk of the network being temporarily unavailable and, in return, receive financial compensation if it affects their use.”

Although this new subscription format is only one possible solution for operators, and is still a long way from becoming reality, it shows how the field of telecommunications may be transformed in response to energy issues. Questions have arisen at times about the outlook for mobile operators. Given the energy consumed by their facilities and the rise of smart grids which make it possible to manage both self-production and the resale of electricity, these digital players could, over time, come to play a significant role in the energy sector.

“It is an open-ended question and there is a great deal of debate on the topic,” says Loutfi Nuaymi. “For some, energy is an entirely different line of business, while others see no reason why they should not sell the energy they collect.” The controversy could be settled by new scientific studies in which the researcher is participating. “We are already performing technical-economic calculations in order to study operators’ prospects.” The highly anticipated results could significantly transform the energy and telecommunications market.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

 

What is net neutrality?

Net neutrality is a legislative shield for preventing digital discrimination. Regularly defended in the media, it ensures equal access to the internet for both citizens and companies. The topic features prominently in a report on the state of the internet published on May 30 by Arcep (the French telecommunications and postal regulatory body). Marc Bourreau, a researcher at Télécom ParisTech, outlines the basics of net neutrality. He explains what it encompasses, the major threats it is facing, and the underlying economic issues.

 

How is net neutrality defined?

Marc Bourreau: Net neutrality refers to the rules requiring internet service providers (ISPs) to treat all data packets in the same manner. It aims to prevent discrimination by the network based on the content, the service, the application or the identity of the source of the traffic. By content, I mean all the packets carried over IP. This includes the web, along with social media, news sites, streaming or gaming platforms, as well as other services such as e-mail.

If it is not respected, in what way may discrimination take place?

MB: An iconic example of discrimination occurred in the United States. An operator was offering a VoIP service similar to Skype. It had decided to block all competing services, including Skype in particular. In concrete terms, this means that customers could not make calls with an application other than the one offered by their operator. To give an equivalent, this would be like a French operator with an on-demand video service, such as Orange, deciding to block Netflix for its customers.

Net neutrality is often discussed in the news. Why is that?

MB: It is the subject of a long legal and political battle in the United States. In 2005, the American Federal Communications Commission (FCC), the regulator for telecoms, the internet and the media, determined that internet access no longer fell within the field of telecommunications from a legal standpoint. The FCC determined that it belonged rather to information services, which led to major consequences. In the name of freedom of information, the FCC had little power to regulate, therefore giving the operators greater flexibility. In 2015, the Obama administration decided to place internet access in the telecommunications category once again. The regulatory power of the FCC was therefore restored and it established three major rules in February 2015.

What are these rules?

MB: An ISP cannot block traffic except for objective traffic management reasons — such as ensuring network security. An ISP cannot degrade a competitor’s service. And an ISP cannot offer paid fast lanes providing preferential traffic treatment. In other words, they cannot create an internet highway. The new Trump administration has put net neutrality back in the public eye with the president’s announcement that he intends to roll back these rules. The new director of the FCC, Ajit Pai, appointed by Trump in January, has announced that he intends to reclassify internet service as belonging to the information services category.

Is net neutrality debated to the same extent in Europe?

MB: The European regulation for an open internet, adopted in November 2015, has been in force since April 30, 2016. This regulation establishes the same three rules, with a minor difference in the third rule focusing on internet highways. It uses stricter wording, thereby prohibiting any sort of discrimination. In other words, even if an operator wanted to set up a free internet highway, it could not do so.

Could the threats to net neutrality in the United States have repercussions in Europe?

MB: Not from a legislative point of view. But there could be some indirect consequences. Let’s take a hypothetical case: if American operators introduced internet highways, or charged customers for services to benefit from fast lanes, the major platforms could subscribe to these services. If Netflix had to pay for better access to networks within the United States territory, it could also raise its prices for its subscriptions to offset this cost. And that could indirectly affect European consumers.

The issue of net neutrality seems to be more widely debated in the United States than in Europe. How can this be explained?

MB: Here in Europe, net neutrality is less of a problem than it is in the United States and it is often said that it is because there is greater competition in the internet access market in Europe. I have worked on this topic with a colleague, Romain Lestage. We analyzed the impact of competition on telecoms operators’ temptation to charge content producers. The findings revealed that as market competition increases, operators obviously earn less from consumers and are therefore more likely to make attempts to charge content producers. The greater the competition, the stronger the temptation to deviate from net neutrality.

Do certain digital technologies pose a threat to net neutrality in Europe? 

MB: 5G will raise questions about the relevance of the European legislation, especially in terms of net neutrality. It was designed as a technology able to provide services with very different levels of quality. Some will be very sensitive to server response time, others to speed. Between communications for connected devices and ultra-HD streaming, the needs are very different. This calls for creating different qualities of network service, which is, in theory, contradictory to net neutrality. Telecoms operators in Europe are using this as an argument for reviewing the regulation, arguing in addition that this would lead to increased investment in the sector.

Does net neutrality block investments?

MB: We studied this question with colleagues from the Innovation and Regulation of Digital Services Chair. Our research showed that without net neutrality regulation, fast lanes — internet highways — would lead to an increase in revenue for operators, which they would reinvest in network traffic management to improve service quality. Content providers who subscribe to fast lanes would benefit by offering users higher-quality content. However, these studies also showed that deregulation would lead to the degradation of the free traffic lanes, in order to push content providers to subscribe to the pay-to-use lanes. Net neutrality legislation is therefore crucial to limiting discrimination against content providers, and consequently, against consumers as well.

 


What is Digital Labor?

Are we all working for digital platforms? This is the question posed by a new field of research: digital labor. Web companies use personal data to create considerable value —but what do we get in return? Antonio Casilli, a researcher at Télécom ParisTech and a specialist in digital labor, will give a conference on this topic on June 10 at Futur en Seine in Paris. In the following interview he outlines the reasons for the unbalanced relationship between platforms and users and explains its consequences.

 

What’s hiding behind the term “digital labor?”

Antonio Casilli: First of all, digital labor is a relatively new field of research. It appeared in the late 2000s and explores new ways of creating value on digital platforms. It focuses on the emergence of a new way of working, which is “taskified” and “datafied.” We must define these words in order to understand them better. “Datafied,” because it involves producing data so that digital platforms can derive value. “Taskified,” because in order to produce data effectively, human activity must be standardized and reduced to its smallest unit. This leads to the fragmentation of complex knowledge as well as to the risks of deskilling and the breakdown of traditional jobs.

 

And who exactly performs the work in question?

AC: Micro-workers who are recruited via digital platforms. They are unskilled laborers, the click workers behind the API. But, since this is becoming a widespread practice, we could say anyone who works performs digital labor. And even anyone who is a consumer. Seen from this perspective, anyone who has a Facebook, Twitter, Amazon or YouTube account is a click worker. You and I produce content —videos, photos, comments —and the platforms are interested in the metadata hiding behind this content. Facebook isn’t interested in the content of the photos you take, for example. Instead, it is interested in where and when the photo was taken, what brand of camera was used. And you produce this data in a taskified manner since all it requires is clicking on a digital interface. This is a form of unpaid digital labor since you do not receive any financial compensation for your work. But it is work nonetheless: it is a source of value which is tracked, measured, evaluated and contractually defined by the terms of use of the platforms.

 

Is there digital labor which is not done for free?

AC: Yes, that is the other category included in digital labor: micro-paid work. People who are paid to click on interfaces all day long and perform very simple tasks. These crowdworkers are mainly located in India, the Philippines, or in developing countries where average wages are low. They receive a handful of cents for each click.

 

How do platforms benefit from this labor?

AC: It helps them make their algorithms perform better. Amazon, for example, has a micro-work service called Amazon Mechanical Turk, which is almost certainly the best-known micro-work platform in the world. Their algorithms for recommending purchases, for example, need to practice on large, high-quality databases in order to be effective. Crowdworkers are paid to sort, annotate and label images of products proposed by Amazon. They also extract textual information for customers, translate comments to improve additional purchase recommendations in other languages, write product descriptions etc.

I’ve cited Amazon but it is not the only example.  All the digital giants have micro-work services. Microsoft uses UHRS, Google has its EWOQ service etc. IBM’s artificial intelligence, Watson, which has been presented as one of its greatest successes in this field, relies on MightyAI. This company pays micro-workers to train Watson, and its motto is “Train data as a service.” Micro-workers help train visual recognition algorithms by indicating elements in images, such as the sky, clouds, mountains etc. This is a very widespread practice. We must not forget that behind all artificial intelligence, there are, first and foremost, human beings. And these human beings are, above all, workers whose rights and working conditions must be respected.

Workers are paid a few cents for tasks proposed on Amazon Mechanical Turk, which includes such repetitive tasks as “answer a questionnaire about a film script.”

 

This form of digital labor is a little different from the kind I carry out because it involves more technical tasks.   

AC:  No, quite the contrary. They perform simple tasks that do not require expert knowledge. Let’s be clear: work carried out by micro-workers and crowds of anonymous users via platforms is not the ‘noble’ work of IT experts, engineers, and hackers. Rather, this labor puts downward pressure on wages and working conditions for this portion of the workforce. The risk for digital engineers today is not being replaced by robots, but rather having their jobs outsourced to Kenya or Nigeria where they will be done by code micro-workers recruited by new companies like Andela, a start-up backed by Mark Zuckerberg. It must be understood that micro-work does not rely on a complex set of knowledge. Instead it can be described as: write a line, transcribe a word, enter a number, label an image. And above all, keep clicking away.

 

Can I detect the influence of these clicks as a user?

AC: Crowdworkers hired by genuine “click farms” can also be mobilized to watch videos, make comments or “like” something. This is often what happens during big advertising or political campaigns. Companies or parties have a budget and they delegate the digital campaign to a company, which in turn outsources it to a service provider. And the end result is two people in an office somewhere, stuck with the unattainable goal of getting one million users to engage with a tweet. Because this is impossible, they use their budget to pay crowdworkers to generate fake clicks. This is also how fake news spreads to such a great extent, backed by ill-intentioned firms who pay for a fake audience. Incidentally, this is the focus of the Profane research project I am leading at Télécom ParisTech with Benjamin Loveluck and other French and Belgian colleagues.

 

But don’t the platforms fight against these kinds of practices?

AC: Not only do they not fight against these practices, but they have incorporated them in their business models. Social media messages with a large number of likes or comments make other users more likely to interact and generate organic traffic, thereby consolidating the platform’s user base. On top of that, platforms also make use of these practices through subcontractor chains. When you carry out a sponsored campaign on Facebook or Twitter, you can define your target as clearly as you like, but you will always end up with clicks generated by micro-workers.

 

But if these crowdworkers are paid to like posts or make comments, doesn’t that raise questions about tasks carried out by traditional users?

AC: That is the crux of the issue. From the platform’s perspective, there is no difference between me and a click-worker paid by the micro-task. Both of our likes have the same financial significance. This is why we use the term digital labor to describe these two different scenarios. And it’s also the reason why Facebook is facing a class-action lawsuit filed with the Court of Justice of the European Union representing 25,000 users. They demand €500 per person for all the data they have produced. Google has also faced a claim for its Recaptcha, from users who sought to be re-classified as employees of the Mountain View firm. Recaptcha was a service which required users to confirm that they were not robots by identifying difficult-to-read words. The data collected was used to improve Google Books’ text recognition algorithms in order to digitize books. The claim was not successful, but it raised public awareness of the notion of digital labor. And most importantly, it was a wake-up call for Google, who quickly abandoned the Recaptcha system.

 

Could traditional users be paid for the data they provide?

AC: Since both micro-workers, who are paid a few cents for every click, and ordinary users perform the same sort of productive activity, this is a legitimate question to ask. On June 1, Microsoft decided to reward Bing users with vouchers in order to convince them to use its search engine instead of Google’s. It is possible for a platform to have a negative price, meaning that it pays users to use the platform. The question is how to determine at what point this sort of practice is akin to a wage, and whether the wage approach is both the best solution from a political viewpoint and the most socially viable. This is where we get into the classic questions posed by the sociology of labor. They also apply to Uber drivers, who make a living from the application and whose data is used to train driverless cars. Intermediary bodies and public authorities have an important role to play in this context. There are initiatives, such as one led by the IG Metall union in Germany, which strive to gain recognition for micro-work and establish collective negotiations to assert the rights of click workers, and more generally, all platform workers.

 

On a broader level, we could ask what a digital platform really is.

AC: In my opinion, it would be better if we acknowledged the contractual nature of the relationship between a platform and its users. The general terms of use should be renamed “Contracts to use data and tasks provided by humans for commercial purposes,” if the aim is commercial. Because this is what all platforms have in common: extracting value from data and deciding who has the right to use it.

 

 


Even without dark matter, Xenon1T is a success

Xenon1T is the largest detector of dark matter in the world. Unveiled in 2015, it searches for this invisible material — which is five times more abundant in the universe than ordinary matter — from the Gran Sasso laboratory in Italy, buried under a mountain. In May 2017, an international collaboration of 130 scientists published the first observations made by the instrument. Dominique Thers, the coordinator of the experiment in France and a researcher at Subatech*, explains the importance of these initial results from Xenon1T. He gives us an overview of this cutting-edge research, which could unlock the secrets of the universe. 

Learn more about Xenon1T by reading our article about the experiment.

What did the Xenon1T collaboration work on between the inauguration a year and a half ago and the first results published last month?

Dominique Thers: We spent the better part of a year organizing the validation of the instruments to make sure that they worked properly. The entire collaboration worked on this qualification and calibration phase between fall 2015 and fall 2016. This phase can be quite long and it’s difficult to predict in advance how long it will take. We were very satisfied to finish it in a year — a short time for a large-scale experiment like Xenon1T.

 

So you had to wait one year before launching the first real experiment?

DT: That’s right. The first observations were launched in early December 2016. We exposed the ton of xenon to potential dark matter particles for exactly 34.2 days. In reality, the actual time was a bit longer, since we have to recalibrate the instruments regularly and no data is recorded during these periods. This period of exposure ended on January 18, when three high-magnitude earthquakes were recorded near Gran Sasso. Because of the mechanical disturbance, the instruments had to be serviced over the course of a week, and we decided at that time to proceed to what we call “unblinding.”

 

Does that mean you only discovered what you had been recording with the experiment once it was finished, rather than in real time?

DT: Yes, this is in line with the logic of our community. We perform data analysis independently from data acquisition. This allows us to limit bias in our analysis as much as possible, the kind of bias that could arise if we stopped an observation period to check whether or not there had been an interaction between the xenon and dark matter. Once we have reached an exposure time we consider satisfactory, we stop the experiment and look at the data. The analysis is prepared in advance, and in general everyone is ready for this moment. The earthquake occurred very close to the scheduled end date, so we preferred to stop the measurements then.

 

The results did not reveal interactions between the xenon and dark matter particles, which would have represented a first direct observation of dark matter. Does this mean that the collaboration has been a failure?

DT: Not at all! It’s important to understand that there is fierce competition around the globe to increase the volume of ordinary material exposed to dark matter. With a ton of xenon, Xenon1T is the world’s largest experiment, and potentially, the most likely to observe dark matter. It was out of the question to continue over a long period of time without first confirming that the experiment had reached an unprecedented level of sensitivity. With this first publication, we have proven that Xenon1T is up to the challenge. However, Xenon1T will only reach its maximum sensitivity in sessions lasting 18 to 24 months, so it holds great promise.

 

How does this sensitivity work? Is Xenon1T really more sensitive than other competing experiments?

DT: A very simple but illustrative way to put it is that the longer the detector is exposed to dark matter, the more likely it is to record an interaction between it and ordinary matter. Sensitivity therefore improves roughly in proportion to exposure time. So it’s clear to see why, after obtaining this world record in just one month, we are optimistic about the capacities of Xenon1T over an 18 to 24-month period. But we cannot go much further than that, since we would then run into an excessive level of background noise, which would hide potential observations of dark matter with Xenon1T.
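Using only the figures quoted in the interview, a quick calculation shows what this proportionality implies for the planned 18 to 24-month runs:

```python
# Rough illustration of the "proportional to exposure" argument using the
# figures quoted in the interview: the same ton of xenon exposed for
# 18 to 24 months accumulates far more exposure than the first 34.2-day run.
first_run_days = 34.2
for months in (18, 24):
    long_run_days = months * 30.4   # average days per month
    print(f"{months} months -> about {long_run_days / first_run_days:.0f}x "
          f"the exposure of the first run")
```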

 


The Xenon1T experiment in the Gran Sasso laboratory in Italy. On the left, the xenon reservoir enclosed in protective casing. On the right, rooms housing instruments used for analysis and control in the experiment.

 

So it was more important for the Xenon1T collaboration to confirm the superiority of its experiment than to directly carry out an 18 to 24-month period of exposure, which may have been more conclusive?

DT: This enabled us to confirm the quality of Xenon1T, both for the scientific community and for the governments which support us and need to justify their investments. It was a way of responding to the financial and human resources provided by our partners, our collaborators, and ourselves. And we do not necessarily control observation time at our level. It also depends on results from competing experiments. The idea is not to keep our eyes closed for 18 months without concerning ourselves with what is happening elsewhere. If another experiment claims to have found traces of dark matter in an energy range where Xenon1T has visibility, we can stop the acquisitions in order to confirm or disprove these results. This first observation positions us as the best-placed authority to settle any scientific disagreements.

 

Your relationship with other experiments seems a bit unusual: you are all in competition with one another but you also need each other.

DT: It is very important to have several sites on Earth which can report a direct observation of dark matter. Naturally, we hope that Xenon1T will be the first to do so. But even if it is, we’ll still need other sites to demonstrate that the dark matter observed in Italy is the same as that observed elsewhere. But this does not mean that we cannot all improve the sensitivity of our individual experiments in order to maintain or recover the leading role in this research.

 

So Xenon1T is already looking to the future?

DT: We are already preparing the next experiment and determining what Xenon1T will be like in 2019 or 2020. The idea is to gain an order of magnitude in the mass of ordinary material exposed to potential dark matter particles with XENONnT. We are thinking of developing an instrument which will contain ten tons of xenon. In this respect we are competing with the American LZ experiment and the Chinese PandaX collaboration. They also hope to work with several tons of xenon in a few years’ time. By then, we may have already observed dark matter…

*Subatech is a joint research unit between IMT Atlantique, CNRS and Université de Nantes.

 


Viruses and malware: are we protecting ourselves adequately?

Cybersecurity incidents are increasingly gaining public attention. They are frequently mentioned in the media and discussed by specialists, such as Guillaume Poupard, Director General of ANSSI, the French national information security agency. This attests to the fact that these digital incidents have an increasingly significant impact on our daily lives. Questions therefore arise about how we are protecting our digital activities, and whether this protection is adequate. The publicity surrounding security incidents may, at first glance, lead us to believe that we are not doing enough.

 

A look at the current situation

Let us first take a look at the progression of software vulnerabilities since 2001, as illustrated by the National Vulnerability Database (NVD), the reference site of the American National Institute of Standards and Technology (NIST).

 


Distribution of published vulnerabilities by severity level over time. CC BY

 

Analyzing the distribution of these vulnerabilities, as published by NIST in the visualizations on the National Vulnerability Database, we observe that since 2005 there has not been a significant increase in the number of vulnerabilities published each year. The distribution of severity levels (high, medium, low) has also remained relatively steady. Nevertheless, the situation may turn out differently in 2017, since, just halfway through the year, we have already reached publication levels similar to those of 2012.
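For readers who want to reproduce this kind of year-by-severity count, a minimal sketch is shown below. It assumes a local CSV export of NVD entries with hypothetical column names (“published”, “severity”); it is not an official NVD tool or format.

```python
# Sketch of how the year-by-severity counts behind the figure could be
# reproduced from a local export of NVD entries. The file name and column
# names ("published", "severity") are assumptions, not an official format.
import pandas as pd

cves = pd.read_csv("nvd_export.csv", parse_dates=["published"])
cves["year"] = cves["published"].dt.year
counts = (cves.groupby(["year", "severity"])
              .size()
              .unstack(fill_value=0)           # one column per severity level
              .reindex(columns=["LOW", "MEDIUM", "HIGH"], fill_value=0))
print(counts.tail(10))  # published vulnerabilities per year, by severity
```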

It should be noted, however, that the growing number of vulnerabilities published compared to the years before 2005 is also partially due to greater exposure of systems and software to compromise attempts and external audits. For example, Google has implemented Google Project Zero, which specifically searches for vulnerabilities in programs and makes them public. It is therefore natural that more discoveries are made.

There is also an increasing number of objects, the much-discussed Internet of Things, which use embedded software and therefore present vulnerabilities. The recent example of the “Mirai” botnet demonstrates the vulnerability of these environments, which account for a growing portion of our digital activities. The rise in the number of vulnerabilities published therefore simply reflects the increase in our digital activities.

 

What about the attacks?

The publicity surrounding attacks is not directly connected to the number of vulnerabilities, even though vulnerabilities play a role. The notion of vulnerability does not directly express the impact a given vulnerability may have on our lives. Indeed, the effect of the malicious code WannaCry, which affected the British health system by disabling certain hospitals and emergency services, can be viewed as a significant escalation in the harmfulness of malicious code. This attack led to delayed care, or possibly even deaths, on an unprecedented scale.

It is always easy to say, in hindsight, that an event was foreseeable. And yet, it must be acknowledged that the use of “old” tools (Windows XP, SMBv1) in these vital systems is problematic. In the digital world, fifteen years represents three or even four generations of operating systems, unlike in the physical world, where we may use equipment dating from 20 or 30 years ago, if not longer. Who could imagine a car being obsolete (to the point of no longer being usable) after five years? This major difference in how we evaluate time, which is deeply ingrained in our current way of life, is largely responsible for the success and impact of the attacks we are experiencing today.

It should also be noted that in terms of both scale and impact, digital attacks are not new. In the past, worms such as CodeRed in 2001 and Slammer in 2003 also infected a significant number of machines, making the internet unusable for some time. The only difference is that at the time of these attacks, critical infrastructures were less dependent on a permanent internet connection, which limited the impact to the digital world alone.

The most critical attacks, however, are not those from which the attackers benefit the most. In the Canadian Bitcoin hijack of 2014, for example, attackers diverted this virtual currency for direct financial gain without disturbing the bitcoin network, while other, similar attacks on routing in 2008 made the network largely unavailable without any financial gain.

So, where does all this leave us in terms of the adequacy of our digital protection?

There is no question that outstanding progress has been made in protecting information systems over the past several years. The detection of an increasing number of vulnerabilities, combined with progressively shorter periods between updates, is continually strengthening the reliability of digital services. The automation of the update process for individuals, which concerns operating systems as well as browsers, applications, telephones and tablets, has helped limit exposure to vulnerabilities.

At the same time, in the business world we have witnessed a shift towards a real understanding of the risks involved in digital uses. This, along with the introduction of technical tools and resources for training and certification, could help increase all users’ general awareness of both the risks and opportunities presented by digital technology.

 

How can we continue to reduce the risks?

After working in this field for twenty-five years, and though we must remain humble in the face of the risks we face and will continue to face, I remain optimistic about the possibilities of strengthening our confidence in the digital world. Nevertheless, it appears necessary to support users in their digital activities in order to help them understand how these services work and the associated risks. ANSSI’s publication of computer hygiene measures for personal and business use is an important example of this need for information and training, which will help all individuals make conscious, appropriate choices in their digital activities.

Another aspect, which is more oriented towards developers and service providers, is increasing the modularity of our systems. This will allow us to control access to our digital systems, make them simple to configure, and easier to update. In this way, we will continue to reduce our exposure to the risk of a computer-related attack while using our digital tools to an ever-greater extent.

Hervé Debar, Head of the Telecommunications Networks and Services department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.

 

Big Data, TeraLab, Anne-Sophie Taillandier

What is Big Data?

On the occasion of the Big Data Congress in Paris, which was held on 6 and 7 March at the Palais des Congrès, Anne-Sophie Taillandier, director of TeraLab, examines this digital concept which plays a leading role in research and industry.

 

Big Data is a key element in the history of data storage. It has driven an industrial revolution and is a concept inherent to 21st century research. The term first appeared in 1997, and initially described the problem of an amount of data that was too big to be processed by computer systems. These systems have greatly progressed since, and have transformed the problem into an opportunity. We talked with Anne-Sophie Taillandier, director of the Big Data platform TeraLab about what Big Data means today.

 

What is the definition of Big Data?

Anne-Sophie Taillandier: Big Data… it’s a big question. Our society, companies, and institutions have produced an enormous amount of data over the last few years. This growth has been driven by the multiplication of sources (sensors, the web, after-sales service, etc.). What is more, the capacity of computers has increased tenfold, so we are now able to process these large volumes of data.

These data are very varied: text, measurements, images, videos, or sound. They are multimodal, meaning they can be combined in several ways. They contain rich information and are worth using to optimize existing products and/or services, or to invent new approaches. In any case, it is not the quantity of the data alone that matters. Big Data enables us to cross-reference this information with open data, and can therefore provide us with relevant insights. Finally, I prefer to speak of data innovation rather than Big Data – it is more appropriate.

 

Who are the main actors and beneficiaries of Big Data?

AST: Everyone is an actor, and everyone can benefit from Big Data. All industry sectors are involved (mobility, transport, energy, geospatial data, insurance, etc.), as is the health sector, which concerns citizens most directly. Research is a key player in Big Data and an essential partner to industry. The capacities of machines now allow us to establish new algorithms for processing large quantities of data. The algorithms are progressing quickly, and we are constantly pushing the boundaries.

Data security and governance are also very important. Connected objects, for example, accumulate user data. This raises the question of securing this information. Where do the data go? But also, what am I allowed to use them for? Depending on the case, anonymization might be appropriate. These are the types of questions facing the Big Data stakeholders.

 

How can society and companies use Big Data?

AST: Innovation in data helps us to develop new products and services, and to optimize existing ones. Take the example of the automobile. Vehicles generate data that allow us to optimize maintenance. The data accumulated from several vehicles can also be useful in manufacturing the next vehicle: they can assist in the design process. These same data may also enable us to offer new services to passengers, professionals, suppliers, etc. Another important field is health. E-health promotes better healthcare follow-up and may also improve practices, making them better adapted to the patient.

 

What technology is used to process Big Data?

AST: The technologies that allow us to process data are highly varied. There are algorithmic approaches such as Machine Learning and Deep Learning, and artificial intelligence more broadly. Then there are the software frameworks, whether open source or commercial. It is a very broad field. With Big Data, companies can open up their data in an aggregated form to develop new services. Finally, technology is advancing very quickly, and is constantly influencing companies’ strategic decisions.
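To give a purely illustrative idea of what such tools look like in practice (this sketch is not part of the interview and does not describe TeraLab’s own stack), here is a minimal machine-learning workflow using the open-source scikit-learn framework on synthetic data; the dataset and model are arbitrary stand-ins:

    # Illustrative sketch only: a minimal machine-learning workflow with the
    # open-source scikit-learn framework, trained on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for real measurements (sensors, logs, etc.).
    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Train a simple model and evaluate it on held-out data.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

The point is not the particular model but the pattern: open-source frameworks make this kind of pipeline, from raw data to a deployable service, accessible to any company.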

Quantum computer, Romain Alléaume

What is a Quantum Computer?

The use of quantum logic for computing promises a radical change in the way we process information. The calculating power of a quantum computer could surpass that of today’s biggest supercomputers within ten years. Romain Alléaume, a researcher in quantum information at Télécom ParisTech, helps us to understand how they work.

 

Is the nature of the information in a quantum computer different?

Romain Alléaume: In classical computer science, information is encoded in bits: 1 or 0. It is not quite the same in quantum computing, where information is encoded on what we refer to as “quantum bits”, or qubits. And there is a big difference between the two. A standard bit exists in one of two states, either 0 or 1. A qubit can exist in any superposition of these two states, and can therefore take on far more than two values.

There is a stark difference between using several bits or several qubits. While n standard bits can only take one value among 2^n possibilities, n qubits can be placed in any superposition of these 2^n basis states. For example, 5 bits take a value among 32 possibilities: 00000, 00001… right up to 11111. 5 qubits can take on any linear superposition of those 32 states, an immensely richer set of possibilities. This phenomenal expansion in the size of the space of accessible states is what explains the quantum computer’s greater computing capacity.
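To make the size of that state space concrete, here is a small numerical illustration of our own (not taken from the interview), in Python with NumPy: describing n qubits requires 2^n complex amplitudes, a number that doubles with every qubit added.

    import numpy as np

    # A register of n classical bits holds exactly one of 2**n values;
    # an n-qubit register is described by 2**n complex amplitudes.
    for n in (5, 10, 20, 30):
        print(f"{n} qubits -> state vector of dimension {2**n:,}")

    # Example: 5 qubits placed in an equal superposition of all 32 basis states.
    n = 5
    state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)
    print("dimension:", state.size, "| norm:", np.linalg.norm(state))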

 

Concretely, what does a qubit look like?

RA: Concretely, we can encode a qubit on any quantum system with two states. The most favorable experimental systems are the ones we know how to manipulate precisely. This is for instance the case with the energy levels of electrons in an atom. In quantum mechanics, the energy of an electron “trapped” around an atomic nucleus may only take certain specific “quantized” values, hence the name quantum mechanics. We can call the first two energy levels of the atom 0 and 1: 0 corresponding to the lowest energy level and 1 to a higher energy level, known as the “excited state”. We can then encode a quantum bit by putting the atom in the 0 state or in the 1 state, but also in any superposition (linear combination) of the 0 state and the 1 state.
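Written out in the standard notation (a textbook formula, not one quoted in the interview), the general state of such a qubit is

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
    \qquad \alpha, \beta \in \mathbb{C},
    \qquad |\alpha|^2 + |\beta|^2 = 1,

where |0⟩ is the ground state, |1⟩ the excited state, and |α|² and |β|² give the probabilities of finding the atom in each level when it is measured.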

To create good qubits, we have to find systems such that the quantum information remains stable over time. In practice, creating very good qubits is an experimental feat: atoms tend to interact with their surroundings and lose their information. We call this phenomenon decoherence. To avoid decoherence, we have to carefully protect the qubits, for example by putting them in very low temperature conditions.

 

What type of problems does the quantum computer solve efficiently?

RA: It exponentially increases the speed with which we can solve “promise problems”, that is, problems with a defined structure, where we know the shape of the solutions we are looking for. However, for unstructured search (inverting a directory to find a name from a phone number, for example), the quantum computer has only been proven to speed up the process by a square-root factor compared with a regular computer. There is an improvement, but not a spectacular one.
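As a rough order-of-magnitude illustration of that square-root factor (our own numbers, not the interview’s), the quantum search algorithm behind this speed-up, known as Grover’s algorithm, needs on the order of √N steps to search N unsorted entries, against roughly N for a classical search:

    import math

    # Illustrative comparison for unstructured search over N entries:
    # a classical search scales as N, Grover-style quantum search as sqrt(N).
    N = 1_000_000  # a hypothetical directory with one million entries
    classical_queries = N
    grover_queries = math.ceil((math.pi / 4) * math.sqrt(N))  # optimal Grover iteration count
    print(f"classical: ~{classical_queries:,} lookups, quantum: ~{grover_queries:,} iterations")

A million-entry directory thus drops from about a million lookups to fewer than a thousand quantum iterations: real, but far from the exponential gains seen on structured problems.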

It is important to understand that the quantum computer is not magic and cannot accelerate just any computational problem. In particular, one should not expect quantum computers to replace classical computers. Their main scope will probably be the simulation of quantum systems that cannot be simulated with standard computers, for example chemical reactions or superconductivity. While quantum simulators are likely to be the first concrete application of quantum computing, we already know of quantum algorithms that can be applied to solve complex optimization problems, or to accelerate computations in machine learning. We can expect to see quantum processors used as co-processors, to accelerate specific computational tasks.

 

What can be the impact of the advent of large quantum computers?

RA: The construction of large quantum computers would also enable us to break most of the cryptography that is used today on the Internet. Their advent is unlikely to occur in the next 10 years. Yet, since encrypted data may be stored for years, even tens of years, we need to start thinking now about new cryptographic techniques that will resist the quantum computer.
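The canonical example, well established although not named in the interview, is Shor’s algorithm: it factors large integers in polynomial time, while the best known classical method (the general number field sieve) is sub-exponential, and it is precisely the hardness of factoring that RSA-style public-key cryptography relies on:

    \text{Shor (quantum): } O\big((\log N)^3\big)
    \qquad \text{vs.} \qquad
    \text{number field sieve (classical): } \exp\Big(\big(\tfrac{64}{9}\big)^{1/3} (\ln N)^{1/3} (\ln\ln N)^{2/3} (1+o(1))\Big)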

Read the blog post: Confidential communications and quantum physics

 

When will the quantum computer compete with classical supercomputers?

RA: Even more than in classical computing, quantum computing requires error-correcting codes to improve the quality of the information encoded on qubits and to scale up. We can currently build quantum computers with just over a dozen qubits, and we are beginning to develop small quantum computers that work with error-correcting codes. We estimate that a quantum computer would need around 50 qubits to outperform a supercomputer and solve problems that are currently beyond reach. In terms of time, we are not far away: probably five years for this important step, often referred to as “quantum supremacy”.
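A back-of-the-envelope calculation (our own, assuming 16 bytes per complex amplitude) shows why roughly 50 qubits is the tipping point: merely storing the corresponding state vector on a classical machine becomes impractical.

    # Memory needed just to store an n-qubit state vector classically,
    # assuming 16 bytes (two 64-bit floats) per complex amplitude.
    BYTES_PER_AMPLITUDE = 16
    for n in (30, 40, 50):
        size_gb = (2**n) * BYTES_PER_AMPLITUDE / 1e9
        print(f"{n} qubits -> about {size_gb:,.0f} GB just for the state vector")

At 50 qubits this is on the order of 18 million gigabytes (roughly 18 petabytes), far more memory than today’s largest supercomputers have, which is why simulating such a machine classically is out of reach.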