
What is a volatile organic compound (VOC)?

Pollution in urban areas is a major public health issue. While peaks in the concentration of fine particles often make the news, they are not the only urban pollutants. Volatile organic compounds, or VOCs, also present a hazard. Some are carcinogenic, while others react in the atmosphere, contributing to the formation of secondary pollutants such as ozone or secondary aerosols, which are very small particles. Nadine Locoge, a researcher at IMT Lille Douai, reviews the basics about VOCs, reminding us that they are not only present in outdoor air.

 

What is a volatile organic compound (VOC)?

Nadine Locoge: It is a chemical compound made primarily of carbon and hydrogen. Other atoms, such as nitrogen or sulfur, can be incorporated into the molecule in variable amounts. All VOCs are volatile at ambient temperature. This is what differentiates them from other pollutants like fine particles, which are in condensed form at ambient temperature.

Read more on I’MTech: What are fine particles?

How do they form?

NL: On a global scale, nature is still the primary source of VOCs. Vegetation, typically forests, produces 90% of the earth’s total emissions. But in urban settings this trend is reversed, and anthropogenic sources dominate. In cities, the main sources of emissions are automobiles, both from exhaust and the evaporation of fuel, and various heating methods: oil, gas, wood, etc. Industry is also a major source of VOC emissions.

Are natural VOCs the same as those produced by humans?

NL: No, in general they do not belong to the same chemical families. They have different structures, which implies different consequences. Natural sources produce a lot of isoprene and terpenes, which are often used for their fragrant properties. Anthropogenic activities, on the other hand, produce aromatic compounds such as benzene, which is highly carcinogenic.

Why is it important to measure the concentrations of VOCs in the air?

NL: There are several reasons. First, because some have direct impacts on our health. For example, concentrations of benzene in outdoor air are regulated: they must not exceed an annual average of 5 micrograms per cubic meter. Also, some VOCs react once they are in the air, forming other pollutants. For example, they can generate aerosols, which are nanoparticles, after interacting with other reactive species. VOCs can also react with atmospheric oxidants and cause the formation of ozone.
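To make the link between VOCs and ozone more concrete, here is a highly simplified, textbook version of the reaction chain (not detailed in the interview), in which a generic hydrocarbon written as RH is oxidized and the resulting nitrogen dioxide is split by sunlight, releasing the oxygen atom that forms ozone:

```latex
% Highly simplified tropospheric ozone formation chain (generic VOC written as RH)
\begin{aligned}
\mathrm{RH} + \mathrm{OH}^{\bullet} &\rightarrow \mathrm{R}^{\bullet} + \mathrm{H_2O} \\
\mathrm{R}^{\bullet} + \mathrm{O_2} &\rightarrow \mathrm{RO_2}^{\bullet} \\
\mathrm{RO_2}^{\bullet} + \mathrm{NO} &\rightarrow \mathrm{RO}^{\bullet} + \mathrm{NO_2} \\
\mathrm{NO_2} + h\nu &\rightarrow \mathrm{NO} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{O_3} + \mathrm{M}
\end{aligned}
```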

Are VOCs only found in outside air?

NL: No, in fact these species are particularly present in indoor air. All the studies at both the national and European level show that VOC concentrations in indoor air in buildings are higher than outside. These are not necessarily the same compounds in these two cases, yet they pose similar risks. One of the emblematic indoor air pollutants is formaldehyde, which is carcinogenic.

There are several sources of VOCs in indoor air: outdoor air entering as indoor air is renewed, for example, but construction materials and furniture are particularly significant sources of VOC emissions. Regulation in this area is progressing, particularly through labels on construction materials that take this aspect into account. The legislative aspect is crucial as buildings become more energy efficient, since this often means less air is exchanged in order to retain heat, and therefore the indoor air is renewed less frequently.

How can we fight VOC emissions?

NL: Inside, in addition to using materials with the lowest possible emissions and ventilating rooms as recommended by ADEME (the French environment and energy management agency), there are devices that can trap and destroy VOCs. The principle is either to trap them irreversibly, or to cause them to react in order to destroy them, or more precisely, transform them into species that do not affect our health, ideally carbon dioxide and water. These techniques are widely used in industrial environments, where emission concentrations are relatively high and the chemical species are not very diverse. But in indoor environments VOCs are more varied and present at lower concentrations, which makes them harder to treat. In addition, the use of these treatment systems remains controversial: if the chemical processes involved are not optimized and adapted to the target species, they can cause reactions that generate secondary compounds even more hazardous to human health than the primary species.

Is it possible to decrease VOC concentrations in the outside air?

NL: The measures in this area are primarily regulatory and are aimed at reducing emissions. Exhaust fumes from automobiles, for example, are subject to emission limits. For sources associated with heating, the requirements vary greatly depending on whether the heating is collective or individual. In general, the methods are ranked according to the amount of emissions. Minimum performance requirements are imposed to optimize combustion and therefore produce fewer VOCs, and emission limit values have been set for certain pollutants (including VOCs). In general, emission-reduction targets are set at the international and national level and are then broken down by industry.

In terms of ambient concentrations, there have been some experiments in treating pollutants—including volatile organic compounds—like in the tunnel in Brussels where the walls and ceiling were covered with a cement-based photocatalytic coating. Yet the results from these tests have not been convincing. It is important to keep in mind that in ambient air, the sources of VOCs are numerous and diffuse. It is therefore difficult to lower the concentrations. The best method is still to act to directly reduce the quantity of emissions.

 

 


What is an eco-material?

Reducing the environmental footprint of human constructions is one of the major issues facing the ecological transition. Achieving this goal requires the use of eco-materials. Gwenn Le Saout, a researcher in materials at IMT Mines Alès, explains what these materials are, their advantages and the remaining constraints that prevent their large-scale use.

 

How would you define an eco-material?

Gwenn Le Saout: An eco-material is an alternative to a traditional material for a specific use. It has a lower environmental impact than the traditional material it replaces, yet it maintains similar properties, particularly in terms of durability. Eco-materials are used within a general eco-construction approach aimed at reducing the structures’ environmental footprint.

Can you give us an example of an eco-material?

GLS: Cement has a significant CO₂ footprint. Cement eco-materials are therefore being developed in which part of the cement is replaced by foundry slags. Slags are byproduct materials from steel processes that are generated when metal is melted. So, interestingly, we now call slags “byproducts”, whereas they used to be seen as waste! This proves that there is a growing interest in recovering them, partly for the cement industry.

Since concrete is one of the primary construction materials, are there any forms of eco-concrete?

GLS: Eco-concrete is a major issue in eco-construction, and a lot of scientific work has been carried out to support its development. Producing concrete requires aggregates—often sand from mining operations. These natural aggregates can be replaced by aggregates from demolition concrete which can thus be reused. Another way of producing eco-concrete is by using mud. Nothing revolutionary here, but this process is gaining in popularity due to a greater awareness of materials’ environmental footprint.

Are all materials destined to be replaced by eco-materials?

GLS: No, the goal of eco-materials is not to replace all existing materials. Rather, the aim is to target uses for which materials with a low environmental impact can be used. For example, it is completely possible to build a house using concrete containing demolition aggregates. However, this would not be a wise choice for building a bridge, since the materials do not have exactly the same properties and different expertise is required.

What are the limitations of eco-materials?

GLS: The key point is their durability. For traditional concrete and materials, manufacturers have several decades of feedback. For eco-materials, and particularly eco-concrete, there is less knowledge about their durability. Many question marks remain concerning their behavior over time. This is why it is such an important aspect of the research: finding formulations that can ensure good long-term behavior and characterizing existing eco-materials to predict their durability. At the Civil Engineering Institute (IGC), we worked on the national RECYBETON project from 2014 to 2016 with Lafarge-Holcim, and were able to provide demonstrators for testing the use of recycled aggregates.

How can industrial stakeholders be convinced to switch to these eco-materials?

GLS: The main advantage is economic. Transporting and storing demolition materials is expensive. In the city, reusing demolition materials in the construction of new buildings therefore represents an interesting opportunity because it would reduce the transport and storage costs. We also participated in the ANR project ECOREB with IGC on this topic to find solutions for recycling concrete. We must also keep in mind that Europe has imposed an obligation to reuse materials: 70% of demolition waste must be recycled. Switching to eco-materials using demolition products therefore offers a way for companies to comply with this directive.


What is photomechanics?

How can we measure the deformation of a stratospheric balloon composed only of an envelope a few micrometers thick? It is impossible to attach a sensor to it because this would distort the envelope’s behavior… Photomechanics, which refers to measurement methods using images and computer analysis, makes it possible to measure this deformation or a material’s temperature without making any contact. Jean-José Orteu, a researcher in artificial vision for photomechanics, control and monitoring at IMT Mines Albi, explains the principles behind photomechanical methods, which are used in the aeronautics, automotive and nuclear industries.

 

What is photomechanics?

Jean-José Orteu: We can define photomechanics as the application of optical measurements to experimental mechanics and, more specifically, the study of the behavior of materials and structures. The techniques that have been developed are used to measure materials’ deformation or temperature.

Photomechanics is a relatively young discipline, roughly 30 years old. It is based on around ten different measurement techniques that can be applied at scales ranging from the nanoscale to the dimensions of an airplane, and to both static and dynamic systems. Among these different techniques, two are primarily used: the digital image correlation (DIC) method for measuring deformations, and the infrared thermography method for measuring temperatures.

 

How are these two techniques implemented?

JJO: For DIC, we position one or several cameras in front of a material: only one for a planar material that undergoes in-plane deformation, and several for the measurement of a three-dimensional material. The cameras film the material as it is deformed under the effect of mechanical stress and/or heat. Once the images are taken, the deformation of the material is calculated from the deformation of the images obtained: if the material is deformed, so is the image. This deformation is measured using computer processing and related back to the material.

This is referred to as the white light method because the material is lit by an incoherent light from standard lighting. Other more complex photomechanical techniques require the use of a laser to light the material: these are referred to as interferometric methods.  They are useful for very fine measurements of displacements in the micrometer or nanometer range.
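To make the image-correlation step itself more concrete, here is a minimal, illustrative sketch of subset matching; it is not the software used at IMT Mines Albi, and real DIC codes add sub-pixel interpolation, subset shape functions and strain computation over a full grid of points:

```python
# A minimal, illustrative sketch of subset-based digital image correlation (DIC).
# It estimates the integer-pixel displacement of one small subset between a
# "reference" and a "deformed" image using zero-normalized cross-correlation (ZNCC).
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subset(ref, deformed, center, half=10, search=5):
    """Find the displacement (du, dv) of the subset centered at `center`."""
    cy, cx = center
    patch = ref[cy-half:cy+half+1, cx-half:cx+half+1]
    best, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search+1):          # vertical shift candidates
        for du in range(-search, search+1):      # horizontal shift candidates
            cand = deformed[cy+dv-half:cy+dv+half+1, cx+du-half:cx+du+half+1]
            score = zncc(patch, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv  # integer-pixel displacement of this subset

# Synthetic example: a random speckle pattern shifted by (du, dv) = (3, 2) pixels.
rng = np.random.default_rng(0)
ref = rng.random((100, 100))
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))   # (rows, cols) = (dv, du)
print(match_subset(ref, deformed, center=(50, 50)))  # expected: (3, 2)
```

Repeating this matching at every point of a real speckle-painted surface yields the displacement field from which the deformation is computed.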

The second most frequently used technique in photomechanics is infrared thermography, which is used to measure temperatures. This uses the same process as the DIC technique, with the initial acquisition of infrared images followed by the computer processing of these images to determine the temperature of the observed material. Calculating a temperature using an image is no easy task. The material’s thermo-optical properties must be taken into account as well as the measuring environment.
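As an illustration of why “calculating a temperature using an image is no easy task”, here is a simplified, single-wavelength sketch based on an assumed textbook measurement model, not the calibration of any particular camera, showing how emissivity and the surrounding environment enter the calculation:

```python
# Illustrative sketch only: converting a measured infrared radiance into a surface
# temperature with a single-wavelength (monochromatic) model and an emissivity
# correction. Real thermographic cameras integrate over a spectral band and rely on
# factory calibration curves, but the structure of the calculation is similar.
import numpy as np

H = 6.626e-34   # Planck constant (J.s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(T, wavelength):
    """Blackbody spectral radiance at temperature T (K) and wavelength (m)."""
    return (2 * H * C**2 / wavelength**5) / (np.exp(H * C / (wavelength * KB * T)) - 1)

def temperature_from_radiance(L_measured, emissivity, T_ambient, wavelength=10e-6):
    """Invert the simple measurement model:
       L_measured = emissivity * L_bb(T_object) + (1 - emissivity) * L_bb(T_ambient)."""
    L_object = (L_measured - (1 - emissivity) * planck_radiance(T_ambient, wavelength)) / emissivity
    # Invert Planck's law to recover the object temperature
    return (H * C / (wavelength * KB)) / np.log(2 * H * C**2 / (wavelength**5 * L_object) + 1)

# Round-trip check: a 350 K object with emissivity 0.85 in a 293 K environment.
L = 0.85 * planck_radiance(350.0, 10e-6) + 0.15 * planck_radiance(293.0, 10e-6)
print(temperature_from_radiance(L, emissivity=0.85, T_ambient=293.0))  # ~350.0
```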

With all of these techniques, we can analyze the dynamic evolution of the deformation or temperature. The material is therefore analyzed both spatially and temporally.


Stereo-DIC measurement of the deformation field of a sheet of metal shaped using incremental forming

 

What type of camera is used for these measurement methods?

JJO: While camera resolution influences the quality and precision of the measurements, a traditional camera can already obtain good results. However, to study very fast phenomena, such as the impact of a bird in flight on an aircraft fuselage, very fast cameras are needed, which can take 10,000, 100,000 or even 1,000,000 images per second! In addition, for temperature measurements, infrared-sensitive cameras must be used.

 

What is the value of optical measurements as compared to other measurement methods?

JJO: Traditionally, a strain gauge is used to measure the deformation of a material. A strain gauge is a sensor that is glued or welded to the surface of the material to provide an isolated indication of its deformation. This gauge must be as nonintrusive as possible and must not alter the object’s behavior. The same problem exists for temperature measurements. Traditional techniques use a thermocouple, a temperature sensor that is also welded to the surface of the material. When the sensors are very small compared to the material, they are nonintrusive and therefore do not pose a problem. Yet for some applications, the use of contact sensors is impossible. For example, at IMT Mines Albi we worked on the deformation of a parachute when it inflates. But the canvas contained a lining only a few micrometers thick. A gauge would have been difficult to glue to it and would have greatly disrupted the material’s behavior. In this type of situation, photomechanics is indispensable, since no contact is required with the object.

Finally, both the gauge and the thermocouple provide only localized information, at the spot where the sensor is glued. You won’t get any information about a spot just ten centimeters away from the sensor. However, the problem in mechanics is that, most of the time, we do not know exactly where we will need information about deformation or temperature. The risk is therefore that of not welding or gluing the sensors in the spots where the deformation or temperature measurement is the most relevant. Optical methods, by contrast, provide field information: a deformation field or a temperature field. We can therefore view the material’s entire surface, including the areas where the deformation or temperature gradient is greatest.

 


Top, a material instrumented with gauges (only 6 measurement points). Middle, the same material to which speckled paint has been added to implement the optical DIC technique. Bottom, the deformation field measured via DIC (hundreds of measurement points).

What are the limitations of photomechanics?

JJO: In the beginning, photomechanical methods based on the use of cameras could only measure surface deformations. But over the last five or six years, an entire segment of photomechanics has begun to focus on deformations within objects. These new techniques require the use of specific sensors: tomographs. They make it possible to take X-ray images of the materials, which reveal core deformations after computer processing. The large volumes of data this technique generates raise big data issues.

In terms of temperature, contactless measurement within the material’s core is more complicated. A thesis was recently defended at IMT Mines Albi on a method that makes it possible to measure the temperature in a material’s core based on the fluorescence phenomenon. The results are very promising, but the research must be continued before industrial applications can be obtained.

In addition, despite its many advantages, photomechanics has not yet fully replaced strain gauges and thermocouples. In fact, optical measurement techniques have not yet been standardized. Typically, when measuring a deformation with a gauge, the method of measurement is standardized: what type of gauge is it?  How should it be attached to the material? A precise methodology must be followed. In photomechanics, whether in the choice of camera and its calibration and position, or the image processing in the second phase, everything is variable, and everyone creates his or her own method. In terms of certification, some industrial stakeholders therefore remain hesitant about the use of these methods.

There is also still work to be done in assessing measurement uncertainties. The image acquisition chain and processing procedure can be complex, and errors can distort the measurements at any stage. How can we ensure there are as few errors as possible? How can we assess measurement uncertainties? Research in this area is underway. The long-term goal is to be able to systematically provide a measurement field with a range of associated uncertainties. Today, this assessment remains complicated, especially for non-experts.

Nevertheless, despite these difficulties, the major industries that need to define the behavior of materials, such as the automotive, aeronautics and nuclear industries, all use photomechanics. And although progress must be made in assessing measurement uncertainties and establishing standardization, the results these optical methods achieve are often of better quality than those of traditional methods.

 


What is a smart grid?

The driving force behind the modernization of the electrical network, the smart grid is full of promise. It will mean savings for consumers and energy companies alike. In terms of the environment, it provides a solution for developing renewable energies. Hossam Afifi, a researcher in networks at Télécom SudParis, gives us a behind-the-scenes look at the smart grid.

 

What is the purpose of a smart grid?

Hossam Afifi: The idea behind a smart grid is to create savings by using a more intelligent electric network. The final objective is to avoid wasting energy, ensuring that each watt produced is used. We must first understand that today, the network is often run by electro-mechanical equipment that dates back to the 1960s. For the sake of simplicity, we will say it is controlled by engineers who use switches to remotely turn on or off the means of production and supply neighborhoods with energy. With the smart grid, all these tasks will be computerized. This is done in two steps. First, by introducing a measuring capacity using connected sensors and the Internet of Things. The control aspect is then added through machine learning to intelligently run the networks based on the data obtained via sensors, without any human intervention.
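As a purely illustrative sketch of this “measure, then control” loop (the forecasting model, figures and reserve margin below are assumptions chosen for illustration, not an operator’s algorithm):

```python
# A deliberately simple sketch of the "measure, then control" idea described above.
# It forecasts the next hour of district demand with a naive seasonal model (same
# hour of day, averaged over recent days) and then schedules dispatchable generation
# on top of an assumed forecast of renewable output.
import numpy as np

def forecast_next_hour(hourly_load, hour_of_day, history_days=7):
    """Average the load observed at the same hour over the last few days."""
    same_hour = hourly_load[hour_of_day::24][-history_days:]
    return float(np.mean(same_hour))

def dispatch(load_forecast, renewable_forecast, reserve_margin=0.1):
    """Schedule conventional generation to cover the forecast gap plus a safety margin."""
    gap = max(load_forecast - renewable_forecast, 0.0)
    return gap * (1.0 + reserve_margin)

# Synthetic smart-meter history: 30 days of hourly consumption (MW) with an evening peak.
rng = np.random.default_rng(1)
hours = np.arange(30 * 24)
load = 50 + 20 * np.cos(2 * np.pi * (hours % 24 - 18) / 24) + rng.normal(0, 2, hours.size)

demand = forecast_next_hour(load, hour_of_day=18)   # the 6 pm peak
print("forecast demand (MW):", round(demand, 1))
print("dispatchable generation to schedule (MW):", round(dispatch(demand, renewable_forecast=25.0), 1))
```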

 

Can you give us some concrete examples of what the smart grid can do?

HA: One concrete example is the reduction of energy bills for cities, municipal authorities (and hence local taxes) and major infrastructures. A more noticeable example is the architectural projects for buildings that combine offices and housing, which aim to even out the amount of power consumed over the course of the day and limit excessive peaks in energy consumption during high-demand hours. The smart grid rethinks the way cities are designed. For example, business districts are not at all advantageous for energy suppliers. They require a lot of energy over short periods of time, especially between 5pm and 7pm. This requires generators to be used to ensure the same quality of service during these peak hours, and having to turn them on and off represents costs. The ideal solution would be to even out the use of energy, making it easier to optimize the service provided. This is how the smart grid dovetails with smart city issues.

 

The smart grid is also presented as a solution to environmental problems. How are the two related?

HA: There is something very important we must understand: energy is difficult to store. This is one of the limits we face in the deployment of renewable energies, since solar, wind and marine energy sometimes produce electricity at times when we don’t need it. However, a network that can intelligently manage energy production and distribution is beneficial for renewable energies. For example, electric car batteries can be used to store the energy produced by renewable sources. During peaks in consumption, users can choose to disconnect from the conventional network, use the energy stored by their car in the garage, and receive financial compensation from their supplier. This is only possible with an intelligent network that can adapt supply in real time based on large amounts of data on production and consumption.

 

How important is data in the deployment of smart grids?

HA: It is one of the most important aspects, of course. All of the network’s intelligence relies on data; it is what feeds the machine learning algorithms. This aspect alone requires support provided by research projects. We have submitted one proposal to the Saclay federation of municipalities, for example. We propose to establish data banks to collect data on production and consumption in that area. Open data is an important aspect of smart grid development.

 

What are the barriers to smart grid deployment?

HA: One of the biggest barriers is that of standardization. The smart grid concept came from the United States, where the objective is entirely different. The main concern there is to interconnect state networks, which up until now were independent, in order to prevent black-outs. In Europe, we drew on this concept to complement the deployment of renewable energies and energy savings. However, we also need to interconnect with other European states. And unlike the United States, we do not have the same network standards as our German and Italian neighbors. This means we have a lot of work to do at a European level to define common data formats and protocols. We are contributing to this work through our SEAS project led by EDF.



What are fine particles?

During peak pollution events, everyone is talking about them. Fine particles are often accused of being toxic. Unfortunately, they are not only present during episodes of high pollution. Véronique Riffault, a researcher in atmospheric sciences at IMT Lille Douai, revisits the basics of fine particles to better understand what they are all about.

 

What does a fine particle look like?

Véronique Riffault: They are often described as spherical in shape, partly because scientists speak of diameter to describe their size. In reality, they come in a variety of shapes. When they are solid, they can indeed sometimes be spherical, but also cubic, or even made up of aggregates of smaller particles of different shapes. Some small fibers are also fine particles. This is the case with asbestos and nanotubes. Fine particles may also be liquid or semi-liquid. This happens when their chemical nature makes them soluble: they then dissolve when they meet water droplets in the atmosphere.

How are they created?

VR: The sources of fine particles are highly varied, and depend on the location and the season. They may be generated directly by human processes, which are generally linked to combustion activities. This is true of residential wood-burning heating, road traffic, industry, etc. There are also natural sources: sea salt from the oceans or mineral dust from deserts, but these particles are usually bigger. Indirectly, fine particles are also created by the condensation of gases or by oxidation, when atmospheric reactions make volatile organic compounds heavier. These “secondary” emissions are highly dependent on environmental conditions such as sunshine, temperature, etc.

Why do we hear about different sizes, and where does the term “PM” come from?

VR: Depending on their size, fine particles have different levels of toxicity. The smaller they are, the deeper they penetrate the respiratory system. Above 2.5 microns [1 micron = 1 thousandth of a millimeter], particles are stopped quite effectively by the nose and throat. Below this, they go into the lungs. The finest particles even get into the pulmonary alveoli and into the bloodstream. In order to categorize them, and to establish the resulting regulations, we distinguish fine particles by specific names: PM10, PM2.5, etc. The figure refers to the maximum size in microns, and “PM” stands for Particulate Matter.
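As a small illustrative helper (following the usual convention that PMx groups together all particles of at most x microns; it is not a regulatory text):

```python
# A small illustrative helper, not a regulatory definition: by convention, "PMx"
# groups together all particles whose aerodynamic diameter is at most x microns,
# so a single particle can belong to several categories at once.
def pm_categories(diameter_um: float) -> list[str]:
    """Return every PM class the particle falls into (PMx means diameter <= x microns)."""
    return [f"PM{cut}" for cut in (0.1, 1, 2.5, 10) if diameter_um <= cut]

for d in (0.05, 0.8, 5.0, 20.0):
    print(d, "µm ->", pm_categories(d) or ["coarser than PM10"])
```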

How can we protect ourselves from fine particles?

VR: One option is to wear a mask, but its effectiveness depends greatly on the way it is worn. When badly positioned, masks are useless. A mask can also give the wearer a false sense of security during peak pollution events. The risk is that they feel protected and carry on doing sport, for example. This leads them to hyperventilate, which increases their exposure to fine particles. The simplest measure would be not to produce fine particles in the first place. Measures to reduce traffic can be effective, provided that more than a small fraction of vehicles is immobilized. Authorities can also take measures to restrict agricultural spreading: fertilizer produces ammonia, which combines with nitrogen oxides to create ammonium nitrates, which are fine particles. People also need to be made aware that they should not burn green waste, such as dead leaves and branches, in their gardens, but should take it to recycling centers, and that they should reduce their use of wood-fired heating during peak pollution events.

Also read on I’MTech: Particulate matter pollution peaks: detection and prevention

Are fine particles dangerous outside of peak pollution events?

VR: Even outside of peak pollution events, there are more particles than there should be. The only European regulation on a daily basis is for PM10 particles. For PM2.5 particles, the limit is annual: less than 20 micrograms per cubic meter on average. This poses two problems. The World Health Organization (WHO) recommends a threshold of 10 micrograms per cubic meter, and this amount is regularly exceeded at several sites in France. The only thing helping us is that we are lucky to have an oceanic climate which brings rain. Precipitation removes the particles from the atmosphere. On average over a year, we remain below the limit, but on a daily basis we could be breathing in dangerous amounts of fine particles.



What is supply chain management?

Behind each part of your car, your phone or even the tomato on your plate, there’s an extensive network of contributors. Every day, billions of products circulate. The management of a logistics chain – or ‘supply chain management’ – organizes these movements on a smaller or larger scale. Matthieu Lauras, a researcher in industrial engineering at IMT Mines Albi, explains what it’s all about and the problems associated with supply chain management as well as their solutions.

 

What is a supply chain?

Matthieu Lauras: A supply chain consists of a network of facilities (factories, shops, warehouses, etc.) and partners stretching from the suppliers’ suppliers to the customers’ customers. It is the succession of all these participants that adds value and allows a finished consumer product or service to be created and delivered to the end customer.

For the management of supply chains, we focus on the flows of materials and information. The idea is to optimize the overall performance of the network: to be capable of delivering the right product to the right place at the right time, with the right standard of quality and cost. I often say to my students that supply chain management is the science of compromise. You have to find a good balance between several constraints and issues. This is what allows you to remain competitive in a sustainable way.

 

What difficulties are produced by the supply chain?

ML: The biggest difficulty with supply chains occurs when they are not managed in a centralized way. In the context of a single business, for example, the CEO is able to mediate between two departments if there is a problem. At the scale of a supply chain, however, there are several businesses that are separate legal entities, and no one person is able to act as the mediator. This means that participants have to get along, collaborate and coordinate.

This isn’t easy to do since one of the characteristics of a supply chain is the absence of total coherence between the local and global optimum. For example, I optimize my production by selling my product in 6-packs, to make things quicker, even though this isn’t necessarily what my customers want to ensure the product’s sale. They may prefer that the product is sold in packs of 10 rather than 6. Therefore, what I gain in producing 6-packs is then lost by the next participant who has to transform my product. This is just one example of the type of problem we try to tackle through research into supply chain management.

 

What does supply chain management research consist in?

ML: Research in this field spans several levels. There is a lot of information available; the question is how to exploit it. We offer tools which can process this data so that it can be passed on to the people (production/logistics managers, operations directors, demand managers, distribution/transport directors, etc.) who are in a position to make decisions and take action.

An important element is that of uncertainty and variability. The majority of tools used in the supply chain were designed in the 1960s or 1970s. The problem is that they were invented at a time when the economy was relatively stable. A business knew that it would sell a certain volume of a certain product over the five years to come. Today, we don’t really know what we’re going to sell in a year. Furthermore, we have no idea about the variations in demand that we will have to deal with, nor the new technological opportunities that may arise in the next six months. We are therefore obliged to ask what developments we can bring to the decision-making tools currently in use, in order to make them better adapted to this new environment.

In practice, research is based on three main stages: first, we design the mathematical models and the algorithms allowing us to find an optimal solution to a problem or to compare several potential solutions. Then we develop computing systems which are able to implement these. Finally, we conduct experiments with real data sets to assess the impact of innovations and suggested tools (the advantages and disadvantages).

Some tools in supply chain management are methodological, but the majority are computer-based. They generally consist of software such as enterprise resource planning packages (software containing several general-purpose tools) which can be used on a network scale, or alternatively APS (Advanced Planning and Scheduling) systems. Four elements are then developed through these tools: planning, collaboration, risk management and delay reduction. Amongst other things, these allow simulations of various scenarios to be carried out in order to optimize the performance of the supply chain.

 

What problems are these tools responding to?

ML: Let’s consider planning tools. In the supply chain for paracetamol, we’re talking about a product which needs to have immediate availability. However, it takes around 9 months between the moment when the first component is supplied and when the product is actually manufactured. This means we have to anticipate potential demand several months in advance. Depending on this, it is possible to predict the supplies of materials necessary for the product to be manufactured, but also the positioning of stock closer to or further from the client.

In terms of collaboration, the objective is to avoid conflicts that could paralyze the chain. This means that the tools facilitate the exchange of information and joint decision-making. Take the example of Carrefour and Danone. The latter sets up a TV advertising campaign for its new yogurt range. If this process isn’t coordinated with the supermarket, making sure that the products are in the shops and that  there is sufficient space to feature them, Danone risks spending lots of money on an advertising campaign without being able to meet the demand it creates.

Another range of tools deals with delay reduction. A supply chain has strong inertia: a change at the end of the chain (higher demand than expected, for example) takes time to be passed on and will have an impact on all participants for anything from a few weeks to several months. This is known as the “bullwhip effect”. In order to limit it, it is in everyone’s best interest to have shorter chains that are more reactive to changes. Research is therefore looking to reduce waiting times, information transmission time and even transport time between two points.
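A toy simulation gives a feel for this amplification; the ordering policy and numbers below are assumptions chosen for illustration, not a model of any real chain:

```python
# An illustrative simulation of the bullwhip effect described above. Each tier ships
# what the tier below ordered, then places an order equal to that demand plus a
# correction toward a target stock level. Small variations in end-customer demand
# are amplified as orders travel up the chain:
# retailer -> wholesaler -> distributor -> factory.
import numpy as np

rng = np.random.default_rng(42)
weeks = 52
tiers = 4                                        # retailer, wholesaler, distributor, factory
demand = 100 + rng.normal(0, 5, weeks)           # end-customer demand (units/week)

target_stock = 200.0
alpha = 0.5                                      # how aggressively stock gaps are corrected

orders = demand
amplification = []
for tier in range(tiers):
    stock = target_stock + float(orders.mean())  # start near the policy's steady state
    placed = np.empty(weeks)
    for t in range(weeks):
        stock -= orders[t]                               # ship what the tier below ordered
        correction = alpha * (target_stock - stock)      # try to restore the target stock
        placed[t] = max(orders[t] + correction, 0.0)     # order observed demand + correction
        stock += placed[t]                               # simplification: delivery is immediate
    amplification.append(np.std(placed) / np.std(demand))
    orders = placed                              # this tier's orders become the next tier's demand

print("order variability relative to end-customer demand, tier by tier:")
print([round(a, 2) for a in amplification])      # grows as we move up the chain
```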

Finally, today we cannot know exactly what the demand will be in 6 months. This is why we are working on the issue of risk sharing, or “contingency plans”, which allow us to limit the negative impact of risks. This can be implemented by calling upon several suppliers for any given component. If I then have a problem with one of them (a factory fire or liquidation, for example), I retain my ability to function.

 

Are supply chain management techniques applied to any fields other than that of commercial chains?

ML: Supply chain management is now open to other applications, particularly in the service world, in hospitals and even in banks. The principal aim is still to provide a product or service to a client. In the case of a patient waiting for an operation, resources are needed as soon as they enter the operating theater. All the necessary staff need to be available, from the stretcher bearer who carries the patient to the surgeon who operates on them. It is therefore a question of synchronizing resources and logistics.

Of course, there are also restrictions specific to this kind of environment. For example, in humanitarian logistics, the question of the customer does not arise in the same way as in commercial logistics. Indeed, the person benefiting from a service in a humanitarian supply chain is not the person who pays, as they would be in a commercial setting. However, there is still the need to manage the flow of resources in order to maximize the added value produced.

 


What is renewable energy storage?

The storage of green energy is an issue which concerns many sectors, whether for energy transition or for supplying power to connected objects using batteries. Thierry Djenizian, a researcher at Mines Saint-Étienne, explains the main problems to us, focusing in particular on how electrochemical storage systems work.

 

Why is the storage of renewable sources of energy (RSE) important today?

Thierry Djenizian: It has become essential to combat the various kinds of pollution created by burning fossil fuels (emission of nanoparticles, greenhouse gases, etc.) and also to face up to their impending shortage over the coming decades. Our only alternative is to use other natural, inexhaustible energy sources (hydraulic, solar, wind, geothermal and biomass). These sources allow us to convert solar, thermal or chemical energy into electrical and mechanical energy.

However, the energy efficiency of RSEs is considerably inferior to that of fossil fuels. Also, in the cases of solar and wind power, the source is “intermittent” and therefore varies over time. Their implementation requires complex and costly installation processes.

 

How is renewable energy stored?

TD: In a general sense, energy produced by RSE drives the production of electricity, which can be stored in systems that are either mechanical (hydraulic dams), electromagnetic (superconducting coils), thermal (latent or sensible heat) or electrochemical (chemical reactions generating electron exchange).

Electrochemical storage systems are made up of three elements: two electrodes (electronic conductors) separated by an electrolyte (an ion-conducting material in the form of a liquid, gel, ceramic, etc.). Electron transfer occurs at the surface of the electrodes (at the anode in the case of electron loss and at the cathode in the opposite case), and the electrons circulate in the circuit in the opposite direction to the ions. There are three main categories of electrochemical storage systems: accumulators or batteries, supercapacitors and fuel cells. For RSEs, the charges produced are ideally stored in accumulators, for reasons of energy performance and cost.

 

How do electrochemical storage systems work?

TD: Let’s take the example of accumulators (rechargeable batteries). Their size varies according to the quantity of energy required by the device in question, ranging from a button battery for a watch through to a car battery. Dimensions aside, accumulators function by using reversible electrochemical reactions.

Let’s consider the example of a discharged lithium-ion accumulator. One of the two electrodes contains lithium. When you charge the battery, it receives negative charges (electrons), in other words the electricity produced by the RSE. This supply of electrons triggers a chemical reaction that releases the lithium from the electrode in the form of ions. The ions then migrate through the electrolyte and insert themselves into the second electrode. When all the sites that can accommodate lithium on the second electrode are occupied, the battery is fully charged.

As the battery is discharged, reverse chemical reactions spontaneously occur, re-releasing the lithium ions which make their way back to the starting point. This allows for a current to be recovered which corresponds to the movement of previously stored charges.
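For readers who want to see the chemistry behind this back-and-forth, the half-reactions of one common cell type, a graphite/lithium cobalt oxide cell (an illustrative example, not specified in the interview), can be written as follows, reading left to right for charging and right to left for discharging:

```latex
% Illustrative half-reactions for a common graphite / LiCoO2 lithium-ion cell
% (charging read left to right, discharging right to left).
\begin{aligned}
\text{positive electrode:} \quad & \mathrm{LiCoO_2} \;\rightleftharpoons\; \mathrm{Li_{1-x}CoO_2} + x\,\mathrm{Li^+} + x\,e^- \\
\text{negative electrode:} \quad & x\,\mathrm{Li^+} + x\,e^- + \mathrm{C_6} \;\rightleftharpoons\; \mathrm{Li_xC_6}
\end{aligned}
```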

 

What difficulties are associated with electrochemical storage systems?

TD: Every approach has its own set of advantages and disadvantages. For example, fuel cells are expensive because they use platinum to speed up the chemical reactions. Additionally, hydrogen (the fuel within the cell) imposes many constraints in terms of production and safety. Hydrogen is hard to obtain in large quantities from sources other than (fossil) hydrocarbon compounds, and it is explosive, which means there are restrictions in terms of storage.

Supercapacitors are the preferred system for devices requiring a strong power supply over a short period of time. In basic terms, they allow a small amount of charge to be stored but can redistribute this very quickly. They can be found in the systems that power the opening of airplane doors for example, as these need a powerful energy supply for a short period of time. They are also used in hybrid car engines.

Conversely, the accumulators we talked about before allow a large number of charges to be stored but their release is slow. They are very energy efficient but not very powerful. In some ways, the two options are complementary.

Let’s use an analogy in which the charges are represented as a liquid. Supercapacitors are like a glass of water. Accumulators would then be comparable to a jug, in that they offer a much larger storage capacity for water (or charges) than the glass. However, the jug has a narrow neck, preventing the liquid from being poured quickly. The ideal would be to have a jug which could release its contents and then be refilled as easily as the glass of water. This is precisely the subject of current research, which aims to find systems that combine high energy density and high power density.
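Typical order-of-magnitude figures (illustrative values from the literature, not quoted in the interview) make the glass-and-jug picture concrete: dividing specific energy by specific power gives the characteristic time each device needs to deliver its charge.

```python
# Rough orders of magnitude (illustrative values, not figures from the interview)
# showing why supercapacitors behave like the "glass" and batteries like the "jug":
# characteristic discharge time = specific energy / specific power.
devices = {
    # name                  (Wh/kg, W/kg)
    "supercapacitor":        (5,    5000),
    "lithium-ion battery":   (200,   300),
}

for name, (energy_wh_per_kg, power_w_per_kg) in devices.items():
    seconds = energy_wh_per_kg * 3600 / power_w_per_kg
    print(f"{name}: delivers its full charge in roughly {seconds:.0f} s (~{seconds/60:.1f} min)")
```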

 

Which systems are best suited for the use of renewable energy as a power source?

TD: The field of potential applications is extremely vast as it encompasses all the growing energy needs that we need to satisfy. It includes everything from large installations (smart grid) to providing power for portable microelectronic devices (connected objects), and even transport (electric vehicles). In the latter, the battery directly influences the performance of the environmentally-friendly automobiles.

Today, lithium-ion batteries can considerably improve the technical characteristics of electric vehicles, making their usage possible. However, the energy density of these accumulators is still around 50 times lower than that of hydrocarbons. In order to produce batteries able to offer a credible electric car to the market, the main thing that needs to be done is to increase their energy storage capacity. Indeed, getting the maximum amount of energy possible from the smallest possible volume is the challenge faced by all transport. The electric car is no exception.


What is net neutrality?

Net neutrality is a legislative shield for preventing digital discrimination. Regularly defended in the media, it ensures equal access to the internet for both citizens and companies. The topic features prominently in a report on the state of the internet published on May 30 by Arcep (the French telecommunications and postal regulatory body). Marc Bourreau, a researcher at Télécom ParisTech, outlines the basics of net neutrality. He explains what it encompasses, the major threats it is facing, and the underlying economic issues.

 

How is net neutrality defined?

Marc Bourreau: Net neutrality refers to the rules requiring internet service providers (ISPs) to treat all data packets in the same manner. It aims to prevent discrimination by the network on the basis of the content, service, application or identity of the source of traffic. By content, I mean everything carried over IP. This includes the web, along with social media, news sites, streaming or gaming platforms, as well as other services such as e-mail.

If it is not respected, in what way may discrimination take place?

MB: An iconic example of discrimination occurred in the United States. An operator was offering a VoIP service similar to Skype. It had decided to block all competing services, including Skype in particular. In concrete terms, this means that customers could not make calls with an application other than the one offered by their operator. To give an equivalent, this would be like a French operator with an on-demand video service, such as Orange, deciding to block Netflix for its customers.

Net neutrality is often discussed in the news. Why is that?

MB: It is the subject of a long legal and political battle in the United States. In 2005, the American Federal Communications Commission (FCC), the regulator for telecoms, the internet and the media, determined that internet access no longer fell within the field of telecommunications from a legal standpoint, but belonged instead to information services. This had major consequences: in the name of freedom of information, the FCC had little power to regulate, which gave operators greater flexibility. In 2015, the Obama administration decided to place internet access in the telecommunications category once again. The regulatory power of the FCC was therefore restored, and it established three major rules in February 2015.

What are these rules?

MB: An ISP cannot block traffic except for objective traffic management reasons, such as ensuring network security. An ISP cannot degrade a competitor’s service. And an ISP cannot offer pay-to-use fast lanes giving access to better traffic treatment. In other words, they cannot create an internet highway. The new Trump administration has put net neutrality back in the public eye with the president’s announcement that he intends to roll back these rules. The new director of the FCC, Ajit Pai, appointed by Trump in January, has announced that he intends to reclassify internet service as belonging to the information services category.

Is net neutrality debated to the same extent in Europe?

MB: The European regulation for an open internet, adopted in November 2015, has been in force since April 30, 2016. This regulation establishes the same three rules, with a minor difference in the third rule focusing on internet highways. It uses stricter wording, thereby prohibiting any sort of discrimination. In other words, even if an operator wanted to set up a free internet highway, it could not do so.

Could the threats to net neutrality in the United States have repercussions in Europe?

MB: Not from a legislative point of view. But there could be some indirect consequences. Let’s take a hypothetical case: if American operators introduced internet highways, or charged for access to fast lanes, the major platforms could subscribe to these services. If Netflix had to pay for better access to networks within the United States, it could raise its subscription prices to offset this cost. And that could indirectly affect European consumers.

The issue of net neutrality seems to be more widely debated in the United States than in Europe. How can this be explained?

MB: Here in Europe, net neutrality is less of a problem than it is in the United States and it is often said that it is because there is greater competition in the internet access market in Europe. I have worked on this topic with a colleague, Romain Lestage. We analyzed the impact of competition on telecoms operators’ temptation to charge content producers. The findings revealed that as market competition increases, operators obviously earn less from consumers and are therefore more likely to make attempts to charge content producers. The greater the competition, the stronger the temptation to deviate from net neutrality.

Do certain digital technologies pose a threat to net neutrality in Europe? 

MB: 5G will raise questions about the relevance of the European legislation, especially in terms of net neutrality. It was designed as a technology that can provide services with very different degrees of quality. Some services will be very sensitive to server response time, others to speed. Between communications for connected devices and ultra-HD streaming, the needs are very different. This calls for creating different qualities of network service, which is, in theory, contradictory to net neutrality. Telecoms operators in Europe are using this as an argument for reviewing the regulation, arguing in addition that this would lead to increased investment in the sector.

Does net neutrality block investments?

MB: We studied this question with colleagues from the Innovation and Regulation of Digital Services Chair. Our research showed that without net neutrality regulation, fast lanes (internet highways) would lead to an increase in revenue for operators, which they would reinvest in network traffic management to improve service quality. Content providers who subscribe to fast lanes would benefit by offering users higher-quality content. However, these studies also showed that deregulation would lead to the degradation of free traffic lanes, to encourage content providers to subscribe to the pay-to-use lanes. Net neutrality legislation is therefore crucial to limiting discrimination against content providers, and consequently, against consumers as well.

 


What is Digital Labor?

Are we all working for digital platforms? This is the question posed by a new field of research: digital labor. Web companies use personal data to create considerable value —but what do we get in return? Antonio Casilli, a researcher at Télécom ParisTech and a specialist in digital labor, will give a conference on this topic on June 10 at Futur en Seine in Paris. In the following interview he outlines the reasons for the unbalanced relationship between platforms and users and explains its consequences.

 

What’s hiding behind the term “digital labor?”

Antonio Casilli: First of all, digital labor is a relatively new field of research. It appeared in the late 2000s and explores new ways of creating value on digital platforms. It focuses on the emergence of a new way of working, which is “taskified” and “datafied.” We must define these words in order to understand them better. “Datafied,” because it involves producing data so that digital platforms can derive value. “Taskified,” because in order to produce data effectively, human activity must be standardized and reduced to its smallest unit. This leads to the fragmentation of complex knowledge as well as to the risks of deskilling and the breakdown of traditional jobs.

 

And who exactly performs the work in question?

AC: Micro-workers who are recruited via digital platforms. They are unskilled laborers, the click workers behind the API. But, since this is becoming a widespread practice, we could say anyone who works performs digital labor. And even anyone who is a consumer. Seen from this perspective, anyone who has a Facebook, Twitter, Amazon or YouTube account is a click worker. You and I produce content —videos, photos, comments —and the platforms are interested in the metadata hiding behind this content. Facebook isn’t interested in the content of the photos you take, for example. Instead, it is interested in where and when the photo was taken, what brand of camera was used. And you produce this data in a taskified manner since all it requires is clicking on a digital interface. This is a form of unpaid digital labor since you do not receive any financial compensation for your work. But it is work nonetheless: it is a source of value which is tracked, measured, evaluated and contractually defined by the terms of use of the platforms.

 

Is there digital labor which is not done for free?

AC: Yes, that is the other category included in digital labor: micro-paid work. People who are paid to click on interfaces all day long and perform very simple tasks. These crowdworkers are mainly located in India, the Philippines, or in developing countries where average wages are low. They receive a handful of cents for each click.

 

How do platforms benefit from this labor?

AC: It helps them make their algorithms perform better. Amazon, for example, has a micro-work service called Amazon Mechanical Turk, which is almost certainly the best-known micro-work platform in the world. Its algorithms for recommending purchases, for example, need to be trained on large, high-quality databases in order to be effective. Crowdworkers are paid to sort, annotate and label images of products proposed by Amazon. They also extract textual information for customers, translate comments to improve additional purchase recommendations in other languages, write product descriptions, etc.

I’ve cited Amazon but it is not the only example.  All the digital giants have micro-work services. Microsoft uses UHRS, Google has its EWOQ service etc. IBM’s artificial intelligence, Watson, which has been presented as one of its greatest successes in this field, relies on MightyAI. This company pays micro-workers to train Watson, and its motto is “Train data as a service.” Micro-workers help train visual recognition algorithms by indicating elements in images, such as the sky, clouds, mountains etc. This is a very widespread practice. We must not forget that behind all artificial intelligence, there are, first and foremost, human beings. And these human beings are, above all, workers whose rights and working conditions must be respected.

Workers are paid a few cents for tasks proposed on Amazon Mechanical Turk, which includes such repetitive tasks as “answer a questionnaire about a film script.”

 

This form of digital labor is a little different from the kind I carry out because it involves more technical tasks.   

AC:  No, quite the contrary. They perform simple tasks that do not require expert knowledge. Let’s be clear: work carried out by micro-workers and crowds of anonymous users via platforms is not the ‘noble’ work of IT experts, engineers, and hackers. Rather, this labor puts downward pressure on wages and working conditions for this portion of the workforce. The risk for digital engineers today is not being replaced by robots, but rather having their jobs outsourced to Kenya or Nigeria where they will be done by code micro-workers recruited by new companies like Andela, a start-up backed by Mark Zuckerberg. It must be understood that micro-work does not rely on a complex set of knowledge. Instead it can be described as: write a line, transcribe a word, enter a number, label an image. And above all, keep clicking away.

 

Can I detect the influence of these clicks as a user?

AC: Crowdworkers hired by genuine “click farms” can also be mobilized to watch videos, make comments or “like” something. This is often what happens during big advertising or political campaigns. Companies or parties have a budget and they delegate the digital campaign to a company, which in turn outsources it to a service provider. And the end result is two people in an office somewhere, stuck with the unattainable goal of getting one million users to engage with a tweet. Because this is impossible, they use their budget to pay crowdworkers to generate fake clicks. This is also how fake news spreads to such a great extent, backed by ill-intentioned firms who pay a fake audience. Incidentally, this is the focus of the Profane research project I am leading at Télécom ParisTech with Benjamin Loveluck and other French and Belgian colleagues.

 

But don’t the platforms fight against these kinds of practices?

AC: Not only do they not fight against these practices, but they have incorporated them in their business models. Social media messages with a large number of likes or comments make other users more likely to interact and generate organic traffic, thereby consolidating the platform’s user base. On top of that, platforms also make use of these practices through subcontractor chains. When you carry out a sponsored campaign on Facebook or Twitter, you can define your target as clearly as you like, but you will always end up with clicks generated by micro-workers.

 

But if these crowdworkers are paid to like posts or make comments, doesn’t that raise questions about tasks carried out by traditional users?

AC: That is the crux of the issue. From the platform’s perspective, there is no difference between me and a click-worker paid by the micro-task. Both of our likes have the same financial significance. This is why we use the term digital labor to describe these two different scenarios. And it’s also the reason why Facebook is facing a class-action lawsuit filed with the Court of Justice of the European Union representing 25,000 users. They demand €500 per person for all the data they have produced. Google has also faced a claim for its Recaptcha, from users who sought to be re-classified as employees of the Mountain View firm. Recaptcha was a service which required users to confirm that they were not robots by identifying difficult-to-read words. The data collected was used to improve Google Books’ text recognition algorithms in order to digitize books. The claim was not successful, but it raised public awareness of the notion of digital labor. And most importantly, it was a wake-up call for Google, who quickly abandoned the Recaptcha system.

 

Could traditional users be paid for the data they provide?

AC: Since both micro-workers, who are paid a few cents for every click, and ordinary users perform the same sort of productive activity, this is a legitimate question to ask. On June 1, Microsoft decided to reward Bing users with vouchers in order to convince them to use their search engine instead of Google. It is possible for a platform to have a negative price, meaning that it pays users to use the platform. The question is how to determine at what point this sort of practice is akin to a wage, and whether the wage approach is both the best solution from a political viewpoint and the most socially viable. This is where we get into the classic questions posed by the sociology of labor. They also apply to Uber drivers, who make a living from the application and whose data is used to train driverless cars. Intermediary bodies and public authorities have an important role to play in this context. There are initiatives, such as one led by the IG Metall union in Germany, which strive to gain recognition for micro-work and establish collective negotiations to assert the rights of click workers and, more generally, all platform workers.

 

On a broader level, we could ask what a digital platform really is.

AC: In my opinion, it would be better if we acknowledged the contractual nature of the relationship between a platform and its users. The general terms of use should be renamed “Contracts to use data and tasks provided by humans for commercial purposes,” if the aim is commercial. Because this is what all platforms have in common: extracting value from data and deciding who has the right to use it.

 

 


What is Big Data?

On the occasion of the Big Data Congress in Paris, which was held on 6 and 7 March at the Palais des Congrès, Anne-Sophie Taillandier, director of TeraLab, examines this digital concept which plays a leading role in research and industry.

 

Big Data is a key element in the history of data storage. It has driven an industrial revolution and is a concept inherent to 21st century research. The term first appeared in 1997, and initially described the problem of an amount of data that was too big to be processed by computer systems. These systems have greatly progressed since, and have transformed the problem into an opportunity. We talked with Anne-Sophie Taillandier, director of the Big Data platform TeraLab about what Big Data means today.

 

What is the definition of Big Data?

Anne-Sophie Taillandier: Big Data… it’s a big question. Our society, companies and institutions have produced an enormous amount of data over the last few years. This growth has been driven by the growing number of sources (sensors, the web, after-sales service, etc.). What is more, the capacities of computers have increased tenfold. We are now able to process these large volumes of data.

These data are very varied: they may be text, measurements, images, videos or sound. They are multimodal, that is, able to be combined in several ways. They contain rich information and are worth using to optimize existing products and/or services, or to invent new approaches. In any case, it is not the quantity of data that matters most. Rather, Big Data enables us to cross-reference this information with open data, and can therefore provide us with relevant insights. Finally, I prefer to speak of data innovation rather than Big Data – it is more appropriate.

 

Who are the main actors and beneficiaries of Big Data?

AST: Everyone is an actor, and everyone can benefit from Big Data: all industry sectors (mobility, transport, energy, geospatial data, insurance, etc.) but also the health sector, which particularly concerns citizens. Research is a key factor in Big Data and an essential partner to industry. The capacities of machines now allow us to develop new algorithms for processing large quantities of data. The algorithms are progressing quickly, and we are constantly pushing the boundaries.

Data security and governance are also very important. Connected objects, for example, accumulate user data. This raises the question of securing this information. Where do the data go? But also, what am I allowed to use them for? Depending on the case, anonymization might be appropriate. These are the types of questions facing the Big Data stakeholders.

 

How can society and companies use Big Data?

AST: Innovation in data helps us to develop new products and services, and to optimize those that already exist. Take the example of the automobile. Vehicles generate data that allows us to optimize maintenance. The data accumulated from several vehicles can also be useful in manufacturing the next vehicle: it can assist in the design process. These same data may also enable us to offer new services to passengers, professionals, suppliers, etc. Another important field is health. E-health promotes better healthcare follow-up and may also improve practices, making them better adapted to the patient.

 

What technology is used to process Big Data?

AST: The technology allowing us to process data is highly varied. There are algorithmic approaches, such as machine learning and deep learning, and artificial intelligence more broadly. Then there are open source software frameworks, as well as paid solutions. It is a very broad field. With Big Data, companies can open up their data in an aggregated form to develop new services. Finally, the technology is advancing very quickly, and is constantly influencing companies’ strategic decisions.