Electroencephalogram: a brain imaging technique that is efficient but limited in terms of spatial resolution.

Technology that deciphers the way our brain works

Different techniques are used to study the functioning of our brain, including electroencephalography, magnetoencephalography, functional MRI and spectroscopy. The signals are processed and interpreted to analyze the cognitive processes in question. EEG and MRI are the two most commonly used techniques in cognitive science. Their performance offers hope but also raises concerns. What is the current state of brain function analysis and what are its limits?

 

Nesma Houmani is a specialist in electroencephalography (EEG) signal analysis and processing at Télécom SudParis. Neuron activity in the brain generates electrical changes which can be detected on the scalp. These are recorded using a cap fitted with strategically-placed electrodes. The advantages of EEG are that it is inexpensive, easily accessible and noninvasive for the subjects being studied. However, it generates a complex signal composed of oscillations associated with baseline brain activity when the subject is awake and at rest, transient signals linked to activations generated by the test, and variable background noise caused, notably, by the subject’s involuntary movements.

The level of noise depends, among other things, on the type of electrodes used, whether dry or gel-based. While the latter reduce the detection of signals not emitted by brain activity, they take longer to place, may cause allergic reactions and require the patient to wash thoroughly with shampoo after the examination, making it more complicated to carry out these tests outside hospitals. Dry electrodes are being introduced in hospitals, but the signals recorded have a high level of noise.

The researcher at Télécom SudParis uses machine learning and artificial intelligence algorithms to extract EEG markers. “I use information theory combined with statistical learning methods to process EEG time series of a few milliseconds.” Information theory assumes that signals with higher entropy contain more information. In other words, when the probability of an event occurring is low, the signal contains more information and is therefore more likely to be relevant. Nesma Houmani’s work allows spurious signals to be removed from the trace and the recorded EEG data to be interpreted more accurately.
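To make the entropy criterion concrete, here is a minimal sketch (illustrative only: the window length, binning and synthetic signals are assumptions, not the researcher’s actual pipeline) that estimates the Shannon entropy of short signal windows:

```python
# Illustrative sketch only: histogram-based Shannon entropy of short signal windows.
# Window length, bin settings and the synthetic signals are assumptions for the example.
import numpy as np

def shannon_entropy(window, n_bins=16, value_range=(-3.0, 3.0)):
    """Estimate Shannon entropy (in bits) of a 1-D window over fixed bins."""
    counts, _ = np.histogram(window, bins=n_bins, range=value_range)
    probs = counts / counts.sum()
    probs = probs[probs > 0]              # 0 * log(0) is treated as 0
    return -np.sum(probs * np.log2(probs))

rng = np.random.default_rng(0)
n_samples = 128                            # e.g. a short window at 256 Hz

quiet_window = 0.1 * rng.normal(size=n_samples)   # near-flat, predictable activity
rich_window = rng.normal(size=n_samples)          # broadband, "eventful" window

print(f"entropy of quiet window: {shannon_entropy(quiet_window):.2f} bits")
print(f"entropy of rich window : {shannon_entropy(rich_window):.2f} bits")
```

The quiet window falls into only a couple of bins and scores a low entropy, while the richer window spreads across many bins and scores higher, which is the sense in which it carries more information.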

A study published in 2015 showed that this technique allowed better definition of the EEG signal in the detection of Alzheimer’s disease. Statistical modeling allows consideration of the interaction between the different areas of the brain over time. As part of her research on visual attention, Nesma Houmani uses EEG combined with an eye tracking device to determine how a subject engages in and withdraws from a task: “The participants must observe images on a screen and carry out different actions according to the image shown. A camera is used to identify the point of gaze, allowing us to reconstitute eye movements,” she explains. Other teams use EEG for emotional state discrimination or for understanding decision-making mechanisms.

EEG provides useful data because it has a temporal resolution of a few milliseconds. It is often used in applications for brain-machine interfaces, allowing a person’s brain activity to be observed in real time with just a few seconds’ delay. “However, EEG is limited in terms of spatial resolution,” explains Nesma Houmani. This is because the electrodes are, in a sense, placed on the scalp in two dimensions, whereas the folds in the cortex are three-dimensional and activity may come from areas that are further below the surface. In addition, each electrode measures the sum of synchronous activity for a group of neurons.

The most popular tool of the moment: fMRI

Conversely, functional MRI (fMRI) has excellent spatial resolution but poor temporal resolution. It has been used a lot in recent scientific studies but is costly and access is limited by the number of devices available. Moreover, the level of noise it produces when in operation and the subject’s position lying down in a tube can be stressful for participants. Brain activity is reconstituted in real time by detecting a magnetic signal linked to the amount of blood transferred by micro-vessels at a given moment, which is visualized over 3D anatomical planes. Although activations can be accurately situated, hemodynamic variations occur a few seconds after the stimulus, which explains why the temporal resolution is lower than that of EEG.

fMRI produces section images of the brain with good spatial resolution but poor temporal resolution.

 

Nicolas Farrugia has carried out several studies with fMRI and music. He is currently working on applications for machine learning and artificial intelligence in neuroscience at IMT Atlantique. “Two main paradigms are being studied in neuroscience: coding and decoding. The first aims to predict brain activity triggered by a stimulus, while the second aims to identify the stimulus from the activity,” the researcher explains. A study published in 2017 showed the possibilities of fMRI associated with artificial intelligence in decoding. Researchers asked subjects to watch videos in an MRI scanner for several hours. A model was then developed using machine learning, which was able to reconstruct a low-definition image of what the participant saw based on the signals recorded in their visual cortex. fMRI is a particularly interesting technique for studying cognitive mechanisms, and many researchers consider it the key to understanding the human brain, but it nevertheless has its limits.
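The decoding idea can be sketched in a few lines of Python. This toy example (synthetic data and arbitrary dimensions, not the model used in the 2017 study) learns a linear map from simulated voxel responses back to a low-dimensional stimulus representation:

```python
# Toy "decoding" sketch in the spirit described above, not the 2017 study itself.
# All sizes and data are invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_voxels, n_features = 500, 200, 10

stimulus = rng.normal(size=(n_samples, n_features))          # e.g. image descriptors
encoding = rng.normal(size=(n_features, n_voxels))            # unknown brain "code"
voxels = stimulus @ encoding + rng.normal(0, 0.5, (n_samples, n_voxels))  # noisy fMRI-like data

X_train, X_test, y_train, y_test = train_test_split(
    voxels, stimulus, test_size=0.2, random_state=0)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)               # decoding: voxels -> stimulus
print("decoding R^2 on held-out data:", round(decoder.score(X_test, y_test), 3))
```

An encoding model would simply go in the opposite direction, predicting the voxel responses from the stimulus features.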

Reproducibility problems

Research protocols have changed recently. Nicolas Farrugia explains: “The majority of publications in cognitive neuroscience use simple statistical models based on functional MRI contrasts, subtracting the activations recorded in the brain for two experimental conditions A and B, such as reading versus rest.” But several problems have led researchers to modify this approach. “Neuroscience is facing a major reproducibility challenge,” admits Nicolas Farrugia. Different limitations have been identified in publications, such as small sample sizes, a high level of noise, and analyses carried out separately for each part of the brain, without taking into account interactions or the relative intensity of activation in each area.

These reproducibility problems are leading researchers to change methods, “from an inference technique in which all available data is used to obtain a model that cannot be generalized, to a prediction technique in which the model learns from part of the data and is then tested on the rest.” This approach, which is the basis of machine learning, allows the model’s relevance to be checked against reality. “Thanks to artificial intelligence, we are seeing the development of computational methods which were not possible with standard statistics. In time, this will allow researchers to predict what type of image or what piece of music a person is thinking of based on their brain activity.”
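A minimal sketch of this shift, on synthetic data rather than real neuroimaging recordings, is shown below: the same model looks far better when scored on the data it was fitted on than when evaluated by cross-validation on held-out folds.

```python
# Minimal illustration of inference-style fitting vs. prediction on held-out data.
# The data here is synthetic; this is not a real neuroimaging analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))                 # e.g. 200 trials x 50 brain-derived features
y = (X[:, 0] + rng.normal(0, 1, 200)) > 0      # condition A vs. condition B labels

model = LogisticRegression(max_iter=1000)

in_sample = model.fit(X, y).score(X, y)              # scored on the data used for fitting
cross_val = cross_val_score(model, X, y, cv=5).mean()  # scored on held-out folds

print(f"accuracy on the data used for fitting : {in_sample:.2f}")
print(f"cross-validated (generalization) accuracy: {cross_val:.2f}")
```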

Unfortunately, there are also reproducibility problems in signal processing with machine learning. Deep learning, which is based on artificial neural networks, is currently the most popular technique because it is very effective in many applications, but it requires adjusting hundreds of thousands of parameters using optimization methods. Researchers tend to adjust the parameters of their model as they evaluate it, repeating the evaluation on the same data and thus distorting the generalization of the results. The use of machine learning also raises another problem for signal detection and analysis: the interpretability of the results. Knowledge of deep learning mechanisms is currently very limited and is a field of research in its own right, so our understanding of how human neurons function could in fact come from our understanding of how deep artificial neurons function. A strange sort of mise en abyme!

 

Article written by Sarah Balfagon, for I’MTech.

 

More on this topic:

The “Miharu Takizakura”, a weeping cherry tree over a thousand years old. The tree grows in soil contaminated by the Fukushima accident.

The cherry trees of Fukushima

Written by Franck Guarnieri, Aurélien Portelli and Sébastien Travadel, Mines ParisTech.
The original version of this article was published on The Conversation.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]t’s 2019 and for many, the Fukushima Daiichi nuclear disaster has become a distant memory. In the West, the event is considered to be over. Safety standards have been audited and concerns about the sector’s security and viability officially addressed.

Still, the date remains traumatic. It reminds us of our fragility in the face of overpowering natural forces, and those that we unleashed ourselves. A vast area of Japan was contaminated, tens of thousands of people were forced from their homes, businesses closed. The country’s nuclear power plants were temporarily shut down and fossil fuel consumption rose sharply to compensate.

There’s also much work that remains to be done. Dismantling and decontamination will take several decades and many unprecedented challenges remain. Reactors are still being cooled, spent fuel must be removed, radioactive water has to be treated. Radioactivity measurements on site cannot be ignored, and are a cause for concern for the more than 6,000 people working there. Still, the risk of radiation may be secondary given the ongoing risk of earthquakes and tsunamis.

Surprisingly, for many Japanese the disaster has a different connotation – it’s seen as having launched a renaissance.

Rebuilding the world

Our research team became aware of the need to re-evaluate our ideas about the Fukushima Daiichi nuclear disaster during a visit in March 2017. As we toured the site, we discovered a hive of activity – a place where a new relationship between man, nature and technology is being built. The environment is completely artificial: all vegetation has been eradicated and the neighbouring hills are covered by concrete. Yet, at the heart of this otherworldly landscape there are cherry trees – and they were in full bloom. Curious, we asked the engineer who accompanied us why they were still there. His answer was intriguing. The trees will not be removed, even though they block access routes for decontamination equipment.

Cherry trees have many meanings for the Japanese. Since ancient times, they have been associated with 気 (ki, or life force) and, in some sense, reflect the Japanese idea of time as never-ending reinvention. A striking example: in the West we exhibit objects in museums, permanently excluding them from everyday life. This is not how things are done in Japan. For example, the treasures making up the Shōsō-in collection are only exhibited once each year, and the purpose is not to represent the past, but to show that these objects are still in the present. Another illustration is the Museum Meiji-mura, where more than 60 buildings from the Meiji era (1868-1912) have been relocated. The idea of ongoing reinvention is manifested in buildings that have retained their original function: visitors can send a postcard from a 1909 post office or ride on an 1897 steam locomotive.

We can better understand the relationship between this perception of time and events at Fukushima Daiichi by revisiting the origins of the country’s nuclear-power industry. The first facility was a British Magnox reactor, operational from 1966 to 1998. In 1971, construction of the country’s first boiling-water reactor was supervised by the US firm General Electric. These examples illustrate that for Japan, nuclear power is a technology that comes from elsewhere.

When the Tōhoku earthquake and tsunami struck in 2011, Japan’s inability to cope with the unfolding events at the Fukushima Daiichi nuclear power plant stunned the world. Eight years on, for the Japanese it is the nation’s soul that must be revived: not by the rehabilitation of a defective “foreign” object, but by the creation of new, “made in Japan” technologies. Such symbolism not only refers to the work being carried out by the plant’s operator, the Tokyo Electric Power Company (TEPCO), but also reflects Japanese society.

A photo selected for the NHK Fukushima cherry-tree competition. NHK/MCJP

The miraculous cherry tree

Since 2012, the Japanese public media organisation NHK has organised the “Fukushima cherry tree” photo competition to symbolise national reconstruction. Yumiko Nishimoto’s “Sakura Project” has the same ambition. Before the Fukushima disaster, Nishimoto lived in the nearby town of Naraha. Her family was evacuated and she was only able to return in 2013. Once home, she launched a national appeal for donations to plant 20,000 cherry trees along the prefecture’s 200-kilometre coastline. The aim of the 10-year project is simply to restore hope among the population and the “determination to create a community”, Nishimoto has said. The idea captured the country’s imagination and approximately a thousand volunteers turned up to plant the first trees.

More recently, the “Miharu Takizakura” cherry tree has made headlines. More than 1,000 years old and growing in land contaminated by the accident, its presence is seen as a miracle and it attracts tens of thousands of visitors. The same sentiment is embodied in the Olympic torch relay, which will start from Fukushima on March 26, 2020, for a 121-day trip around Japanese prefectures during the cherry-blossom season.

Fukushima, the flipside of Chernobyl?

This distinctly Japanese perception of Fukushima contrasts with its interpretation by the West, and suggests that we re-examine the links between Fukushima and Chernobyl. Many saw the Fukushima disaster as Chernobyl’s twin – another example of the radioactive “evil”, a product of the industrial hubris that had dug the grave of the Soviet Union.

In 1986 the fatally damaged Chernobyl reactor was encased in a sarcophagus and the surrounding area declared a no-go zone. Intended as a temporary structure, in 2017 the sarcophagus was in turn covered by the “New Safe Confinement”, a monumental structure designed to keep the site safe for 100 years. This coffin in a desert continues to terrify a population that is regularly told that it marks the dawn of a new era for safety.

Two IAEA agents examine work at Unit 4 of the Fukushima Daiichi Nuclear Power Station (April 17, 2013). Greg Webb/IAEA, CC BY

At the institutional level, the International Atomic Energy Agency (IAEA) responded to Chernobyl with the concept of “safety culture”. The idea was to resolve, once and for all, the issue of nuclear power plant safety. Here, the Fukushima accident had little impact: Infrastructure was damaged, the lessons learned were incorporated into safety standards, and resolutions were adopted to bring closure. In the end, the disaster was unremarkable – no more than a detour from standard procedures that had been established following Chernobyl. For the IAEA, the case is closed. The same applies to the nuclear sector as a whole, where business has resumed more or less as usual.

To some extent, Japan has fallen in line with these ideas. The country is improving compliance with international regulations and has increased its contribution to the IAEA’s work on earthquake response. But this Western idea of linear time is at odds with the country’s own framing of the disaster. For many Japanese, events are still unfolding.

While Chernobyl accelerated the collapse of the Soviet Union, Fukushima Daiichi has become a showcase for the Japanese government. The idea of ongoing reinvention extends to the entire region, through a policy of repopulation. Although highly controversial, this approach stands in stark contrast to Chernobyl, which remains isolated and abandoned.

Other differences are seen in the reasons given for the causes of the accident: The IAEA concluded that the event was due to a lack of safety culture – in other words, organisational failings led to a series of unavoidable effects that could have been predicted – while Japanese scientists either drew an analogy with events that occurred during the Second World War, or attributed the accident to the characteristics of the Japanese people.

Before one dismisses such conclusions as irrational, it’s essential to think again about the meaning of the Fukushima disaster.

The Conversation

IRON-MEN: augmented reality for operators in the Industry of the Future

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.

[divider style=”normal” top=”20″ bottom=”20″]

The Industry of the Future cannot happen without humans at the heart of production systems. To help operators adapt to the fast development of industrial processes and client demands, elm.leblanc, IMT and Adecam Industries have joined forces in the framework of the IRON-MEN project. The aim is to develop an augmented reality solution for human operators in industry.

 

Many production sites use manual processes. Humans are capable of a level of intelligence and flexibility that is still unattainable by industrial robots, an ability that remains essential for the French industrial fabric to satisfy increasingly specific, demanding and unpredictable customer and user demands.

Despite alarmist warnings about replacement by technology, humans must remain central to industrial processes for the time being. To enhance the ability of human operators, IMT, elm.leblanc and Adecam Industries have joined forces in the framework of the IRON-MEN project. The consortium will develop an augmented reality solution for production operators over a period of 3 years.

The augmented reality technology will be designed to help companies develop flexibility, efficiency and quality in production, as well as strengthen communication among teams and collaborative work. The solution developed by the IRON-MEN project will support users by guiding and assisting them in their daily tasks to allow them to increase their versatility and ability to adapt.

The success of such an intrusive piece of technology as an augmented reality headset depends on the user’s physical and psychological ability to accept it. This is a challenge that lies at the very heart of the IRON-MEN project, and will guide the development of the technology.

The aim of the solution is to propose an industrial and job-specific response that meets specific needs to efficiently assist users as they carry out manual tasks. It is based on an original approach that combines digital transformation tools and respect for the individual in production plants. It must be quickly adaptable to problems in different sectors that show similar requirements.

IMT will contribute its research capacity to support elm.leblanc in introducing this augmented reality technology within its industrial organization. Immersion, which specializes in augmented reality experiences, will develop the interactive software interface to be used by the operators. The solution’s level of adaptability in an industrial environment will be tested at the elm.leblanc production sites at Drancy and Saint-Thégonnec as well as through the partnership with Adecam Industrie. IRON-MEN is supported by the French General Directorate for Enterprises in the framework of the “Grands défis du numérique” projects.

When organizations respond to cyberattacks

Cyberattacks are a growing reality that organizations have to face up to. In the framework of the German-French Academy for the Industry of the Future, researchers at IMT and Technische Universität München (TUM) show that there are solutions to this virtual threat. In particular, the ASSET project is studying responses to attacks that target communication between smart objects and affect the integrity of computer systems. Frédéric Cuppens, a researcher in cybersecurity on this project at IMT Atlantique and coordinator of the Cybersecurity of critical infrastructures chair, explains the state-of-the-art defenses to respond to these attacks.

 

Cybersecurity is an increasingly pressing subject for a number of organizations. Are all organizations concerned?

Frédéric Cuppens: The number of smart objects is growing exponentially, including in different organizations. Hospitals, industrial systems, services and transport networks are examples of places where the Internet of Things plays a major role and which are becoming increasingly vulnerable in terms of cybersecurity. We have already seen attacks on smart cars, pacemakers, smart meters etc. All organizations are concerned. To take the case of industry alone, since it is one of our fields of interest at IMT Atlantique, these new vulnerabilities affect production chains and water treatment just as much as agricultural processes and power generation.

What attacks are most often carried out against this type of target?

FC: We have classified the attacks carried out against organizations in order to study the threats. There are lots of attacks on the integrity of computer systems, affecting their ability to function correctly. This is what happens when, for example, an attacker takes control of a temperature sensor to make it show an incorrect value, leading to an emergency shutdown. Then there are also lots of attacks against the availability of systems, which consist in preventing access to services or data exchange. This is the case when an attacker interferes with communication between smart objects.

Are there responses to these two types of attack?

FC: Yes, we are working on measures to put in place against these types of attack. Before going into detail, we need to understand that cybersecurity is composed of three aspects: protection, which consists for example in filtering communication or controlling access to prevent attack; defense, which detects when an attack is being made and provides a response to stop it; and lastly resilience which allows systems to continue operating even during an attack. The research we are carrying out against attacks targeting availability or integrity include all three components, with special focus on resilience.

Confronted with attacks against the availability of systems, how do you guarantee this resilience?

FC: To interfere with communication, all you need is a jamming device. They are prohibited in France, but it is not hard to get hold of one on the internet. A jammer interferes with communication on certain frequencies only, depending on the type of jamming device used. Some are associated with Bluetooth frequencies, others with Wi-Fi networks or GPS frequencies. Our approach to fighting jammers is based on direct-sequence spread spectrum. The signal is “buried in noise” and is therefore difficult to detect with a spectrum analyzer.
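As a rough illustration of the principle (a textbook-style sketch, not the project’s implementation), direct-sequence spread spectrum multiplies each data bit by a pseudo-random chip sequence known to both ends; the receiver recovers the bit by correlating the received signal with that sequence:

```python
# Simplified direct-sequence spread spectrum sketch (illustrative only).
# Each data bit is multiplied by a pseudo-random +/-1 chip sequence; a receiver that
# knows the sequence recovers the bit by correlation, while the transmitted signal
# itself looks like noise to anyone without the sequence.
import numpy as np

rng = np.random.default_rng(7)
chips_per_bit = 64
spreading_code = rng.choice([-1, 1], size=chips_per_bit)   # shared spreading sequence

def spread(bits):
    symbols = 2 * np.asarray(bits) - 1                     # map 0/1 to -1/+1
    return np.concatenate([b * spreading_code for b in symbols])

def despread(signal):
    frames = signal.reshape(-1, chips_per_bit)
    correlation = frames @ spreading_code                  # correlate with the code
    return (correlation > 0).astype(int)

bits = [1, 0, 1, 1, 0]
tx = spread(bits)
noisy_rx = tx + rng.normal(0, 2, tx.size)                  # strong interference/noise
print("recovered bits:", despread(noisy_rx).tolist())      # recovers the original bits
```

In the “moving target” approach described below, the spreading sequence itself is regularly renewed so that an attacker cannot reuse it.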

Does that allow you to effectively block an attack by interference?

FC: This is a real process of resilience. We assume that, to interfere with the signal, the attacker has to find the frequency the two objects are communicating on, and we want to ensure this does not jeopardize communication. By the time the attacker has found the frequency and launched the attack, the spreading code has been updated. This approach is what we call “moving target defense”, in which the target of the attack – the spreading sequence – is regularly updated. It is very difficult for an attacker to complete their attack before the target is updated.

Do you use the same approach to fight against attacks on integrity?

FC: Sort of, but the problem is not the same. In this case, we have an attacker who is able to inject data in a clever way so that the intrusion is not detected. Take, for example, a tank being filled. The attacker corrupts the sensor so that it tells the system that the tank is already full, and can thus stop the pumps in the treatment station or distillery. We assume that the attacker knows the system very well, which is entirely possible. The attacks on Iranian centrifuges for uranium enrichment showed that an attacker can collect highly sensitive data on the functioning of an infrastructure.

How do you fight against an attacker who is able to go completely unnoticed?

FC: State-of-the-art security systems propose introducing physical redundancy. Instead of having one sensor for temperature or water level, we have several sensors of different types. This means the attacker has to attack several targets at once. Our research proposes going even further by introducing virtual redundancy. An auxiliary system simulates the expected functioning of the machines or structures. If the data sent by the physical sensors differs from the data from the virtual model, then we know something abnormal is happening. This is the principle of a digital twin that provides a reference value in real time. It is similar to the idea of moving target defense, but with an independent virtual target whose behavior varies dynamically.
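The comparison logic can be sketched very simply (an illustrative toy, not the actual system): a virtual model predicts the expected tank level, and an alert is raised when the reported sensor value drifts too far from it.

```python
# Bare-bones sketch of the digital twin comparison described above (illustrative only).
# A virtual model computes the expected tank level; an alarm is raised when the
# reported sensor value diverges from it by more than a tolerance.
def expected_level(t_minutes, inflow_rate=2.0):
    """Virtual twin: tank level (%) expected after t_minutes of filling."""
    return min(100.0, inflow_rate * t_minutes)

def check(t_minutes, reported_level, tolerance=5.0):
    residual = abs(reported_level - expected_level(t_minutes))
    return "OK" if residual <= tolerance else "ALERT: sensor disagrees with twin"

# The attacker corrupts the sensor at t=10 to claim the tank is already full.
for t, reported in [(5, 10.2), (10, 100.0), (15, 31.0)]:
    print(f"t={t:2d} min, sensor={reported:5.1f} %: {check(t, reported)}")
```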

This work is being carried out in partnership with Technische Universität München (TUM) in the framework of the ASSET project by the German-French Academy for the Industry of the Future. What does this partnership contribute from a scientific point of view?

FC: IMT Atlantique and TUM each bring complementary skills. TUM is more focused on the physical layers and IMT Atlantique focuses more on the communication and service layers. Mines Saint-Étienne is also contributing and collaborating with TUM on attacks on physical components. They are working together on laser attacks on the integrity of components. Each party offers skills that the other does not necessarily have. This complementarity allows solutions to be designed to fight against cyberattacks at different levels and from different points of view. It is crucial in a context where computer systems are becoming more complicated: countermeasures must follow this level of complexity. Dialogue between researchers with different skills stimulates the quality of the protection we are developing.

 

[divider style=”normal” top=”20″ bottom=”20″]

Renewal of the Cybersecurity and critical infrastructures chair (Cyber CNI)

Launched in January 2016 and after 3 years of operation, the Chair for the cybersecurity of critical infrastructures (Cyber CNI) is being renewed for another 3 years thanks to the commitment of its academic and industrial partners. The IMT chair led by IMT Atlantique benefits from partnerships with Télécom ParisTech and Télécom SudParis and support from the Brittany region – a region at the forefront of cutting-edge cybersecurity technology – in the framework of the Cyber Center of Excellence. In the context of a sponsor partnership led by Fondation Mines-Télécom, five industrial partners have committed to this new period: AIRBUS, AMOSSYS, BNP Paribas, EDF and Nokia Bell Labs. The official signing to renew the Chair took place at FIC (International Cybersecurity Forum) in Lille on 22 January 2019.

Read the news on I’MTech: Cyber CNI chair renewed for 3 years

[divider style=”normal” top=”20″ bottom=”20″]

The installation of a data center in the heart of a city, like this one belonging to Interxion in La Plaine Commune in Île-de-France, rarely goes unnoticed.

Data centers: when digital technology transforms a city

As the tangible part of the digital world, data centers are flourishing on the outskirts of cities. They are promoted by elected representatives, sometimes contested by locals, and are not yet well-regulated, raising new social, legal and technical issues. Here is an overview of the challenges this infrastructure poses for cities, with Clément Marquet, doctoral student in sociology at Télécom ParisTech, and Jean-Marc Menaud, researcher specialized in Green IT at IMT Atlantique.

 

On a global scale, information technology contributes almost 2% of global greenhouse gas emissions, as much as civil aviation. In addition, “The digital industry consumes 10% of the world’s energy production,” explains Clément Marquet, sociology researcher at Télécom ParisTech, who is studying this hidden side of the digital world. The energy consumed to keep this infrastructure running smoothly, in the name of reliability and a guaranteed level of service quality, is of particular concern.

With the demand for real-time data, the storage and processing of these data must be carried out where they are produced. This explains why data centers have been popping up throughout the country over the past few years. But not just anywhere. There are close to 150 throughout France. “Over a third of this infrastructure is concentrated in Ile-de-France, in Plaine Commune – this is a record in Europe. It ends up transforming urban areas, and not without sparking reactions from locals,” the researcher says.

Plaine Commune, a European Data Valley

In his work, Clément Marquet questions why these data centers are integrated into urban areas. He highlights their “furtive” architecture, as they are usually built “in new or refitted warehouses, without any clues or signage about the activity inside”. He also looks at the low amount of interest from politicians and local representatives, partly due to their lack of knowledge on the subject. He takes Rue Rateau in La Courneuve as an example. On one side of this residential street, just a few meters from the first houses, a brand-new data center was inaugurated at the end of 2012 by Interxion. The opening of this installation did not run smoothly, as the sociologist explains:

“These 1,500 to 10,000 m2 spaces have many consequences for the surrounding urban area. They are a burden on energy distribution networks, but that is not all. The air conditioning units required to keep them cool create noise pollution. Locals also highlight the risk of explosion due to the 568,000 liters of fuel stored on the roof to power the backup generator, and the fact that energy is not recycled in the city heating network. Across the Plaine Commune agglomeration, there are also concerns regarding the low number of jobs created locally compared with the amount of land occupied. It is no longer just virtual.”

Because these data centers have such high energy needs, the debate in Plaine Commune has centered on the risk of a virtual saturation of the electricity network. Data centers reserve more power than they actually consume, in order to cope with any shortages, and this reserved electricity cannot be put to other uses. So, while La Courneuve is home to almost 27,000 inhabitants, the data center requires the equivalent of a city of 50,000 people. The sociologist argues that the inhabitants were not consulted when this building was installed; they ended up taking legal action against the installation. “This raises the question of the viability of these infrastructures in the urban environment. They are invisible and yet invasive.”

Environmentally friendly integration possible

One of the avenues being explored to make these data centers more virtuous and more acceptable is to give them environmentally friendly features, such as hooking them up to city heating networks. Data centers could become energy producers, rather than just consumers. In theory, this would make it possible to heat swimming pools or houses. However, it is not an easy operation. In 2015 in La Courneuve, Interxion had announced that it would connect a forthcoming 20,000 m² center, but did not follow through on this promised change in practice. Connecting to the city heating network requires major, complicated coordination between all parties. The project was faced with the hosting companies’ reluctance to disclose their consumption, and hosts do not always have the tools required to recover the heat.

Another possibility is to optimize the energy performance of data centers. Many Green IT researchers are working on environmentally responsible digital technology. Jean-Marc Menaud, coordinator of the collaborative project EPOC (Energy Proportional and Opportunistic Computing systems) and director of the CPER SeDuCe project (Sustainable Data Center), is one of these researchers. He is working on the anticipation of consumption, or predicting the energy needs of an application, combined with anticipating electricity production. “Energy consumption by digital technologies is based on three foundations: one third is due to the non-stop operation of data centers, one third is due to the Internet network itself” he explains, and the last third comes down to user terminals and smart objects.

Read on I’MTech: Data centers, taking up the energy challenge

Since summer 2018, the IMT Atlantique campus has hosted a new type of data center, one devoted to research and available for use by the scientific community. “The main objective of SeDuCe is to reduce the energy consumed by the air conditioning system. Because in energy, nothing goes to waste, everything can be transformed. If we want the servers to run well, we have to evacuate the heat, which is at around 30-35°C. Air conditioning is therefore vital,” he continues. “And in the majority of cases, the air conditioning requirement is colossal: for 100 watts required to run a server, another 100 are used to cool it down.”

How does SeDuCe work? The data center, with a capacity of 1,000 cores, or around 50 servers, is full of sensors and probes closely monitoring temperatures. The servers are isolated from the room in airtight racks. This airtight confinement makes it possible to cut cooling costs tenfold: for 100 watts used by the servers, only 10 watts are required to cool them. “Soon, SeDuCe will be powered during the daytime by solar panels. Another of our goals is to get users to adapt the way they work according to the amount of energy available. A solution that can absolutely be applied to even the most impressive data centers.” Proof that the energy transition is possible via clouds too.
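As a back-of-the-envelope check on these figures (the ratio framing below is my own; only the 100 W and 10 W values come from the article), the cooling overhead can be expressed as a simple facility-to-IT power ratio:

```python
# Rough comparison of the two situations described above.
# Only the wattage figures come from the article; the ratio framing is illustrative.
def overhead_ratio(it_power_w, cooling_power_w):
    """Total facility power divided by useful IT power."""
    return (it_power_w + cooling_power_w) / it_power_w

print("conventional room :", overhead_ratio(100, 100))   # 2.0 -> 1 W of cooling per W of IT
print("SeDuCe-style racks:", overhead_ratio(100, 10))    # 1.1 -> 0.1 W of cooling per W of IT
```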

 

Article written by Anne-Sophie Boutaud, for I’MTech.


Algorithmic bias, discrimination and fairness

David Bounie, Professor of Economics, Head of Economics and Social Sciences at Télécom ParisTech

Patrick Waelbroeck, Professor of Industrial Economics and Econometrics at Télécom ParisTech and co-founder of the Chair Values and Policies of Personal Information

[divider style=”dotted” top=”20″ bottom=”20″]

The original version of this article was published on the website of the Chair Values and Policies of Personal Information. This Chair brings together researchers from Télécom ParisTech, Télécom SudParis and Institut Mines Télécom Business School, and is supported by the Mines-Télécom Foundation.

[divider style=”dotted” top=”20″ bottom=”20″]

 

[dropcap]A[/dropcap]lgorithms rule our lives. They increasingly intervene in our daily activities – career paths, adverts, recommendations, scoring, online searches, flight prices – as improvements are made in data science and statistical learning.

Although initially considered neutral, they are now blamed for biasing results and discriminating against people, intentionally or not, according to their gender, ethnicity or sexual orientation. In the United States, studies have shown that African American people are penalised more heavily in court decisions (Angwin et al., 2016). They are also discriminated against more often on online flat rental platforms (Edelman, Luca and Svirsky, 2017). Finally, online targeted and automated ads promoting job opportunities in the Science, Technology, Engineering and Mathematics (STEM) fields seem to be shown more frequently to men than to women (Lambrecht and Tucker, 2017).

Algorithmic bias raises significant issues in terms of ethics and fairness. Why are algorithms biased? Is bias unpreventable? If so, how can it be limited?

Three sources of bias have been identified, in relation to cognitive, statistical and economic aspects. First, algorithm results vary according to the way programmers, i.e. humans, coded them, and studies in behavioural economics have shown there are cognitive biases in decision-making.

  • For instance, a bandwagon bias may lead a programmer to follow popular models without checking whether these are accurate.
  • Anticipation and confirmation biases may lead a programmer to favour their own beliefs, even though available data challenges such beliefs.
  • Illusory correlation may lead someone to perceive a relationship between two independent variables.
  • A framing bias occurs when a person draws different conclusions from the same dataset based on the way the information is presented.

Second, bias can be statistical. The phrase ‘Garbage in, garbage out’ refers to the fact that even the most sophisticated machine will produce incorrect and potentially biased results if the input data provided is inaccurate. After all, it is pretty easy to believe in a score produced by a complex proprietary algorithm that is seemingly based on multiple sources. Yet, if the data set on which the algorithm is trained to learn to categorise or predict is partial or inaccurate, as is often the case with fake news, trolls or fake identities, results are likely to be biased. What happens if the data is incorrect? Or if the algorithm is trained using data from US citizens, who may behave very differently from European citizens? Or if certain essential variables are omitted? For instance, how might relational skills and emotional intelligence (which machines struggle to capture, as they do not feel emotions), leadership skills or teamwork be encoded in an algorithm? Omitted variables may lead an algorithm to produce a biased result for the simple reason that they may be correlated with the variables used in the model. Finally, what happens when the training data comes from truncated samples or is not representative of the population that you wish to make predictions for (sample-selection bias)? In his Nobel Memorial Prize-winning research, James Heckman showed that selection bias is related to omitted-variable bias. Credit scoring is a striking example. In order to determine which risk category a borrower belongs to, algorithms rely on data related to people who were eligible for a loan in a particular institution – they ignore the files of people who were denied credit, did not need a loan or got one from another institution.
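The credit-scoring case can be reproduced on synthetic data. In the toy sketch below (invented numbers, not a real credit model), default risk depends on income and on an unobserved “stability” factor; because past loans were only granted when income plus stability was high, a scorer trained on approved applicants alone attenuates the estimated effect of income:

```python
# Toy illustration of sample-selection bias in credit scoring (synthetic data only).
# Default risk depends on income and an unobserved "stability" factor; historical
# approvals selected on both, so a model trained on approved applicants only
# underestimates how much risk is associated with low income.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20000
income = rng.normal(0, 1, n)
stability = rng.normal(0, 1, n)                        # not available to the scorer
risk = -2.0 * income - 2.0 * stability                 # true (unobservable) risk score
default = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

approved = (income + stability) > 0                    # historical approval rule
X = income.reshape(-1, 1)

coef_truncated = LogisticRegression().fit(X[approved], default[approved]).coef_[0, 0]
coef_full = LogisticRegression().fit(X, default).coef_[0, 0]

print(f"income coefficient, approved applicants only: {coef_truncated:.2f}")
print(f"income coefficient, whole population        : {coef_full:.2f}")
# The truncated fit attenuates the effect of income, so the risk of applicants
# unlike those in the historical sample is underestimated.
```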

Third, algorithms may bias results for economic reasons. Think of online automated advisors specialised in selling financial services. They can favour the products of the company giving the advice, at the expense of the consumer, if these financial products are more expensive than the market average – a situation known as price discrimination. Besides, in the context of multi-sided platforms, algorithms may favour third parties who have signed agreements with the platform. In the context of e-commerce, the European Commission recently fined Google 2.4bn euros for promoting its own products at the top of search results on Google Shopping, to the detriment of competitors. Other disputes have arisen over apps simply being delisted from Apple Store search results or downgraded in marketplaces’ search results.

Algorithms thus come with bias, which seems unpreventable. The question now is: how can bias be identified and discrimination be limited? Algorithms and artificial intelligence will indeed only be socially accepted if all actors are capable of meeting the ethical challenges raised by the use of data and following best practice.

Researchers first need to design fairer algorithms. Yet what is fairness, and which fairness rules should be applied? There is no easy answer to these questions, which have divided social scientists and philosophers for centuries. Fairness is a normative concept, and many of its definitions are incompatible. Compare, for instance, individual fairness and group fairness. One simple criterion of individual fairness is equal opportunity, the principle according to which individuals with identical capacities should be treated similarly. However, this criterion is incompatible with group fairness, according to which individuals of the same group, such as women, should be treated similarly. In other words, equal opportunity for all individuals cannot exist if a fairness criterion is applied to gender. These two notions of fairness are incompatible.
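The incompatibility is easy to see numerically. In the sketch below (synthetic scores and deliberately simplified definitions), a single common threshold treats identical capacities identically but yields unequal selection rates, while equalising selection rates forces different thresholds for each group:

```python
# Small numerical illustration of the tension between individual and group fairness.
# Synthetic data and simplified definitions, for illustration only.
import numpy as np

rng = np.random.default_rng(5)
group = np.array(["A"] * 500 + ["B"] * 500)
capacity = np.concatenate([rng.normal(0.6, 0.1, 500),     # group A scores
                           rng.normal(0.5, 0.1, 500)])    # group B scores

# Rule 1: individual fairness -- one common threshold for everyone.
selected_common = capacity >= 0.55
for g in ("A", "B"):
    rate = selected_common[group == g].mean()
    print(f"common threshold, selection rate for group {g}: {rate:.2f}")

# Rule 2: group fairness (equal selection rates) -- per-group thresholds.
for g in ("A", "B"):
    threshold = np.quantile(capacity[group == g], 0.70)    # top 30% of each group
    print(f"per-group threshold for equal rates, group {g}: {threshold:.2f}")
# Equal rates require different thresholds, so two people with the same capacity
# but different groups can receive different decisions.
```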

A second challenge faces companies, policy makers and regulators, whose duty it is to promote ethical practices – transparency and responsibility – through an efficient regulation of the collection and use of personal data. Many issues arise. Should algorithms be transparent and therefore audited? Who should be responsible for the harm caused by discrimination? Is the General Data Protection Regulation fit for algorithmic bias? How could ethical constraints be included? Admittedly they could increase costs for society at the microeconomic level, yet they could help lower the costs of unfairness and inequality stemming from an automated society that wouldn’t comply with the fundamental principles of unbiasedness and lack of systematic discrimination.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

The water footprint of a product has long been difficult to evaluate. Where does the water come from? What technology is used to process and transport it? These are among the questions researchers have to answer to better measure environmental impact.

The many layers of our environmental impact

An activity can have many consequences for the environment: its carbon footprint, water consumption, pollution, changes to biodiversity, etc. Our impacts are so complex that an entire field of research has developed to evaluate and compare them. At IMT Mines Alès, a team of researchers is working on tools to improve the way we measure our impacts and therefore provide as accurate a picture as possible of our environmental footprint. Miguel Lopez-Ferber, the lead researcher of this team, presents some of the most important research questions in improving methods of environmental evaluation. He also explains the difficulty of setting up indicators and having them approved for efficient decision-making.

 

Can we precisely evaluate all of the impacts of a product on the environment?

Miguel Lopez-Ferber: We do know how to measure some things. A carbon footprint, or the pollution generated by a product or a service. The use of phytosanitary products is another impact we know how to measure. However, some things are more difficult to measure. The impacts linked to the water consumption required in the production of a product have been extremely difficult to evaluate. For a given use, one liter of water taken from a region may generate very different impacts from a liter of water taken from another region. The type of water, the climate, and even the source of the electricity used to extract, transport and process it will be different. We now know how to do this better, but not yet perfectly. We also have trouble measuring the impact on biodiversity due to humans’ development of a territory.

Is it a problem that we cannot fully measure our impact?

MLF: If we don’t take all impacts into account, we risk not noticing the really important ones. Take a bottle of fruit juice, for example. If we only look at the carbon footprint, we will choose a juice made from locally-grown fruit, or one from a neighboring country. Transport does play a major part in a carbon footprint. However, local production may use a water source which is under greater stress than one in a country further away. Perhaps it also has a higher impact on biodiversity. We can have a distorted view of reality.

What makes evaluating the water footprint of a product difficult?

MLF: What is difficult is to first differentiate the different types of water. You have to know where the water comes from. The impact won’t be the same for water taken from a reserve under the Sahara as for water from the Rhône. The scarcity of the water must be evaluated for each production site. Another sensitive point is understanding the associated effects. In a given region, the mix of water used may correspond to 60% surface water, 30% river water and 10% underground water, but these figures do not give us the environmental impacts. Each source then has to be analyzed to determine whether taking the water has consequences, such as drying out a reserve. We also need to be able to differentiate the various uses of the water in a given region, as well as the associated socio-economic conditions, which have a significant impact on the choice of technology used in transporting and processing the water.

What can we determine in the impact of water use?

MLF: Susana Leão’s thesis, co-supervised by my colleague Guillaume Junqua, has provided a regional view of inventories. It presents the origin of the water in each region according to the various household, agricultural or industrial uses, along with the associated technologies. Before, we only had average origin data by continent: I had the average water consumption for one kilogram of steel produced in Europe, without knowing if the water came from a river or from a desalination process, for example. Things became more complicated when we looked at the regional details. We now know how to differentiate the composition of one country’s water mix from another’s, and even to differentiate between the major hydrographic basins. Depending on the data available, we can also focus on a smaller area.

In concrete terms, how does this work contribute to studying impacts?

MLF: As a result, we can differentiate between production sites in different locations. Each type of water on each site will have different impacts, and we are able to take this into account. In addition, in analyzing a product like our bottle of fruit juice, we can categorize the impacts into those which are introduced on the consumption site, in transport or waste, for example, and those which are due to production and packaging. In terms of life cycle analysis, this helps us to understand the consequences of an activity on its own territory as well as other territories, near or far.

Speaking of territories, your work also looks at habitat fragmentation, what does this mean?

MLF: When you develop a business, you need a space to build a factory. You develop roads and transform the territory. These changes disturb ecosystems. For instance, we found that modifications made to a particular surface area may have very different impacts. For example, if you simply decrease the surface area of a habitat without splitting it, the species are not separated. On the contrary, if you fragment the area, species have trouble traveling between the different habitats and become isolated. We are therefore working on methods for evaluating the distribution of species and their ability to interconnect across different fragments of habitat.

With the increasing amount of impact indicators, how do we take all of these footprints into account?

MLF: It’s very complicated. When a life cycle analysis of a product such as a computer is made, this includes a report containing around twenty impact categories: climate change, pollution, heavy metal leaching, radioactivity, water consumption, eutrophication of aquatic environments, etc. However, decision-makers would rather see fewer parameters, so they need to be aggregated into categories. There are essentially three categories: impact on human health, impact on ecosystems, and overuse of resources. Then, the decisions are made.

How is it possible to decide between such important categories?

MLF: Impact reports always raise the question of what decision-makers want to prioritize. Do they want a product or service that minimizes energy consumption? Waste production? Use of water resources? Aggregation methods are already based on value scales and strong hypotheses, meaning that the final decision is too. There is no way of setting a universal scale, as the underpinning values are not universal. The weighting of the different impacts will depend on the convictions of a particular decision-maker and the geographical location. The work involves more than just traditional engineering; it has a sociological aspect too. This is when arbitration enters the realm of politics.
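The aggregation step itself is mechanically simple; what is contested is the choice of weights. The sketch below (invented impact values, normalisation references and weights, not real life-cycle data) shows how the final score shifts with the decision-maker's priorities:

```python
# Purely illustrative sketch of weighted impact aggregation.
# Impact values, normalisation references and weights are invented for the example.
impacts = {"climate change": 12.0, "water use": 3.0, "ecotoxicity": 0.8}      # raw category scores
references = {"climate change": 10.0, "water use": 5.0, "ecotoxicity": 1.0}   # normalisation basis

def aggregate(weights):
    """Normalise each category, then combine with the chosen weights."""
    return sum(weights[c] * impacts[c] / references[c] for c in impacts)

weights_climate_first = {"climate change": 0.6, "water use": 0.2, "ecotoxicity": 0.2}
weights_water_first = {"climate change": 0.2, "water use": 0.6, "ecotoxicity": 0.2}

print("score if climate is prioritised:", round(aggregate(weights_climate_first), 2))
print("score if water is prioritised  :", round(aggregate(weights_water_first), 2))
# Changing the weights changes the score, and can change the ranking of two products.
```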

Using personalised services without privacy loss: what solutions does technology have to offer?

Online services are becoming more and more personalised. This transformation, designed to satisfy the end-user, might be seen as an economic opportunity, but also as a risk, since personalised services usually require personal data to be efficient – two perceptions that do not seem compatible. Maryline Laurent and Nesrine Kaâniche, researchers at Télécom SudParis and members of the Chair Values and Policies of Personal Information, tackle this tough issue in this article. They give an overview of how technology can solve this equation by allowing both personalisation and privacy.

[divider style=”normal” top=”20″ bottom=”20″]

This article has initially been published on the Chair Values and Policies of Personal Information website.

[divider style=”normal” top=”20″ bottom=”20″]

 

[dropcap]P[/dropcap]ersonalised services have become a major issue in the IT sector as they require actors to improve both the quality of the collected data and their ability to use them. Many services are running the innovation race, namely those related to companies’ information systems, government systems, e-commerce, access to knowledge, health, energy management, leisure and entertainment. The point is to offer end-users the best possible quality of experience, which in practice implies qualifying the relevance of the provided information and continuously adapting services to consumers’ uses and preferences.

Personalised services offer many perks, among which targeted recommendations based on interests, events, news, special offers for local services or goods, movies, books, and so on. Search engines return results that are usually personalised based on a user’s profile and actually start personalising as soon as a keyword is entered, by identifying semantics. For instance, the noun ‘mouse’ may refer to a small rodent if you’re a vet, a stay mouse if you’re a sailor, or a device that helps move the cursor on a computer screen if you’re an Internet user. In particular, mobile phone applications use personalisation; health and wellness apps (e.g. the new FitBit and Vivosport trackers) can come in very handy as they offer tips to improve one’s lifestyle, help users receive medical care remotely, or warn them on any possible health issue they detect as being related to a known illness.

How is personalisation technologically carried out?

When surfing the Internet and using mobile phone services or apps, users are required to authenticate. Authentication makes it possible to connect their digital identity with the personal data that is saved and collected from exchanges. Some software packages also include trackers, such as cookies, which are exchanged between a browser and a service provider or even a third party and make it possible to track individuals. Once an activity is linked to a given individual, a provider can easily fill up their profile with personal data, e.g. preferences and interests, and run efficient algorithms, often based on artificial intelligence (AI), to provide them with a piece of information, a service or targeted content. Sometimes, although more rarely, personalisation may rely solely on a situation experienced by a user – the simple fact that they are geolocated in a certain place can trigger an ad or targeted content to be sent to them.

What risks may arise from enhanced personalisation?

Enhanced personalisation causes risks for users in particular. Based on geolocation data only, a third party may determine that a user goes to a specialised medical centre to treat cancer, or that they often spend time at a legal advice centre, a place of worship or a political party’s local headquarters. If such personal data is sold on a marketplace[1] and thus made accessible to insurers, credit institutions, employers and lessors, their use may breach user privacy and freedom of movement. And this is just one kind of data. If these were to be cross-referenced with a user’s pictures, Internet clicks, credit card purchases and heart rate… What further behavioural conclusions could be drawn? How could those be used?

One example that comes to mind is price discrimination,[2] i.e. charging different prices for the same product or service to different customers according to their location or social group. Democracies can also suffer from personalisation, as the Cambridge Analytica scandal has shown. In April 2018, Facebook confessed that U.S. citizens’ votes had been influenced through targeted political messaging in the 2016 election.

Responsible vs. resigned consumers

As pointed out in a survey carried out by the Chair Values and Policies of Personal Information (CVPIP) with French audience measurement company Médiamétrie,[3] some users and consumers have adopted data protection strategies, in particular by using software that prevents tracking or enables anonymous online browsing… Yet this requires them to make certain efforts. According to their purpose, they either choose a personalised service or a generic one to gain a certain control over their informational profile.

What if technology could solve the complex equation opposing personalised services and privacy?

Based on this observation, the Chair’s research team carried out a scientific study on Privacy Enhancing Technologies (PETs). In this study, we list the technologies that are best able to meet needs in terms of personalised services, give technical details about them and analyse them comparatively. As a result, we suggest classifying these solutions into 8 families, which are themselves grouped into the following 3 categories:

  • User-oriented solutions. Users manage the protection of their identity by themselves by downloading software that allows them to control outgoing personal data. Protection solutions include attribute disclosure minimisation and noise addition (a minimal sketch of noise addition follows this list), privacy-preserving certification,[4] and secure multiparty calculations (i.e. distributed among several independent collaborators).
  • Server-oriented solutions. Any server we use is strongly involved in personal data processing by nature. Several protection approaches focus on servers, as these can anonymise databases in order to share or sell data, run heavy calculations on encrypted data upon customer request, implement solutions for automatic data self-destruction after a certain amount of time, or use Private Information Retrieval solutions for non-intrusive content search tools that confidentially return relevant content to customers.
  • Channel-oriented solutions. What matters here is the quality of the communication channel that connects users with servers, be it intermediated and/or encrypted, and the quality of the exchanged data, which may be damaged on purpose. There are two approaches to such solutions: securing communications and using trusted third parties as intermediators in a communication.
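As an example of the “noise addition” idea from the user-oriented family, here is a minimal illustrative sketch (not one of the surveyed tools): each user reports a value perturbed with Laplace noise, so any single disclosed value only loosely constrains the true one, while aggregates over many users remain usable.

```python
# Minimal sketch of noise addition on a disclosed attribute (illustrative only).
# The attribute, noise scale and number of users are arbitrary assumptions.
import numpy as np

def noisy_report(true_value, scale, rng):
    """Return the value plus Laplace-distributed noise with the given scale."""
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(11)
true_ages = rng.integers(18, 80, size=1000)
reported = np.array([noisy_report(age, scale=5.0, rng=rng) for age in true_ages])

print("one user's true age       :", true_ages[0])
print("what the server receives  :", round(reported[0], 1))
print("true mean vs reported mean:", round(true_ages.mean(), 1), "vs", round(reported.mean(), 1))
```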

Some PETs are strictly in line with the ‘data protection by design’ concept as they implement data disclosure minimisation or data anonymisation, as required by Article 25-1 of the General Data Protection Regulation (GDPR).[5] Data and privacy protection methods should be implemented at the earliest possible stages of conceiving and developing IT solutions.

Our state of the art shows that using PETs raises many issues. Through a cross-cutting analysis linking CVPIP specialists’ different fields of expertise, we were able to identify several of these challenges:

  • Using AI to better include privacy in personalised services;
  • Improving the performance of existing solutions by adapting them to the limited capacities of mobile phone personalised services;
  • Looking for the best economic trade-off between privacy, use of personal data and user experience;
  • Determining how much it would cost industry players to include PETs in their solutions in terms of development, business model and adjusting their Privacy Impact Assessment (PIA);
  • PETs seen as a way of bypassing or enforcing legislation.
Glioblastoma is a type of brain tumor that remains difficult to treat. Image: Christaras A / Wikimedia.

Glioblastoma: electric treatment?

At Mines Saint-Étienne, the ATPulseGliome project is looking into a new form of cancer treatment. This therapeutic approach aims to fight glioblastoma, an especially aggressive form of brain cancer, using electrical stimulation. It could eventually increase the life expectancy of glioblastoma patients compared with chemotherapy and radiotherapy treatment.

 

Glioblastoma is a rare form of brain cancer. Of the 400,000 new cases of cancer recorded each year in France, it accounts for 2,400, or 0.6%. Unfortunately, it is also one of the most severe forms: the life expectancy of glioblastoma patients is between 12 and 15 months with treatment. To improve the survival chances of those affected, Mines Saint-Étienne is leading the ATPulseGliome project, in collaboration with the University of Limoges and with funding from the EDF Foundation. The team of researchers, led by Rodney O’Connor, is testing an approach using electric fields to find new types of treatment.

Glial cells are located around the neurons in our brain, and these are the cells affected by glioblastoma. “One particular type of glial cell is affected: the astrocytes,” says Hermanus Ruigrok, a cellular biologist at Mines Saint-Étienne and researcher on the ATPulseGliome project. The normal role of astrocytes is to provide nutrients to neurons, repair brain lesions and maintain the separation between the nervous system and the blood circulation system. Like all cells, astrocytes regenerate by division. Glioblastoma arises when astrocytes behave abnormally and divide in an uncontrolled manner.

Targeting glioblastoma without affecting healthy cells

ATPulseGliome is looking into a form of treatment based on electrical stimulation of cancer cells. “Healthy astrocytes are not sensitive to electricity, but glioblastoma cells are,” explains Hermanus Ruigrok. This difference is the foundation of the treatment strategy, which targets only the cancer cells, not the astrocytes and other healthy cells. Glioblastoma cells carry a larger number of electrically sensitive proteins in their membrane.

These proteins act as doors, letting ions in and out and thus enabling communication between the cell and its environment. Under electrical stimulation, these doors malfunction: an unusually high number of ions enter the cancer cell, with harmful effects. “This strategy will allow us to destroy only the cancer cells, and not the healthy astrocytes, which are not sensitive to the electrical stimulus,” Hermanus Ruigrok highlights.

Glioblastoma cells, marked with fluorescence.


 

It is still much too early to trial this technique on patients. The ATPulseGliome team is working on glioblastoma cell lines: cells originally taken from a patient with this form of cancer, then cultivated, reproduced and isolated for in vitro experiments. By removing the complex molecular interactions present in a real patient, this first step helps to clarify the scientific objectives and test the feasibility of future in vivo trials. During this phase, the researchers will compare the different types of electrodes that could be used to apply an electric field, determine the characteristics of the electrical signal required to stimulate the cells, and measure the initial responses of glioblastoma cells to the electrical impulses.

To complete these steps, the team at Mines Saint-Étienne is working with the Institut des neurosciences de la Timone in Marseille. “We want to bring in as many specialists as possible, as the project requires a range of skills: biologists, electronics engineers, neurologists, surgeons, etc.,” Hermanus Ruigrok explains. Although it is a lengthy process, this multidisciplinary approach could increase the life expectancy of patients. “We can’t say in advance how much more effective this electric field approach will be compared with the chemotherapy and radiotherapy currently used,” the researcher explains. “Although it may be difficult to fully cure this cancer, being able to monitor and limit its development in order to significantly increase the life expectancy of those affected by glioblastoma would be a form of satisfaction.”

 

 

AI4EU: a project bringing together a European AI community

On January 10th, the AI4EU project (Artificial Intelligence for the European Union), an initiative of the European Commission, was launched in Barcelona. This 3-year project led by Thalès, with a budget of €20 million, aims to bring Europe to the forefront of the world stage in the field of artificial intelligence. While the main goal of AI4EU is to gather and coordinate the European AI community as a single entity, the project also aims to promote EU values: ethics, transparency and algorithmic explainability. TeraLab, the AI platform at IMT, is an AI4EU partner. Interview with its director, Anne-Sophie Taillandier.

 

What is the main goal of the AI4EU H2020 project?

Anne-Sophie Taillandier: To create a platform bringing together the Artificial Intelligence (AI) community and embodying European values: sovereignty, trust, responsibility, transparency, explainability… AI4EU seeks to make AI resources, such as data repositories, algorithms and computing power, available to all users in every sector of society and the economy, from citizens interested in the subject and SMEs seeking to integrate AI components to start-ups, large groups and researchers. The goal is to boost innovation, reinforce European excellence and strengthen Europe’s leading position in the key areas of artificial intelligence research and applications.

What is the role of this platform?

AST: It primarily plays a federating role. AI4EU, with 79 members in 21 EU countries, will provide a single entry point for connecting with existing initiatives and accessing the various competences and expertise pooled in a common base. It will also play a watchdog role and provide the European Commission with the key elements it needs to steer its AI strategy.

TeraLab, the IMT Big Data platform, is also a partner. How will it contribute to this project?

AST: Along with Orange, TeraLab coordinates the “Platform Design & Implementation” work package. We provide users with experimentation and integration tools that are easy to use without prior theoretical knowledge, which accelerates the start-up phase for projects developed using the platform. For common questions that arise when launching a new project, such as the necessary computing power, data security, etc., TeraLab offers well-established infrastructure that can quickly provide solutions.

Which use cases will you work on?

AST: The pilot use cases focus on public services, the Internet of Things (IoT), cybersecurity, health, robotics, agriculture, the media and industry. These use cases will be supplemented by open calls launched over the course of the project, targeting companies and businesses that want to integrate platform components into their activities. They could benefit from the sub-grants provided for in the AI4EU framework: the European Commission funds the overall project, which in turn funds companies proposing convincing projects out of a dedicated budget of €3 million.

Ethical concerns represent a significant component of European reflection on AI. How will they be addressed?

AST: They certainly represent a central issue. The project governance will rely on a scientific committee, an industrial committee and an ethics committee that will ensure transparency, reproducibility and explainability by means of tools such as charters, indicators and labels. Far from being an obstacle to business development, the emphasis on ethics creates added value and a distinguishing feature for this platform and community. The guarantee that data will be protected and used in an unbiased manner represents a competitive advantage for the European vision. Beyond data protection, other ethical aspects such as gender parity in AI will also be taken into account.

What will the structure and coordination look like for this AI community initiated by AI4EU?

AST: The project members will meet at 14 events in 14 different countries to gather as many stakeholders as possible throughout Europe. Coordinating the community is an essential aspect of this project, and weekly meetings are also planned: every Thursday morning, as part of a “world café”, participants will share information and feedback and engage in discussions between suppliers and users. A digital collaborative platform will also be set up to facilitate interactions between stakeholders. In other words, we are sure to keep in touch!

 

AI4EU consortium members