Since the enthusiasm for AI in healthcare brought on by IBM’s Watson, many questions on bias and discrimination in algorithms have emerged. Photo: Wikimedia.

Ethical algorithms in health: a technological and societal challenge

The possibilities offered by algorithms and artificial intelligence in the healthcare field raise many questions. What risks do they pose? How can we ensure that they have a positive impact on the patient as an individual? What safeguards can be put in place to ensure that the values of our healthcare system are respected?

 

A few years ago, Watson, IBM’s supercomputer, turned to the health sector and particularly oncology. It has paved the way for hundreds of digital solutions, ranging from algorithms for analyzing radiology images to more complex programs designed to help physicians in their treatment decisions. Specialists agree that these tools will spark a revolution in medicine, but there are also some legitimate concerns. The CNIL, in its report on the ethical issues surrounding algorithms and artificial intelligence, stated that they “may cause bias, discrimination, and even forms of exclusion”.

In the field of bioethics, four basic principles were announced in 1978: justice, autonomy, beneficence and non-maleficence. These principles guide research on the ethical questions raised by new applications for digital technology. Christine Balagué, holder of the Smart Objects and Social Networks chair at the Institut Mines-Télécom Business School, highlights a pitfall, however: “the issue of ethics is tied to a culture’s values. China and France for example have not made the same choices in terms of individual freedom and privacy”. Regulations on algorithms and artificial intelligence may therefore not be universal.

However, we currently live in a global system with no effective barrier to the dissemination of IT programs. The report by the CCNE and CERNA on digital technology and health suggests that legislation imposed in France should not be so stringent as to restrict French research, as this would risk pushing businesses in the healthcare sector towards digital solutions developed in other countries, where safety and ethics criteria may be subject to even less oversight.

Bias, value judgments and discrimination

While some see algorithms as flawless, objective tools, Christine Balagué, who is also a member of CERNA and the DATAIA Institute, highlights their weaknesses: “the relevance of the results of an algorithm depends on the information it receives in its learning process, the way it works, and the settings used”. Bias may be introduced at any of these stages.

Firstly, in the learning data: there may be an issue of representation, as with pharmacological studies, which are usually carried out on 20-40-year-old Caucasian men. The results establish the effectiveness and tolerance of the medicine for this population, but they are not necessarily applicable to women, the elderly, etc. There may also be an issue of data quality: precision and reliability are not necessarily consistent from one source to another.

Data processing, the “code”, also contains elements which are not neutral and may reproduce value judgments or discriminations made by their designers. “The developers do not necessarily have bad intentions, but they receive no training in these matters, and do not think of the implications of some of the choices they make in writing programs” explains Grazia Cecere, economics researcher at the Institut Mines-Télécom Business School.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

In the field of medical imaging, for example, delineating an area may be a subject of contention: a medical expert will tend to classify uncertain images as “positive” so as to avoid missing a potential anomaly that could be cancer, which increases the number of false positives. Conversely, a researcher will tend to maximize the accuracy of their tool, at the cost of more false negatives. They do not have the same objectives, and the way data are processed reflects this value judgment.
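To make the point concrete, here is a minimal, hypothetical sketch (in Python, not drawn from any real diagnostic tool; the scores, labels and threshold values are invented) of how that value judgment surfaces in code: moving a single decision threshold trades missed anomalies (false negatives) against false alarms (false positives).

```python
# Illustrative sketch: how a decision threshold on a classifier's output
# score encodes the trade-off between false positives and false negatives.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for 1,000 images: 100 true anomalies, 900 normal cases.
labels = np.array([1] * 100 + [0] * 900)
scores = np.where(labels == 1,
                  rng.normal(0.7, 0.15, labels.size),   # anomalies score higher on average
                  rng.normal(0.4, 0.15, labels.size))

def confusion(threshold):
    predicted = scores >= threshold
    false_negatives = np.sum((predicted == 0) & (labels == 1))  # missed anomalies
    false_positives = np.sum((predicted == 1) & (labels == 0))  # healthy cases flagged
    return false_negatives, false_positives

# A clinician's cautious setting versus a setting tuned for the tool's accuracy.
for threshold in (0.3, 0.6):
    fn, fp = confusion(threshold)
    print(f"threshold={threshold}: missed anomalies={fn}, false alarms={fp}")
```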

Security, loyalty and opacity

The security of medical databases is a hotly-debated subject, with the risk that algorithms may re-identify data made anonymous and may be used for malevolent or discriminatory purposes (by employers, insurance companies, etc.). But the security of health data also relies on individual awareness. “People do not necessarily realize that they are revealing critical information in their publications on social networks, or in their Google searches on an illness, a weight problem, etc.”, says Grazia Cecere.

Applications labeled for “health” purposes are often intrusive and gather data which could be sold on to potentially malevolent third parties. But the data collected will also be used for categorization by Google or Facebook algorithms. Indeed, the main purpose of these companies is not to provide objective, representative information, but rather to make profit. In order to maintain their audience, they need to show that audience what it wants to see.

The issue here is in the fairness of algorithms, as called for in France in 2016 in the law for a digital republic. “A number of studies have shown that there is discrimination in the type of results or content presented by algorithms, which effectively restricts issues to a particular social circle or a way of thinking. Anti-vaccination supporters, for example, will see a lot more publications in their favor” explains Grazia Cecere. These mechanisms are problematic, as they get in the way of public health and prevention messages, and the most at-risk populations are the most likely to miss out.

The opaque nature of deep learning algorithms is also an issue for debate and regulation. “Researchers have created a model for the spread of a virus such as Ebola in Africa. It appears to be effective. But does this mean that we can deactivate WHO surveillance networks, made up of local health professionals and epidemiologists who come at a significant cost, when no-one is able to explain the predictions of the model?” asks Christine Balagué.

Researchers from both hard sciences and social sciences and humanities are looking at how to make these technologies responsible. The goal is to be able to directly incorporate a program which will check that the algorithm is not corrupt and will respect the principles of bioethics. A sort of “responsible by design” technology, inspired by Asimov’s three laws of robotics.

 

Article initially written in French by Sarah Balfagon, for I’MTech.

An example of the micro-structures produced using a single-beam laser nano printer by the company Multi-Photon Optics, a member of the consortium.

Nano 3D Printers for Industry

The 3-year H2020 project PHENOmenon, launched in January 2018, is developing nano 3D printers capable of producing micro and nano-structures (particularly those with an optical function), while adhering to limited production times. Kevin Heggarty is a researcher at IMT Atlantique, one of the project partners along with three other European research institutes and eight industrial partners, including major groups and SMEs. He offers a closer look at this project and the scientific challenges involved.

 

What is the goal of the H2020 PHENOmenon project?

Kevin Heggarty: The goal of this project is to develop nano 3D printers for producing large, high-resolution objects. The term “large” is relative, since here we are referring to objects that only measure a few square millimeters or centimeters with nanometric resolution—one nanometer measures one millionth of a millimeter. We want to be able to produce these objects within time frames compatible with industry requirements.

What are the scientific obstacles you must overcome?

KH: Currently there are nano 3D printers that work with a single laser beam. The manufacturing times are very long. The idea with PHENOmenon is first to project hundreds of laser beams at the same time. We are currently able to simultaneously project over one thousand. The long-term goal is to project millions of laser beams to significantly improve production speeds.

What inspired the idea for this project?

KH: Parallel photoplotting is an area of expertise that has been developed in IMT Atlantique laboratories for over 15 years. This involves using light beams to trace patterns on photosensitive materials, like photographic film. Up until now, this was done using flat surfaces. The chemistry laboratory of ENS Lyon has developed highly sensitive material used to produce 3D objects. It was in our collaboration with this laboratory that we decided to test an idea—that of combining parallel photoplotting with the technology from ENS Lyon to create a new manufacturing process.

After demonstrating that it was possible to obtain hundreds of cubic microns by simultaneously projecting a large number of laser beams on highly sensitive material, we reached out to AIMEN, an innovation and technology center specialized in advanced manufacturing materials and technologies located in Vigo, Spain. Their cutting-edge equipment for laser machining is well-suited to the rapid manufacturing of large objects. With its solid experience in applying for and leading European projects, AIMEN became the coordinator of PHENOmenon. The other partners are industrial stakeholders, the end users of the technology being developed in the context of this project.

What expectations do the industrial partners have?

KH: Here are a few examples: The Fábrica Nacional de Moneda y Timbre, a public Spanish company, is interested in manufacturing security holograms on bank notes. Thalès would like to cover the photovoltaic panels it markets with micro and nano-structured surfaces produced using nano-printers. The PSA Group wants to equip the passenger compartment of its vehicles with holographic buttons. Design LED will introduce these micro-structured 3D components in its lighting device, a plastic film used to control light…

What are the next steps in this project?

KH: The project partners meet twice a year. IMT Atlantique will host one of these meetings on its Brest campus in the summer of 2020. In terms of new developments in research, the chemistry laboratory of ENS Lyon is preparing a new type of resin. At IMT Atlantique, we are continuing our work. We are currently able to simultaneously project a large number of identical laser beams. The goal is to succeed in projecting different types of laser beams and then produce prototype nano-structures for the industrial partners.

 

 

What are the best practices for achieving gender balance and including women in technological fields?

In IT professions, diversity is all about inclusion, not exclusion

Ideas about diversity are often fallacious. Sociology has shown that women are seen as being responsible for their own inclusion in places where they are a minority. Chantal Morley is conducting research in this field at Institut Mines-Télécom Business School. She is especially interested in diversity in technological fields, whether they be companies, universities, engineering schools, etc. In this interview for I’MTech, she goes over the right approaches in promoting diversity, but also the wrong ones. 

 

You suggest that we should no longer approach the issue of diversity through a filter of exclusion, and instead through inclusion. What is the difference?

Chantal Morley: This idea comes from seeing the low impact of the measures taken in the last 20 years. They are aimed at women solely in the form of: “you must keep informed, you have to make an effort to be included”.  But men don’t have to make these efforts, and history tells us that at one point in time, women didn’t have to either. These calls and injunctions target women outside working spaces or territories: this is what we call the exclusion filter. The idea is that women are excluded and should solve the problem themselves. Thinking in terms of inclusion means looking at practices in companies, discussion spaces and education. It is about questioning equality mechanisms, attitudes and representations.

Read on I’MTech: Why women have become invisible in IT professions

In concrete terms, what difference will this make?

CM: The reason women do not enter IT professions is not because they are not interested or that they don’t make the effort, but because the field is a highly masculine one. By looking at what is going on inside an organization, we see that technical professions, from which women have long been excluded, affirm masculinity. Still today, there is a latent norm, often subconsciously activated, which tells us that a man will be more at ease with technical issues. Telling women that they are foreign to these issues, through small signs, contributes to upholding this norm. This is how we have ended up with a masculine technical culture in companies, schools and universities. This culture is constantly reinforced by everyday interactions – between students, with teachers, between teachers, in institutional communication. The impact of these interactions is even stronger when their socio-sexual nature goes unquestioned. This is why practices must be questioned, which implies looking at what is going on inside organizations.

What are the obstacles to diversity in technological fields?

CM: Organizations send out signals, marking their territory. On company websites, it is often men who are represented in high-responsibility jobs. In engineering schools, representations are also heavily masculine, from school brochures to gala posters produced by student associations. The masculine image dominates. There is also a second dimension, that of recognition. In technology professions, women are often suspected of illegitimacy, they are often required to prove themselves. Women who reach a high status in the hierarchy of a company, or who excel in elite courses, feel this discreet suspicion and it can make them doubt themselves.

What does a good inclusion policy involve?

CM: We carried out a sociological study on several inclusion policies used in organizations. A successful example is that of Carnegie-Mellon University in the United States. They were first asked to undertake an analysis of their practices. They realized that they were setting up barriers to women entering technology courses. For example, in selecting students, they were judging applicants on their prior experience in IT, things that are not taught in schools. They expected students to have skills inherited from a hacker culture or other social context favoring the development of these skills. However, the university realized that not only are these skills usually shared in masculine environments, but also that they are not a determining factor in successful completion of studies. They reviewed their admission criteria. This is a good example of analyzing the space and organization in terms of inclusion. In one year, the percentage of female students in IT rose from 7% to 16%, reaching a stable level of 40% after four years. The percentage of female applicants accepted who then chose to enroll more than doubled in a few years.

Read on I’MTech: Gender diversity in ICT as a topic of research

Once women have joined these spaces, is the problem solved?

CM: Not at all. Once again, Carnegie-Mellon University is a good example. On average, female students were giving up their IT studies twice as often as men. This is where notions of culture and relations come in. New students were subject to rumors about quotas. The men believed the women were only there to satisfy statistics, because they themselves had been conditioned by clichés on the respective skills of men and women in IT. The university’s response was a compulsory first-year course on gender and technologies, to break down preconceived ideas.

How necessary is it to use compulsory measures?

CM: There are two major reasons. On the one hand, stereotypes are even stronger when they are activated subconsciously: we therefore have to create conditions under which we can change the views of people within a group. In this case, the compulsory course on gender or the differentiated first-year courses enable all students to take the same courses in the second year, boost self-confidence and create a common knowledge base. The measure improved the group’s motivation and their desire to move forward. Cultural change is generally slow, especially when the non-included population is a small minority. This is why we have to talk about quotas. Everyone is very uneasy with this idea, but it is an interim solution which leads to rapid progress. For example, the Norwegian University of Science and Technology (NTNU), another case of successful inclusion, decided to open additional places for women only. Along with a very clear communication strategy, this approach saw female student numbers rise from 6% to 38% in one year and led to the creation of a “community” of female engineers. The percentage of women admitted stabilized, and the quotas were abandoned after three years. The issue of separate spaces is also interesting. Carnegie-Mellon, for example, launched an association for female IT students which it still supports. With help from the school’s management, this association organizes events with female professionals, as women felt excluded from the traditional alumni networks. It has become the largest student association on campus, and now that the transition period is over, it is gradually opening up to other forms of diversity, such as ethnic diversity.

Is there such a thing as bad inclusion measures?

CM: Generally speaking, all measures aimed at promoting women as women are problematic. The Norwegian University of Science and Technology is an example of this. In 1995, it launched an inclusion program that attracted women by taking the “difference” approach, the idea that they were complementary to men. This program was statistically successful: there was an increase in the number of women in technology courses. Sociological studies also showed that women felt wanted in these training spaces. But the studies also showed that these women were uncomfortable: the notion of complementarity implied that the university considered women’s strong points to be different from men’s. This is not true, and here we see the fundamental difference with Carnegie-Mellon, which attracted women by breaking down this type of cliché.

Since 1995, has this stance on complementarity changed?

CM: At the Norwegian University of Science and Technology, yes. After the reports from female students, the approach was largely modified. Unfortunately, the idea of complementarity is still too present, especially in companies. All too often, we hear things like “having a woman in a team improves communication” or “a feminine presence softens the atmosphere”. Not only is there no sociological reality behind these ideas, but also they impose qualities women are expected to have. This is the performative side of gender: we conform to what is considered appropriate and expected of us. A highly talented woman in a role which does not require any particular communication skills will be judged preferentially on these criteria rather than on her actual tasks. This representation must be broken down. Including women is not important because they improve the atmosphere in a team. It is important because they represent as large a talent pool as men.

 

Digital Innovation Hub

Artificial Intelligence: TeraLab becomes a European “Digital Innovation Hub”

TeraLab has become one of the 30 Digital Innovation Hubs (DIH) in artificial intelligence selected by the European Union. This new recognition consolidates the place of IMT’s TeraLab platform in the field of AI, as well as its impact on business transformation.

 

On February 28 last year, the European Commission, via the AI DIH Network project, recognized TeraLab as one of the 30 Digital Innovation Hubs specializing in artificial intelligence in Europe (along with DigiHall – of which IMT is a member – and DigiWest in France).

What is a DIH?

DIHs are one-stop shops that help businesses become more competitive in their production processes, products or services through the use of digital technologies. DIHs offer direct access to knowledge, expertise and the most recent technologies, in order to help their partners try, test and experiment with digital innovations.

TeraLab has more than 60 projects in research, innovation and education

The label thus rewards the quality of TeraLab, a “trusted third-party” platform developed at IMT. It offers state-of-the-art tools for collaboration between businesses, startups, SMEs and researchers, with the aim of supporting and accelerating projects in AI and big data.

Since January 2014, TeraLab has been involved in more than 60 projects in research, innovation and education. It boasts significant European recognition with BDVA’s iSpace label and also with European projects such as BOOST 4.0 (connected factories), MIDIH (Manufacturing Industry Digital Innovation Hubs) or AI4EU (European Platform for AI on Demand).

Learn more about TeraLab

Energysquare: charging your telephone has never been so simple!

Start-up company Energysquare has created a wireless charging device for tablets and cellphones. Using a simple mechanism combining a sticker and a metal plate, devices can be charged by conduction. Energysquare, which is incubated at Télécom ParisTech, will soon see its technology tested in hotels. The company now also aims to export and adapt its product to other smart objects.

 

Are you fed up with the jumble of wires in the house or on your desk? You probably never deliberately knotted your phone charger, and yet, when you want to use it, the wire is all tangled up! This article brings you good news: your fight against electric cables has come to an end! Start-up company Energysquare, incubated at Télécom ParisTech since 2015, has revolutionized electrical charging for mobile devices such as smartphones and tablets by doing away with conventional chargers. Your devices can now all be charged together on a single pad plugged into the mains.

“We based our work on the fact that the devices we use spend a lot of time on the surfaces around us, such as a desk or bedside table. Our idea was to be able to charge them over a whole surface and no longer with a cable at a single point,” explains Timothée Le Quesne, one of the designers of the Energysquare concept. We took a closer look at this vital accessory for future smart houses.

Easy-to-use conductive charging

The first question that comes to mind is: how does it work? The technology is composed of two parts. Firstly, the pad, which is a 30×30-centimetre metal plate with independent conductive squares. It is plugged into the mains and can be placed on any surface as desired. The second part is a sticker comprising a flexible conductor with two electrodes and a connector adapted to the charging socket of your device, whether Android or iOS. The sticker is stuck directly on the back of your telephone. No surprises so far… but it is when the two parts come into contact that the magic happens.

When the electrodes come in contact with the charging surface, the system detects the device and sends it a signal to check that it is a battery. An electrical potential of 5 volts is produced between the two squares connected to the electrodes, allowing conductive charging. “The geometric format of the pad has been designed so that the two squares are automatically in contact with the electrodes. That way, there is no need to check anything and the device charges automatically without emitting any electromagnetic waves. Conversely, when no devices are detected, the pad automatically goes on standby,” explains Timothée Le Quesne.

But what happens if another device is placed on the pad? “The surface is naturally inert. The cleverness of the system lies in the fact that it can detect the object and identify whether it is a battery to be charged or not. Even if you spill water on it, it won’t have any effect. It is water resistant and protected against infiltration,” explains the young entrepreneur. No electric current is transmitted to your cup of coffee placed absentmindedly on the surface either. Although the system uses conductive charging, it does not emit any heat when it is running. Heat is dispersed across the surface like in a radiator, even if several devices are charging at once at the same speed as when plugged into the mains. Charging a device becomes so practical you could easily forget your phone lying on the surface. But this is not a problem, because the system goes back into standby once the device is fully charged.
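As a purely illustrative aside, the behaviour described above can be summed up in a few lines of decision logic. The sketch below (Python, entirely hypothetical: it is not Energysquare’s firmware, and every name in it is invented) captures the choice the pad has to make for whatever lands on its surface.

```python
# Hypothetical sketch of the pad's decision logic as described in the article:
# only apply the 5 V charging potential when the detected object identifies
# itself as a battery that still needs charging.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    is_battery: bool            # result of the identification handshake
    fully_charged: bool = False

def pad_state(detected):
    """Return the pad's behaviour for whatever is (or isn't) on the surface."""
    if detected is None:
        return "standby: no device detected"
    if not detected.is_battery:
        return "inert: object is not a battery (cup of coffee, keys...)"
    if detected.fully_charged:
        return "standby: device fully charged"
    return "charging: 5 V applied between the two squares under the electrodes"

print(pad_state(None))
print(pad_state(DetectedObject(is_battery=False)))
print(pad_state(DetectedObject(is_battery=True)))
print(pad_state(DetectedObject(is_battery=True, fully_charged=True)))
```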

Hotels soon to be using this technology

“We most need electricity when we’re away. We often have low battery in airports, cafes etc… This is a B2B market that we aim to invest in,” explains Timothée Le Quesne. For the moment, Energysquare is addressing the hospitality sector with tests to be carried out in France over the coming weeks. The principle is simple: the pad is installed on a bedside table and the stickers are provided at reception.

But the start-up aims to go even further. Why place the pad on a surface when it could be directly integrated into the furniture it sits on? “Our only limitation is preserving the metal surface of the pad to allow it to charge. We can still add a bit of style though by giving it a brushed effect, for example. Working with furniture manufacturers offers us good prospects. We can already imagine surfaces in meeting rooms covered with our device! We can also give the pad any form we like, with larger or smaller sections according to the device it is designed for,” Timothée Le Quesne continues.

With such a universal system, we can reasonably ask what the start-up’s aims are for the international market. “In January we will be participating in CES, an electronics show in the USA, where we will have a stand to display and demonstrate our technology.” This welcome overseas publicity is hardly a surprise since the start-up saw positive interest in its technology during a fundraiser on Kickstarter in June 2016, with 1/3 of purchasers in Asia and 1/3 in America. “As soon as we have finished validating our tests in hotels in France, we will turn to the foreign market,” affirms Timothée Le Quesne. But don’t worry, Energysquare hasn’t forgotten private individuals, and will launch the online sale of its technology in 2017.

Smart objects: a promising market to conquer

“One of our aims is to become a standard charging device for all smart objects,” admits Timothée Le Quesne. This is a promising future market, since 20 billion smart objects are forecast to be manufactured between now and 2020… All the more technology for us to spend time plugging in to charge! The start-up has already carried out tests with positive results on smart speakers and e-cigarettes, but the shape of certain objects, such as smart headphones, prevents the Energysquare system from adapting to them. “For some devices, the electrodes will have to be integrated directly by the manufacturer.”

Nevertheless, there is one item that we use every day which would definitely benefit from this sort of charging system: laptops! The main difficulty, unlike other objects, is the power that needs to be generated by the system. “We need to change certain components to obtain more power through the pad and adapt it to laptops. It is something that is scheduled for 2017,” affirms Timothée Le Quesne. This is the first obstacle to overcome, especially since, when we asked the young entrepreneur what the future for Energysquare looked like 5 to 10 years from now, he replied: “we would like to be able to not only charge devices, but also power household appliances directly. We want to get rid of electric cables and replace them with surfaces that will power your kettle and charge your phone.”

Electroencephalogram: a brain imaging technique that is efficient but limited in terms of spatial resolution.

Technology that decrypts the way our brain works

Different techniques are used to study the functioning of our brain, including electroencephalography, magnetoencephalography, functional MRI and spectroscopy. The signals are processed and interpreted to analyze the cognitive processes in question. EEG and MRI are the two most commonly used techniques in cognitive science. Their performance offers hope but also raises concerns. What is the current state of affairs in brain function analysis, and what are its limits?

 

Nesma Houmani is a specialist in electroencephalography (EEG) signal analysis and processing at Télécom SudParis. Neuron activity in the brain generates electrical changes which can be detected on the scalp. These are recorded using a cap fitted with strategically placed electrodes. The advantages of EEG are that it is inexpensive, easily accessible and noninvasive for the subjects being studied. However, it generates a complex signal composed of oscillations associated with baseline brain activity when the patient is awake and at rest, transient signals linked to activations generated by the test, and variable background noise caused, notably, by involuntary movements by the subject.

The level of noise depends, among other things, on the type of electrodes used, whether dry or with gel. While gel electrodes reduce the detection of signals not emitted by brain activity, they take longer to place, may cause allergic reactions and require the patient to wash thoroughly with shampoo after the examination, making it more complicated to carry out these tests outside hospitals. Dry electrodes are being introduced in hospitals, but the signals recorded have a high level of noise.

The researcher at Télécom SudParis uses machine learning and artificial intelligence algorithms to extract EEG markers. “I use information theory combined with statistical learning methods to process EEG time series of a few milliseconds.” Information theory holds that signals with higher entropy contain more information. In other words, when the probability of an event occurring is low, the signal contains more information and is therefore more likely to be relevant. Nesma Houmani’s work makes it possible to remove spurious signals from the trace and interpret the recorded EEG data more accurately.
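As a rough illustration of the idea (not Nesma Houmani’s actual pipeline, whose details are not given here; the sampling rate, window length and simulated trace are all invented), the sketch below computes the Shannon entropy of short windows of a toy EEG signal. Windows with higher entropy would be treated as carrying more information.

```python
# Minimal sketch: windowed Shannon entropy as an information-theoretic marker
# on a simulated EEG trace.
import numpy as np

def shannon_entropy(window, n_bins=16):
    """Estimate the Shannon entropy (in bits) of one signal window via a histogram."""
    counts, _ = np.histogram(window, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

fs = 256                                  # hypothetical sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)  # toy 10 Hz rhythm + noise

window_len = 64                           # ~250 ms windows
entropies = [shannon_entropy(eeg[i:i + window_len])
             for i in range(0, eeg.size - window_len, window_len)]
print([round(h, 2) for h in entropies])   # higher values flag more "informative" windows
```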

A study published in 2015 showed that this technique allowed better definition of the EEG signal in the detection of Alzheimer’s disease. Statistical modeling allows consideration of the interaction between the different areas of the brain over time. As part of her research on visual attention, Nesma Houmani uses EEG combined with an eye tracking device to determine how a subject engages in and withdraws from a task: “The participants must observe images on a screen and carry out different actions according to the image shown. A camera is used to identify the point of gaze, allowing us to reconstitute eye movements,” she explains. Other teams use EEG for emotional state discrimination or for understanding decision-making mechanisms.

EEG provides useful data because it has a temporal resolution of a few milliseconds. It is often used in applications for brain-machine interfaces, allowing a person’s brain activity to be observed in real time with just a few seconds’ delay. “However, EEG is limited in terms of spatial resolution,” explains Nesma Houmani. This is because the electrodes are, in a sense, placed on the scalp in two dimensions, whereas the folds in the cortex are three-dimensional and activity may come from areas that are further below the surface. In addition, each electrode measures the sum of synchronous activity for a group of neurons.

The most popular tool of the moment: fMRI

Conversely, functional MRI (fMRI) has excellent spatial resolution but poor temporal resolution. It has been used a lot in recent scientific studies but is costly and access is limited by the number of devices available. Moreover, the level of noise it produces when in operation and the subject’s position lying down in a tube can be stressful for participants. Brain activity is reconstituted in real time by detecting a magnetic signal linked to the amount of blood transferred by micro-vessels at a given moment, which is visualized over 3D anatomical planes. Although activations can be accurately situated, hemodynamic variations occur a few seconds after the stimulus, which explains why the temporal resolution is lower than that of EEG.

fMRI produces section images of the brain with good spatial resolution but poor temporal resolution.


 

Nicolas Farrugia has carried out several studies with fMRI and music. He is currently working on applications for machine learning and artificial intelligence in neuroscience at IMT Atlantique. “Two main paradigms are being studied in neuroscience: coding and decoding. The first aims to predict brain activity triggered by a stimulus, while the second aims to identify the stimulus from the activity,” the researcher explains. A study published in 2017 showed the possibilities of fMRI associated with artificial intelligence in decoding. Researchers asked subjects to watch videos in an MRI scanner for several hours. A model was then developed using machine learning, which was able to reconstruct a low-definition image of what the participant saw based on the signals recorded in their visual cortex. fMRI is a particularly interesting technique for studying cognitive mechanisms, and many researchers consider it the key to understanding the human brain, but it nevertheless has its limits.

Reproducibility problems

Research protocols have changed recently. Nicolas Farrugia explains: “The majority of publications in cognitive neuroscience use simple statistical models based on functional MRI contrasts, subtracting the activations recorded in the brain for two experimental conditions A and B, such as reading versus rest.” But several problems have led researchers to modify this approach. “Neuroscience is facing a major reproducibility challenge,” admits Nicolas Farrugia. Various limitations have been identified in publications, such as small sample sizes, high levels of noise and a separate analysis for each part of the brain, without accounting for interactions or the relative intensity of activation in each area.

These reproducibility problems are leading researchers to change methods, “from an inference technique in which all available data is used to obtain a model that cannot be generalized, to a prediction technique in which the model learns from part of the data and is then tested on the rest.” This approach, which is the basis for machine learning, allows the model’s relevance to be checked against reality. “Thanks to artificial intelligence, we are seeing the development of computational methods that were not possible with standard statistics. In time, this will allow researchers to predict what type of image or what piece of music a person is thinking of based on their brain activity.”
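The methodological shift he describes can be illustrated with a toy example. The sketch below (assuming Python with NumPy and scikit-learn; the data and model are invented for illustration, not taken from any neuroscience study) contrasts scoring a model on the data it was fitted to with scoring it on held-out data.

```python
# Illustrative sketch: "inference" (fit and score on all data) versus
# "prediction" (learn on part of the data, test on the rest).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 50))            # e.g. 200 scans x 50 brain-region features
y = 0.8 * X[:, 0] + rng.standard_normal(200)  # signal carried by one feature, plus noise

# "Inference" style: fit and score on the same data -> optimistic estimate.
model = Ridge(alpha=1.0).fit(X, y)
print("score on training data:", round(r2_score(y, model.predict(X)), 2))

# "Prediction" style: learn on part of the data, evaluate on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("score on held-out data:", round(r2_score(y_test, model.predict(X_test)), 2))
```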

Unfortunately, there are also reproducibility problems in signal processing with machine learning. Deep learning, which is based on artificial neural networks, is currently the most popular technique because it is very effective in many applications, but it requires adjusting hundreds of thousands of parameters using optimization methods. Researchers tend to adjust the parameters of their model as they evaluate it, repeatedly on the same data, which distorts the generalization of the results. The use of machine learning also raises another problem for signal detection and analysis: the interpretability of the results. Knowledge of deep learning mechanisms is currently very limited and is a field of research in its own right, so our understanding of how human neurons function could in fact come from our understanding of how deep artificial neurons function. A strange sort of mise en abyme!

 

Article written by Sarah Balfagon, for I’MTech.

 


The “Miharu Takizakura”, a weeping cherry tree over a thousand years old, grows in soil contaminated by the Fukushima accident.

The cherry trees of Fukushima

Written by Franck Guarnieri, Aurélien Portelli and Sébastien Travadel, Mines ParisTech.
The original version was published on The Conversation.


It’s 2019 and for many, the Fukushima Daiichi nuclear disaster has become a distant memory. In the West, the event is considered to be over. Safety standards have been audited and concerns about the sector’s security and viability officially addressed.

Still, the date remains traumatic. It reminds us of our fragility in the face of overpowering natural forces, and those that we unleashed ourselves. A vast area of Japan was contaminated, tens of thousands of people were forced from their homes, businesses closed. The country’s nuclear power plants were temporarily shut down and fossil fuel consumption rose sharply to compensate.

There’s also much work that remains to be done. Dismantling and decontamination will take several decades and many unprecedented challenges remain. Reactors are still being cooled, spent fuel must be removed, radioactive water has to be treated. Radioactivity measurements on site cannot be ignored, and are a cause for concern for the more than 6,000 people working there. Still, the risk of radiation may be secondary given the ongoing risk of earthquakes and tsunamis.

Surprisingly, for many Japanese the disaster has a different connotation – it’s seen as having launched a renaissance.

Rebuilding the world

Our research team became aware of the need to re-evaluate our ideas about the Fukushima Daiichi nuclear disaster during a visit in March 2017. As we toured the site, we discovered a hive of activity – a place where a new relationship between man, nature and technology is being built. The environment is completely artificial: all vegetation has been eradicated and the neighbouring hills are covered in concrete. Yet, at the heart of this otherworldly landscape there were cherry trees – and they were in full bloom. Curious, we asked the engineer who accompanied us why they were still there. His answer was intriguing: the trees will not be removed, even though they block access routes for decontamination equipment.

Cherry trees have many meanings for the Japanese. Since ancient times, they have been associated with 気 (ki, or life force) and, in some sense, reflect the Japanese idea of time as never-ending reinvention. A striking example: in the West we exhibit objects in museums, permanently excluding them from everyday life. This is not the case in Japan. For example, the treasures making up the Shōsō-in collection are only exhibited once each year, and the purpose is not to represent the past but to show that these objects are still part of the present. Another illustration is the Museum Meiji-mura, where more than 60 buildings from the Meiji era (1868-1912) have been relocated. The idea of ongoing reinvention is manifested in buildings that have retained their original function: visitors can send a postcard from a 1909 post office or ride on an 1897 steam locomotive.

We can better understand the relationship between this perception of time and events at Fukushima Daiichi by revisiting the origins of the country’s nuclear-power industry. The first facility was a British Magnox reactor, operational from 1966 to 1998. In 1971, construction of the country’s first boiling-water reactor was supervised by the US firm General Electric. These examples illustrate that for Japan, nuclear power is a technology that comes from elsewhere.

When the Tōhoku earthquake and tsunami struck in 2011, Japan’s inability to cope with the unfolding events at the Fukushima Daiichi nuclear power plant stunned the world. Eight years on, for the Japanese it is the nation’s soul that must be resuscitated: not by the rehabilitation of a defective “foreign” object, but by the creation of new, “made in Japan” technologies. Such symbolism not only refers to the work being carried out by the plant’s operator, the Tokyo Electric Power Company (TEPCO), but also reflects Japanese society.

A photo selected for the NHK Fukushima cherry-tree competition. NHK/MCJP

The miraculous cherry tree

Since 2012, the Japanese public media organisation NHK has organised the “Fukushima cherry tree” photo competition to symbolise national reconstruction. Yumiko Nishimoto’s “Sakura Project” has the same ambition. Before the Fukushima disaster, Nishimoto lived in the nearby town of Naraha. Her family was evacuated and she was only able to return in 2013. Once home, she launched a national appeal for donations to plant 20,000 cherry trees along the prefecture’s 200-kilometre coastline. The aim of the 10-year project is simply to restore hope among the population and the “determination to create a community”, Nishimoto has said. The idea captured the country’s imagination and approximately a thousand volunteers turned up to plant the first trees.

More recently, the “Miharu Takizakura” cherry tree has made headlines. More than 1,000 years old and growing in land contaminated by the accident, its presence is seen as a miracle and it attracts tens of thousands of visitors. The same sentiment is embodied in the Olympic torch relay, which will start from Fukushima on March 26, 2020, for a 121-day trip around Japanese prefectures during the cherry-blossom season.

Fukushima, the flipside of Chernobyl?

This distinctly Japanese perception of Fukushima contrasts with its interpretation by the West, and suggests that we re-examine the links between Fukushima and Chernobyl. Many saw the Fukushima disaster as Chernobyl’s twin – another example of the radioactive “evil”, a product of the industrial hubris that had dug the grave of the Soviet Union.

In 1986 the fatally damaged Chernobyl reactor was encased in a sarcophagus and the surrounding area declared a no-go zone. Intended as a temporary structure, in 2017 the sarcophagus was in turn covered by the “New Safe Confinement”, a monumental structure designed to keep the site safe for 100 years. This coffin in a desert continues to terrify a population that is regularly told that it marks the dawn of a new era for safety.


Two IAEA agents examine work at Unit 4 of the Fukushima Daiichi Nuclear Power Station (April 17, 2013). Greg Webb/IAEA, CC BY

At the institutional level, the International Atomic Energy Agency (IAEA) responded to Chernobyl with the concept of “safety culture”. The idea was to resolve, once and for all, the issue of nuclear power plant safety. Here, the Fukushima accident had little impact: Infrastructure was damaged, the lessons learned were incorporated into safety standards, and resolutions were adopted to bring closure. In the end, the disaster was unremarkable – no more than a detour from standard procedures that had been established following Chernobyl. For the IAEA, the case is closed. The same applies to the nuclear sector as a whole, where business has resumed more or less as usual.

To some extent, Japan has fallen in line with these ideas. The country is improving compliance with international regulations and has increased its contribution to the IAEA’s work on earthquake response. But this Western idea of linear time is at odds with the country’s own understanding of the disaster. For many Japanese, events are still unfolding.

While Chernobyl accelerated the collapse of the Soviet Union, Fukushima Daiichi has become a showcase for the Japanese government. The idea of ongoing reinvention extends to the entire region, through a policy of repopulation. Although highly controversial, this approach stands in stark contrast to Chernobyl, which remains isolated and abandoned.

Other differences are seen in the reasons given for the causes of the accident: The IAEA concluded that the event was due to a lack of safety culture – in other words, organisational failings led to a series of unavoidable effects that could have been predicted – while Japanese scientists either drew an analogy with events that occurred during the Second World War, or attributed the accident to the characteristics of the Japanese people.

Before one dismisses such conclusions as irrational, it’s essential to think again about the meaning of the Fukushima disaster.


IRON-MEN: augmented reality for operators in the Industry of the Future

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.


The Industry of the Future cannot happen without humans at the heart of production systems. To help operators adapt to the fast development of industrial processes and client demands, elm.leblanc, IMT and Adecam Industries have joined forces in the framework of the IRON-MEN project. The aim is to develop an augmented reality solution for human operators in industry.

 

Many production sites use manual processes. Humans are capable of a level of intelligence and flexibility that is still unattainable by industrial robots, an ability that remains essential for the French industrial fabric to satisfy increasingly specific, demanding and unpredictable customer and user demands.

Despite alarmist warnings about replacement by technology, humans must remain central to industrial processes for the time being. To enhance the ability of human operators, IMT, elm.leblanc and Adecam Industries have joined forces in the framework of the IRON-MEN project. The consortium will develop an augmented reality solution for production operators over a period of 3 years.

The augmented reality technology will be designed to help companies develop flexibility, efficiency and quality in production, as well as strengthen communication among teams and collaborative work. The solution developed by the IRON-MEN project will support users by guiding and assisting them in their daily tasks to allow them to increase their versatility and ability to adapt.

The success of such an intrusive piece of technology as an augmented reality headset depends on the user’s physical and psychological ability to accept it. This is a challenge that lies at the very heart of the IRON-MEN project, and will guide the development of the technology.

The aim of the solution is to propose an industrial and job-specific response that meets specific needs to efficiently assist users as they carry out manual tasks. It is based on an original approach that combines digital transformation tools and respect for the individual in production plants. It must be quickly adaptable to problems in different sectors that show similar requirements.

IMT will contribute its research capacity to support elm.leblanc in introducing this augmented reality technology within its industrial organization. Immersion, which specializes in augmented reality experiences, will develop the interactive software interface to be used by the operators. The solution’s level of adaptability in an industrial environment will be tested at the elm.leblanc production sites at Drancy and Saint-Thégonnec as well as through the partnership with Adecam Industrie. IRON-MEN is supported by the French General Directorate for Enterprises in the framework of the “Grands défis du numérique” projects.

When organizations respond to cyberattacks

Cyberattacks are a growing reality that organizations have to face up to. In the framework of the German-French Academy for the Industry of the Future, researchers at IMT and Technische Universität München (TUM) show that there are solutions to this virtual threat. In particular, the ASSET project is studying responses to attacks that target communication between smart objects and affect the integrity of computer systems. Frédéric Cuppens, a researcher in cybersecurity on this project at IMT Atlantique and coordinator of the Cybersecurity of critical infrastructures chair, explains the state-of-the-art defenses to respond to these attacks.

 

Cybersecurity is an increasingly pressing subject for a number of organizations. Are all organizations concerned?

Frédéric Cuppens: The number of smart objects is growing exponentially, including in different organizations. Hospitals, industrial systems, services and transport networks are examples of places where the Internet of Things plays a major role and which are becoming increasingly vulnerable in terms of cybersecurity. We have already seen attacks on smart cars, pacemakers, smart meters, etc. All organizations are concerned. To take the case of industry alone, since it is one of our fields of interest at IMT Atlantique, these new vulnerabilities affect production chains and water treatment just as much as agricultural processes and power generation.

What attacks are most often carried out against this type of target?

FC: We have classified the attacks carried out against organizations in order to study the threats. There are lots of attacks on the integrity of computer systems, affecting their ability to function correctly. This is what happens when, for example, an attacker takes control of a temperature sensor to make it show an incorrect value, leading to an emergency shutdown. Then there are also lots of attacks against the availability of systems, which consist in preventing access to services or data exchange. This is the case when an attacker interferes with communication between smart objects.

Are there responses to these two types of attack?

FC: Yes, we are working on measures to put in place against these types of attack. Before going into detail, we need to understand that cybersecurity has three aspects: protection, which consists, for example, in filtering communication or controlling access to prevent attacks; defense, which detects when an attack is underway and provides a response to stop it; and lastly resilience, which allows systems to continue operating even during an attack. The research we are carrying out against attacks targeting availability or integrity includes all three components, with a special focus on resilience.

Confronted with attacks against the availability of systems, how do you guarantee this resilience?

FC: To interfere with communication, all you need is a jamming device. They are prohibited in France, but it is not hard to get hold of one on the internet. A jammer interferes with communication on certain frequencies only, depending on the type of jamming device used. Some are associated with Bluetooth frequencies, others with Wi-Fi networks or GPS frequencies. Our approach to fighting jammers is based on direct-sequence spread spectrum. The signal is “buried in noise” and is therefore difficult to detect with a spectrum analyzer.

Does that allow you to effectively block an attack by interference?

FC: This is a real process of resilience. We assume that, to interfere with the signal, the attacker has to find the frequency the two objects are communicating on, and we want to ensure this does not jeopardize communication. By the time the attacker has found the frequency and launched the attack, the spreading code has already been updated. This approach is what we call “moving target defense”, in which the target of the attack — the spreading sequence — is regularly updated. It is very difficult for an attacker to complete their attack before the target is updated.
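The principle can be illustrated with a toy baseband simulation (Python with NumPy; this is a simplified sketch of direct-sequence spreading, not the project’s implementation, and the chip length, jammer model and random seed are arbitrary): each bit is spread over many pseudo-random chips, despreading with the shared code averages a jammer’s noise out, and the code is refreshed before an attacker has time to identify it.

```python
# Toy illustration of direct-sequence spread spectrum with a regularly
# refreshed spreading code (the "moving target").
import numpy as np

CHIPS_PER_BIT = 64

def new_code(rng):
    """Draw a fresh pseudo-random spreading code of +/-1 chips (the shared secret)."""
    return rng.choice([-1.0, 1.0], size=CHIPS_PER_BIT)

def spread(bits, code):
    symbols = 2 * np.asarray(bits, dtype=float) - 1        # map {0,1} -> {-1,+1}
    return np.concatenate([b * code for b in symbols])     # each bit becomes 64 chips

def despread(signal, code):
    chips = signal.reshape(-1, CHIPS_PER_BIT)
    correlations = chips @ code                            # correlate with the known code
    return (correlations > 0).astype(int)

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=8)

code = new_code(rng)                                       # current spreading code
tx = spread(bits, code)
jammer = 2.0 * rng.standard_normal(tx.size)                # attacker jams without knowing the code
rx = tx + jammer

print("sent:    ", bits.tolist())
print("received:", despread(rx, code).tolist())            # despreading averages the jammer out

code = new_code(rng)   # refresh the code before the attacker can identify it
```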

Do you use the same approach to fight against attacks on integrity?

FC: Sort of, but the problem is not the same. In this case, we have an attacker who is able to inject data in a smart way so that the intrusion is not detected. Take, for example, a tank being filled. The attacker corrupts the sensor so that it tells the system that the tank is already full. He will thus be able to stop the pumps in the treatment station or distillery. We assume that the attacker knows the system very well, which is entirely possible. The attacks on Iranian centrifuges for uranium enrichment showed that an attacker can collect highly sensitive data on the functioning of an infrastructure.

How do you fight against an attacker who is able to go completely unnoticed?

FC: State-of-the-art security systems propose introducing physical redundancy. Instead of having one sensor for temperature or water level, we have several sensors of different types. This means the attacker has to attack several targets at once. Our research proposes to go even further by introducing virtual redundancy. An auxiliary system simulates the expected functioning of the machines or structures. If the data sent by the physical sensors differs from the data from the virtual model, then we know something abnormal is happening. This is the principle of a digital twin that provides a reference value in real time. It is similar to the idea of moving target defense, but with an independent virtual target whose behavior varies dynamically.
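In very simplified form, the digital-twin check he describes amounts to comparing each sensor reading with the value a virtual model expects, and raising an alert when the two diverge. The following sketch (hypothetical values and model, not the project’s code) illustrates it for the tank-filling example.

```python
# Sketch of the "virtual redundancy" principle: a toy twin of a tank predicts
# the expected level; if the physical sensor diverges, raise an alert.
TOLERANCE = 5.0          # litres of acceptable mismatch (hypothetical)
INFLOW = 2.0             # litres pumped into the tank per time step

def twin_level(step, initial_level=0.0):
    """Expected tank level according to the virtual model."""
    return initial_level + INFLOW * step

def check(step, sensor_level):
    expected = twin_level(step)
    if abs(sensor_level - expected) > TOLERANCE:
        return f"step {step}: ALERT, sensor says {sensor_level} L but twin expects {expected} L"
    return f"step {step}: ok ({sensor_level} L)"

# Honest readings, then a corrupted sensor claiming the tank is already full.
readings = [2.0, 4.1, 5.9, 100.0, 100.0]
for step, level in enumerate(readings, start=1):
    print(check(step, level))
```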

This work is being carried out in partnership with Technische Universität München (TUM) in the framework of the ASSET project by the German-French Academy for the Industry of the Future. What does this partnership contribute from a scientific point of view?

FC: IMT Atlantique and TUM each bring complementary skills. TUM is more focused on the physical layers and IMT Atlantique focuses more on the communication and service layers. Mines Saint-Étienne is also contributing and collaborating with TUM on attacks on physical components. They are working together on laser attacks on the integrity of components. Each party offers skills that the other does not necessarily have. This complementarity allows solutions to be designed to fight against cyberattacks at different levels and from different points of view. It is crucial in a context where computer systems are becoming more complicated: countermeasures must follow this level of complexity. Dialogue between researchers with different skills stimulates the quality of the protection we are developing.

 


Renewal of the Cybersecurity and critical infrastructures chair (Cyber CNI)

Launched in January 2016, the Chair for the cybersecurity of critical infrastructures (Cyber CNI) is being renewed for another 3 years after its first 3 years of operation, thanks to the commitment of its academic and industrial partners. The IMT chair, led by IMT Atlantique, benefits from partnerships with Télécom ParisTech and Télécom SudParis and support from the Brittany region – a region at the forefront of cutting-edge cybersecurity technology – in the framework of the Cyber Center of Excellence. In the context of a sponsorship partnership led by Fondation Mines-Télécom, five industrial partners have committed to this new period: AIRBUS, AMOSSYS, BNP Paribas, EDF and Nokia Bell Labs. The official signing to renew the chair took place at FIC (International Cybersecurity Forum) in Lille on 22 January 2019.

Read the news on I’MTech: Cyber CNI chair renewed for 3 years


The installation of a data center in the heart of a city, like this one belonging to Interxion in La Plaine Commune in Île-de-France, rarely goes unnoticed.

Data centers: when digital technology transforms a city

As the tangible part of the digital world, data centers are flourishing on the outskirts of cities. They are promoted by elected representatives, sometimes contested by locals, and are not yet well-regulated, raising new social, legal and technical issues. Here is an overview of the challenges this infrastructure poses for cities, with Clément Marquet, doctoral student in sociology at Télécom ParisTech, and Jean-Marc Menaud, researcher specialized in Green IT at IMT Atlantique.

 

On a global scale, information technology accounts for almost 2% of greenhouse gas emissions, as much as civil aviation. In addition, “the digital industry consumes 10% of the world’s energy production,” explains Clément Marquet, sociology researcher at Télécom ParisTech, who is studying this hidden side of the digital world. The energy consumption required to keep this infrastructure running smoothly, under the guise of ensuring reliability and maintaining a certain level of service quality, is of particular concern.

With the demand for real-time data, the storage and processing of these data must be carried out where they are produced. This explains why data centers have been popping up throughout the country over the past few years. But not just anywhere. There are close to 150 throughout France. “Over a third of this infrastructure is concentrated in Ile-de-France, in Plaine Commune – this is a record in Europe. It ends up transforming urban areas, and not without sparking reactions from locals,” the researcher says.

Plaine Commune, a European Data Valley

In his work, Clément Marquet questions why these data centers are integrated into urban areas. He highlights their “furtive” architecture, as they are usually built “in new or refitted warehouses, without any clues or signage about the activity inside”. He also looks at the low amount of interest from politicians and local representatives, partly due to their lack of knowledge on the subject. He takes Rue Rateau in La Courneuve as an example. On one side of this residential street, just a few meters from the first houses, a brand-new data center was inaugurated at the end of 2012 by Interxion. The opening of this installation did not run smoothly, as the sociologist explains:

“These 1,500 to 10,000 m² spaces have many consequences for the surrounding urban area. They are a burden on energy distribution networks, but that is not all. The air conditioning units required to keep them cool create noise pollution. Locals also highlight the risk of explosion due to the 568,000 liters of fuel stored on the roof to power the backup generator, and the fact that energy is not recycled in the city heating network. Across the Plaine Commune agglomeration, there are also concerns regarding the low number of jobs created locally compared with the amount of land occupied. It is no longer just virtual.”

Because these data centers have such high energy needs, the debate in Plaine Commune has centered on the risk of virtual saturation of the electricity supply. Data centers reserve more power than they actually consume, in order to cope with any shortages, and this reserved electricity cannot be put to other uses. So, while La Courneuve is home to almost 27,000 inhabitants, the data center requires the equivalent of a city of 50,000 people. The sociologist argues that the inhabitants were not consulted when this building was installed; they ended up taking legal action against the installation. “This raises the question of the viability of these infrastructures in the urban environment. They are invisible and yet invasive”.

Environmentally friendly integration possible

One of the avenues being explored to make these data centers more virtuous and more acceptable is to give them an environmentally friendly role by hooking them up to city heating networks. Data centers could become energy producers, rather than just consumers. In theory, this would make it possible to heat pools or houses. However, it is not an easy operation. In 2015 in La Courneuve, Interxion announced that it would connect a forthcoming 20,000 m² center; it did not follow through, breaking its promise of a change in practice. Connecting to the city heating network requires major, complicated coordination between all parties. The project ran up against the hosts’ reluctance to communicate on their consumption, and hosts do not always have the tools required to recycle heat.

Another possibility is to optimize the energy performance of data centers. Many Green IT researchers are working on environmentally responsible digital technology. Jean-Marc Menaud, coordinator of the collaborative project EPOC (Energy Proportional and Opportunistic Computing systems) and director of the CPER SeDuCe project (Sustainable Data Center), is one of them. He is working on anticipating consumption, that is, predicting the energy needs of an application, combined with anticipating electricity production. “Energy consumption by digital technologies is based on three foundations: one third is due to the non-stop operation of data centers, one third is due to the Internet network itself,” he explains, while the last third comes down to user terminals and smart objects.

Read on I’MTech: Data centers, taking up the energy challenge

Since summer 2018, the IMT Atlantique campus has hosted a new type of data center, one devoted to research, and available for use by the scientific community. “The main objective of SeDuce is to reduce the energy consumed by the air conditioning system. Because in energy, nothing goes to waste, everything can be transformed. If we want the servers to run well, we have to evacuate the heat, which is at around 30-35°C. Air conditioning is therefore vital” he continues. “And in the majority of cases, air conditioning is colossal: for 100 watts required to run a server, another 100 are used to cool it down”.

How does SeDuCe work? The data center, with a capacity of 1,000 cores or 50 servers, is full of sensors and probes closely monitoring temperatures. The servers are isolated from the room in airtight racks. This airtight confinement cuts cooling costs by a factor of ten: for 100 watts used by the servers, only 10 watts are required to cool them. “Soon, SeDuCe will be powered during the daytime by solar panels. Another of our goals is to get users to adapt the way they work according to the amount of energy available. A solution that can absolutely be applied to even the most impressive data centers.” Proof that the energy transition can happen in the clouds too.
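A quick back-of-the-envelope check of those figures, expressed as the ratio of total power to server power (counting only cooling overhead, so not a full PUE, which would also include other losses), looks like this:

```python
# Cooling-overhead ratio (total power / IT power) for the two cases quoted above.
def cooling_ratio(it_watts, cooling_watts):
    return (it_watts + cooling_watts) / it_watts

print("conventional room:", cooling_ratio(100, 100))   # 100 W + 100 W -> 2.0
print("SeDuCe enclosure: ", cooling_ratio(100, 10))    # 100 W + 10 W  -> 1.1
```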

 

Article written by Anne-Sophie Boutaud, for I’MTech.