
When healthcare professionals form communities through digital technology

Digital technology is shaking up the healthcare world. Among its other uses, it can help break isolation and facilitate online interactions in both the private and professional spheres. Can these virtual interactions help form a collective structure and community for individuals whose occupations involve isolation and distance from their peers? Nicolas Jullien, a researcher in economics at IMT Atlantique, looks at two professional groups, non-hospital doctors and home care workers, to outline the trends of these new digital connections between practices.

 

On the Twitter social network, doctors interact using a bot with the hashtag #DocTocToc. These interactions include all kinds of professional questions, requests for details about a diagnosis, advice or medical trivia. In existence since 2012, this channel for mutual assistance appears to be effective: interactions are rapid, responses are reactive, and messages pour in minutes after a question is asked. None of this comes as a surprise for researcher Nicolas Jullien: “At a time when we hear that digital technology is causing structures to fall apart, to what extent does it also contribute to the emergence and organization of new communities?”

Doctors–and healthcare professionals in general–are increasingly connected and have a greater voice. On Twitter, some star doctors have thousands of followers: 28,900 for Baptiste Beaulieu, a family doctor, novelist and formerly a radio commentator on France Inter; 25,200 for medical intern and cartoonist @ViedeCarabin; and nearly 9,900 for Jean-Jacques Fraslin, family doctor and author of an op-ed piece denouncing alternative medicine. Until now, few studies had been conducted on these new online communities of practice. Under what conditions do they emerge? What are the constraints involved in their development? What challenges do they face?

New forms of collective action

For several years, Nicolas Jullien and his colleagues have been studying the structure and development of online communities. These forms of collective action raise the question captured by the classic prisoner’s dilemma: two suspects are arrested by the police and interrogated separately. If one denounces the other while the other stays silent, the denouncer goes free and the accomplice receives the maximum sentence. If both denounce each other, both receive a moderate sentence; if both stay silent, both receive the minimum sentence. Although cooperating would be to the individuals’ advantage, and although they are aware of this, cooperation does not always happen: in the absence of dialogue, individuals seek to maximize their individual interests. Can digital platforms facilitate the coordination of these individual interests? Can they be used to build collective projects for sharing, knowledge and mutual assistance in the professional sphere, as Wikipedia and open source software have done elsewhere? These professional social networks represent a new field of exploration for researchers.
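
To make the dilemma concrete, here is a minimal Python sketch with illustrative payoff values (the sentences are hypothetical, not taken from the research described here): whatever the other suspect does, denouncing is the best individual reply, yet mutual silence would leave both better off.

```python
from itertools import product

# Years in prison for (row player, column player); illustrative values only.
# "C" = stay silent (cooperate with the accomplice), "D" = denounce (defect).
PAYOFF = {
    ("C", "C"): (1, 1),   # both stay silent: minimum sentence
    ("C", "D"): (10, 0),  # the silent one takes the maximum sentence
    ("D", "C"): (0, 10),  # the denouncer goes free
    ("D", "D"): (5, 5),   # both denounce: moderate sentence
}

def best_response(opponent_action: str) -> str:
    """Action that minimizes my sentence given the opponent's action."""
    return min(("C", "D"), key=lambda a: PAYOFF[(a, opponent_action)][0])

# Whatever the other player does, denouncing is the best individual reply...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet the resulting outcome (5, 5) is worse for both than (1, 1).
for profile in product("CD", repeat=2):
    print(profile, PAYOFF[profile])
```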

Read on I’MTech: Digital commons: individual interests to serve a community

“We decided to focus on two professional groups, doctors and home care workers, which are both involved in health and service relationships, but are diametrically opposed in terms of qualifications. The COAGUL project, funded by the Brittany Region, analyzes the relationships established in each of these professional groups through online interactions. We are conducting these studies with Christèle Dondeyne’s team at Université de Bretagne Occidentale,” Nicolas Jullien explains. These interactions can relate to technical problems and uncertainties (diagnoses, ways of performing a procedure), ethical and legal issues (especially terms and conditions of employment contracts and relations with the health insurance system), or employment and working conditions (the amount of time spent providing care in a home, discussions on home care worker tasks that go beyond basic health procedures). Isolation therefore favors the emergence of communities of practice. “Digital technology offers access to tools, platforms and means of coordinating work and the activities of professionals. These communities develop autonomously and spontaneously,” the researchers add.

So how can a profession be carried out online? The researchers are currently conducting work that is exploratory and necessarily qualitative. For the first phase of their study, they identified 20 stakeholders active on the internet and collected their interactions in order to determine the ties of cooperation and solidarity being created on social networks, forums and dedicated websites. Where do people go? Why do they stay there? “We have observed that usage practices vary according to profession. While home care workers interact more on Facebook, with several groups of thousands of people and a dozen messages per day, family doctors prefer to interact on Twitter and form more personal networks,” Nicolas Jullien explains. “This qualitative method allows us to understand what lies behind the views people share. Because in these shifting groups, tensions arise related to position and each individual’s experiences,” the researcher explains. For the second phase of the study, researchers will conduct a series of interviews with local professional groups. The long-term objective is to compare the motivations for action and means of interaction across the two professions.

On a wider scale, behind these digital issues, the researchers seek to analyze the capacity of these groups to collectively produce knowledge. “In the past, we worked on the Wikipedia model, the free online encyclopedia, which brings together nearly 400,000 contributors per month. It is the most extensive collective action ever accomplished, and an unexpected and lasting success in producing knowledge online,” Nicolas Jullien explains.

But although contributors participate on a volunteer basis, the rules for contributions are becoming increasingly strict, for example through moderation or based on topics that have already been created. “What is regulated in these communities is the contribution, not the knowledge,” the researcher adds. “Verifying contributions is what takes time, and, for communities of practice, so is responding to them. An increase in participants and messages brings with it a greater need to limit noise, i.e. irrelevant comments.” With intellectual challenges, access to peers, and the ability to have one’s contributions seen by others, bringing daily professional practice online has the potential to shake up communities of practice and support the development of new forms of professional solidarity.

Article written (in French) by Anne-Sophie Boutaud, for I’MTech.


Good in Tech: a chair to put responsibility and ethics into innovation

On September 12, the Good in Tech chair was launched with the aim of making digital innovations more responsible and ethical. The chair is supported by Institut Mines-Télécom Business School, the School of Management and Innovation at Sciences Po, and the Fondation du Risque, in partnership with Télécom Paris and Télécom SudParis. The Good in Tech chair therefore combines human and social sciences, computer science and engineering. It aims to shed light on corporate governance decisions regarding digital innovation and to help businesses embrace new values for innovation. Christine Balagué, a researcher at Institut Mines-Télécom Business School and co-holder of the Good in Tech chair, tells us about the importance of this initiative, as well as the research challenges and the problems that companies face.

 

Why did you want to create a research chair on the ethics and responsibility of digital technologies?

Christine Balagué: The chair brings together complementary research skills. We are also establishing a multidisciplinary approach, by including both hard sciences and human and social sciences. Unlike existing research initiatives, which include a lot of hard sciences and little in the way of human sciences, the Good in Tech chair has the advantage of having a strong human science perspective. This means that the chair can address issues surrounding corporate responsibility, user behavior towards responsible technologies, modes of governance or possible futures.

Responsible digital innovation is one of the chair’s areas of study. What are you currently working on in this area?

CB: Today, most companies have a corporate social responsibility or CSR policy. In most of these cases, CSR does not include many indicators on digital innovation, which is ironic since artificial intelligence, connected objects or big data are being developed in all sectors. Our work on this issue will therefore focus on developing CSR indicators for responsible digital innovation and proposing a measurement method for them.

Read on I’MTech: Innovation: to be or not to be responsible?

You’ve talked about measuring responsible innovation afterward, but can research also help to reflect on the responsibility of technologies beforehand, from the design stage?

CB: Of course, and this is the aim of a second area of study for the chair regarding responsible technologies “by-design”. We know that digital technologies, especially modern artificial intelligence, raise ethical issues: the algorithms are often non-transparent, difficult to explain, potentially discriminatory, and biased. For example, we know that in the USA, social media treats users differently depending on their political views or the color of their skin. Another example would be facial recognition technologies and recruiting algorithms, which are not transparent. Companies that develop artificial intelligence or data handling tools don’t always consider these issues. They end up with products that they market or use that have a major impact on consumers. We are therefore doing research into how we can make technology more transparent, explicable and less discriminatory from the beginning of the design process.
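
As a purely illustrative sketch of what auditing an algorithm for discrimination can look like (this is not a method attributed to the chair), the following Python snippet computes a demographic parity gap, i.e. the difference in positive decision rates between two groups, on toy data.

```python
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups.

    decisions: array of 0/1 model outputs (e.g. "shortlist this candidate").
    group: array of 0/1 protected-attribute labels for the same individuals.
    """
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy data: a recruiting model that shortlists group 1 far more often.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
decisions = rng.binomial(1, np.where(group == 1, 0.6, 0.3))
print(f"Demographic parity gap: {demographic_parity_gap(decisions, group):.2f}")
```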

It’s also important for a research chair to involve companies in the discussion. Who are your industrial partners and what do they bring to your work?

CB: At the moment we’re working with five partners: Afnor, CGI, Danone, FaberNovel and Sycomore. These companies are all interested in digital responsibility issues. They help our work by opening their data, providing us with use cases, etc. They also allow us to understand the economic problems that companies face.

Do you plan on making recommendations to companies or public authorities?

CB: The main aim of the chair is to get articles published in the best scientific journals and to encourage research on the chair’s four areas of study. We are also considering publishing policy papers, which are scientific articles that aim to inform political and industrial choices. As well as these articles, one of the chair’s areas of study is dedicated to planning for the future. We are going to organize conferences with students from Sciences Po and the IMT schools involved, aimed at getting students thinking about future prospects for responsible digital technologies. Similarly, conferences will be held for the general public in order to start the debate on possible futures. For example, we will propose scenarios such as: “In the future, these will be everyday technologies. How will they impact healthcare, or how consumers buy things online?” The idea is to imagine the future in partnership with users of technology whilst involving people from all walks of life.

By helping people to make an informed decision, do you want to help define a framework for digital innovation governance?

CB: We are studying every possible mode of governance, which is the fourth and final area of study of the chair. This is so we can understand which level is most appropriate, whether at company, national or European level. In particular, we would like to study the importance of governance that is directly integrated into the company. For example, we want to see whether developing responsible technologies “by-design” would be more effective than international regulation. The aim is to integrate mechanisms of governance directly into responsible companies’ behavior, knowing that consumers are more likely to buy from a company whose digital innovation is responsible.

Are certain businesses reluctant to comply with this emerging trend for “tech-for-good”? And does this mean they are reluctant to comply with the notions of responsibility and ethics?

CB: The chair is working to defend a European vision of digital innovation. China is less interested in these issues, while major American institutions are working on them. The GDPR has shown that Europe can make regulations change; our rules on personal data have made an impact on people working in Silicon Valley. Businesses that are reluctant to comply with these regulations must understand that the more responsible a technology is, the more readily it is accepted by consumers. For example, the market for health-related connected objects is developing more slowly than expected in Europe, due to consumers’ reluctance over how the collected data is used. However, to ensure that responsibility is accepted by companies, we have to make sure it does not slow down innovation. Coupling companies’ digital innovation with consumers’ needs will undoubtedly be one of the biggest challenges of the chair.


The new competition issues raised by access to data in the digital economy

Patrick Waelbroeck, Télécom Paris – Institut Mines-Télécom and Antoine Dubus, Télécom Paris – Institut Mines-Télécom


In the digital economy, data is king. While recent problems with data theft and loss have made headlines following the introduction of the General Data Protection Regulation (GDPR) in May 2018, the link between access to data and market competition is still in the observation phase. We will consider some recent cases.

In 2014, the European Commission investigated the consequences of Facebook’s purchase of WhatsApp. By merging, the two companies could combine their databases and therefore increase the amount of information they have about their users. In 2017, when it was found that Facebook had deliberately lied about the possibility of associating WhatsApp users’ phone numbers with their Facebook profiles, the Commission imposed a €110 million fine. This mishandling of personal data led one of WhatsApp’s co-founders to resign. However, the American giant was charged only for lying to investigators, and not for combining the two companies’ databases.

Business assets for digital giants

In the US, when Google and DoubleClick merged in 2007, the Federal Trade Commission, an independent government agency, examined the possibility that the companies could combine their databases, without focusing on the impact of the data on competition.

These initial analyses therefore suggest that access to data can be considered to have a minimal impact on competition. The European Commission nevertheless wished to carry out a more detailed analysis, publishing a report on competition in the digital era. The report shows that data represents a barrier to entry for new companies. The authors therefore recommend that major digital players open up their databases, the argument being that by giving all companies access to data, its value is not concentrated in the hands of a few players who use it to dominate the market. This proposal seems rather unrealistic since data represents a business asset for digital companies.

In any event, this second analysis considers data as an external constraint imposed on digital companies. But it is becoming increasingly clear that access to data can stem from strategic decisions for at least two reasons.

First, access to data is often monopolized by platforms that act as an intermediary between different groups of users in what is referred to as a multi-sided market. The player that controls the platform also controls who has access to its data. It can therefore distort market competition by excluding certain companies.

Data provides market power

Certain Facebook practices were disclosed during a hearing before the British parliament and reported by the New York Times in late 2018. Facebook allegedly favored data exchange with partner apps such as Airbnb, Lyft and Netflix and allegedly cut off access to its data for apps seen as competitors such as Twitter’s Vine application.

The more data, the more market power.

 

Second, a distinctive feature of databases is that they can create synergies when they are merged. Two companies that pool their data, for example in a WhatsApp/Facebook type of merger, can therefore benefit from an exponential increase in information, which, as it becomes increasingly precise, increases the economic value of the consolidated data. Data-driven mergers also change the merged entity’s strategies for data collection. A firm with access to the highest quality of information will increase its market power, which will in turn give it access to even more information. The link between market power and data collection is therefore strengthened, leading to the emergence of dominant players.
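
A toy illustration of this synergy, with entirely hypothetical data: joining two partial views of the same users produces profiles that are richer than anything either database held on its own.

```python
import pandas as pd

# Hypothetical, partial views of the same users held by two services.
messaging = pd.DataFrame({
    "user": ["alice", "bob", "carol"],
    "phone": ["+33600000001", "+33600000002", "+33600000003"],
    "contacts": [120, 45, 300],
})
social = pd.DataFrame({
    "user": ["alice", "bob", "dave"],
    "interests": [["travel", "food"], ["sport"], ["music"]],
    "ad_clicks": [14, 3, 27],
})

# After a merger, the combined profile links phone numbers, social graphs
# and behavioural data for the overlapping users.
merged = messaging.merge(social, on="user", how="inner")
print(merged)
# Each merged record now has attributes neither database held on its own,
# which is precisely the informational synergy discussed above.
```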

In conclusion, access to data has become a determining factor for competition in digital markets. The accumulation of data leads to the emergence of dominant players, who can then influence competition between third parties by granting or refusing access to their data. It is therefore important to verify that data collection complies with applicable regulations (GDPR). This is no longer only a matter of personal data protection; it has become a competition law issue.


Patrick Waelbroeck, Professor of Economics, Télécom Paris – Institut Mines-Télécom and Antoine Dubus, PhD student, Télécom Paris – Institut Mines-Télécom

The original version of this article (in French) was published in The Conversation. Read the original article.


Imaging to help people with brain injuries

Brain injuries involve complex cognitive and neurobiological processes. This is the case for people who have suffered a stroke, or who are in a minimally conscious state close to a vegetative state. At IMT Mines Alès, Gérard Dray is working on new technology involving neuroimaging and statistical learning. This research improves how we observe patients’ brain activity. Ultimately, his studies could greatly help to rehabilitate trauma patients.

 

As neuroimaging technology becomes more effective, the brain is slowly losing its mystery; and as our ability to observe what is happening inside this organ becomes more accurate, numerous possibilities are opening up, notably in medicine. For several years at IMT Mines Alès, Gérard Dray has been working on new tools to detect brain activity. More precisely, he aims to improve how we record and understand the brain signals captured by techniques such as electroencephalography (EEG) or near-infrared spectroscopy (NIRS). In partnership with the University of Montpellier’s EuroMov research center and the Montpellier and Nîmes University Hospitals, Dray is applying his research to support patients who have suffered severe brain damage.

Read on I’MTech: Technology that decrypts the way our brain works

This is notably the case for stroke victims: a part of their brain no longer receives enough blood from the circulatory system and becomes necrotic. The neurons in this part of the brain die and the patient can lose certain motor functions in their legs or arms. However, this disability is not necessarily permanent. Appropriate rehabilitation can enable stroke victims to regain part of their motor ability. “This is possible thanks to the plasticity of the brain, which allows functions handled by the necrotic zone to be relocated to a healthy part of the brain,” explains Gérard Dray.

Towards Post-Stroke Rehabilitation

In practice, this transfer happens through rehabilitation sessions. Over several months, a stroke victim who has lost their motor skills is asked to imagine moving the part of their body that they are unable to move. In the first few sessions, a therapist guides the movement of the patient. The patient’s brain begins to associate the request for movement with the feeling of the limb moving, and gradually recreates these neural connections in a healthy area of the brain. “These therapies are recent, less than 20 years old,” points out the researcher at IMT Mines Alès. Although they have already proven that they work, they still have several limitations that Dray and his team are trying to overcome.

One of the problems with these therapies is the great uncertainty as to the patient’s involvement. When the therapist moves the limb of the victim and asks them to think about moving it, there is no guarantee that they are doing the exercise correctly. If the patient’s thoughts are not synchronized with their movement, then their rehabilitation will be much slower, and may even become ineffective in some cases. By using neuroimaging, researchers want to ensure that the patient is using their brain correctly and is not just being passive during a kinesiotherapy session. But the researchers want to go one step further. By knowing when the patient is thinking about lifting their arm or leg, it is possible to make a part of rehabilitation autonomous.

“With our partners, we have developed a device that detects brain signals and is connected to an automatic glove,” describes Gérard Dray. “When we detect that the patient is thinking about lifting their arm, the glove carries out the associated movement.” The researcher warns that this cannot and should not replace sessions with a therapist, which are essential for the patient to understand the rehabilitation system. However, the device allows the victim to complete the exercises from the sessions by themselves, which speeds up the transfer of brain functions to a healthy zone. As with a fracture, stroke patients often have to go through physiotherapy sessions both at the hospital and at home by themselves.


A glove which is connected to a brain activity detection system can help post-stroke rehabilitation.

 

The main task for this device is detecting the brain signals associated with the movement of the limb. When observing brain activity, the imaging tools record a constellation of signals associated with all the background activities managed by the brain, and the neuronal signal that triggers the arm movement gets lost in this crowd of background signals. In order to isolate it, the researchers use statistical learning tools. The patients are first asked to carry out guided and supervised motor actions while their neural activity is recorded. Then they move freely during several sessions, while being monitored by EEG or NIRS. Once sufficient data has been collected, the algorithms can categorize the signals by action and can therefore deduce, through real-time neuroimaging, whether the patient is trying to move their arm or not.
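
As an illustration of this kind of supervised pipeline (a generic sketch, not the actual algorithms used by the IMT Mines Alès team), the snippet below extracts band-power features from labelled EEG epochs and cross-validates a linear classifier that distinguishes movement intention from rest; the sampling rate and data here are placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 250  # sampling rate in Hz (assumed)

def band_power_features(epochs: np.ndarray, band=(8, 30)) -> np.ndarray:
    """Average log spectral power per channel in a frequency band.

    epochs: array of shape (n_trials, n_channels, n_samples).
    """
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[..., mask].mean(axis=-1))  # (n_trials, n_channels)

# Placeholder data standing in for labelled recordings from guided sessions:
# label 1 = "trying to move the arm", label 0 = rest.
rng = np.random.default_rng(42)
epochs = rng.standard_normal((200, 16, 2 * FS))
labels = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, band_power_features(epochs), labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```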

In partnership with Montpellier University Hospital, a first clinical trial of the device was carried out on 20 patients to evaluate its performance. “Although the results are positive, we are still not completely satisfied with them,” admits Dray. “The algorithms only detected the patients’ intention to move their arm in 80% of cases. This means that two times out of ten, the patient thinks about the movement without us being able to detect it through neuroimaging.” To improve these detection rates, the researchers are working on numerous algorithms for categorizing brain activity. “Notably, we are trying to couple imaging techniques with techniques that can detect fainter signals,” he continues.

Detecting Consciousness After Head Trauma

The improvement of these brain activity detection tools is not only useful for post-stroke rehabilitation. The IMT Mines Alès team uses the technology it has developed on people who have suffered head trauma and whose state of consciousness has been altered. After an accident, a victim who is not responsive, but whose respiratory and circulatory functions are in good condition, can be in several different states: a total and normal state of consciousness, a coma, a vegetative state, a state of minimal consciousness, or locked-in syndrome. “These different states are characterized by two factors, consciousness and being awake,” summarizes Dray. In a normal state, we are both awake and conscious. A person who is in a coma is neither awake nor conscious. A person in a vegetative state is awake but not conscious of their surroundings.

According to their different states, patients receive different types of care and have different prospects of recovery. The huge difficulty for doctors is identifying patients who are awake without being responsive, but whose consciousness has not entirely disappeared. “With these people there is hope that their state of consciousness will be able to return to normal,” explains the researcher. However, the patient’s state of consciousness is sometimes very weak and has to be detected using high-quality neuroimaging tools. For this, Gérard Dray and his team use EEG paired with sound stimuli. He explains the process: “We speak to the person and explain that we are going to play them a series of low-pitched tones, interspersed with occasional high-pitched tones, and we ask them to count the high-pitched ones. Their brain will react to each sound. However, when a high-pitched sound is played, the cognitive response will be stronger, as these are the signals the brain has to keep track of. More precisely, a wave called the P300 is generated when a stimulus captures our attention. In the case of the high-pitched sounds, the patient’s brain will generate this wave much more strongly.”

Temporal monitoring of brain activity after an auditory stimulus using an EEG device.

 

The patients who still have a state of consciousness will produce a normal EEG in response to the exercise, despite not being able to communicate or move. However, a victim who is in a vegetative state will not respond to the stimuli. The results of these first clinical trials carried out on patients who had experienced head trauma are currently being analyzed. The first bits of feedback are promising for the researchers, who have already managed to detect differences in P300 wave generation. “Our work is only just beginning,” states Gérard Dray. “In 2015, we started our research on the detection of consciousness, and it’s a very recent field.” With increasing progress in neuroimaging techniques and learning tools, this is an entire field of neurology that is about to undergo major advances.
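
The oddball protocol described above can be sketched in a few lines of Python. The following toy example (placeholder signal and timings, not clinical data) averages the EEG segments following frequent and rare tones and compares the mean amplitude in the window where the P300 wave is expected.

```python
import numpy as np

FS = 500  # sampling rate in Hz (assumed)

def average_erp(eeg: np.ndarray, stim_samples: np.ndarray,
                window=(0.0, 0.6)) -> np.ndarray:
    """Average the EEG segments following a list of stimulus onsets."""
    start, stop = (int(t * FS) for t in window)
    segments = [eeg[s + start: s + stop] for s in stim_samples]
    return np.mean(segments, axis=0)

def p300_amplitude(erp: np.ndarray, window=(0.25, 0.45)) -> float:
    """Mean amplitude in the window where the P300 wave is expected."""
    return float(erp[int(window[0] * FS): int(window[1] * FS)].mean())

# Placeholder single-channel recording and stimulus onsets (in samples).
rng = np.random.default_rng(1)
eeg = rng.standard_normal(60 * FS)              # one minute of signal
frequent_onsets = np.arange(1, 55, 2) * FS      # low-pitched, frequent tones
rare_onsets = np.arange(2, 56, 8) * FS          # high-pitched, rare tones

erp_rare = average_erp(eeg, rare_onsets)
erp_frequent = average_erp(eeg, frequent_onsets)
# A markedly larger response to rare tones suggests the stimuli are being
# processed consciously; here the data are random, so no effect is expected.
print(p300_amplitude(erp_rare) - p300_amplitude(erp_frequent))
```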

 


CiViQ: working towards implementing quantum communications on our networks

At the end of 2018, the CiViQ H2020 European project was launched for a period of three years. The project aims to integrate quantum communication technologies into traditional telecommunication networks. This scientific challenge calls upon Télécom Paris’ dual expertise in quantum cryptography and optical telecommunications, and will provide more security for communications. Romain Alléaume, a researcher in quantum information, is a member of CiViQ. He explained to us the challenges and context of the project.

 

What is the main objective of the CiViQ project?

Romain Alléaume: The main objective of the project is to make quantum communication technologies, and in particular coherent quantum communications, much better adapted for use on fiber-optic communication systems. To do this, we want to improve the integration, miniaturization and interoperability of these quantum communication technologies.

Why do you want to integrate quantum communications into telecommunication networks?

RA: Quantum communications are particularly resistant to interception because they are typically based on the exchange of light pulses containing very few photons. On such a minuscule scale, any attempt to eavesdrop on the communication, and therefore to measure it, comes up against the fundamental principles of quantum physics. These principles guarantee that the measurement will disturb the communication enough for the spy to be detected.

Based on this idea, it is possible to develop protocols called Quantum Key Distribution, or QKD. These protocols allow a secret encryption key to be shared with the help of quantum communication. Unlike in mathematical cryptography, a key exchange through QKD cannot be recorded and therefore cannot be deciphered later on. QKD thus offers what is called “everlasting security”: the security will last no matter what the computing power of the potential attacker.
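
To illustrate the general principle of QKD, here is a toy simulation of the well-known discrete BB84 protocol (note that CiViQ itself targets continuous-variable, coherent protocols rather than BB84): Alice and Bob keep only the rounds where their measurement bases match, then sacrifice part of the sifted key to estimate the error rate that an eavesdropper would inevitably raise.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Alice picks random bits and random measurement bases (0 = +, 1 = x).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each photon in a randomly chosen basis. When the bases
# differ, quantum mechanics makes his outcome random.
bob_bases = rng.integers(0, 2, n)
same_basis = alice_bases == bob_bases
bob_bits = np.where(same_basis, alice_bits, rng.integers(0, 2, n))

# Alice and Bob publicly compare bases (not bits) and keep matching rounds.
sifted_alice = alice_bits[same_basis]
sifted_bob = bob_bits[same_basis]

# They sacrifice a sample of the sifted key to estimate the error rate:
# an eavesdropper measuring in random bases would raise it to ~25%.
sample = rng.random(sifted_alice.size) < 0.1
qber = np.mean(sifted_alice[sample] != sifted_bob[sample])
print(f"Sifted key length: {sifted_alice.size}, estimated QBER: {qber:.3f}")
```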

What will this project mean for the implementation of quantum communications in Europe?

RA: The European Community has launched a large program dedicated to quantum technologies, called the Quantum Technology Flagship, which will run for 10 years. The aim of the flagship is to accelerate technological development and convert research in these fields into technological innovation. The CiViQ project is one of the projects chosen for the first stage of this program. For the first time in a quantum communications project, several telecommunications operators are also taking part: Orange, Deutsche Telekom and Telefonica. It is therefore an extensive project in the technological development of coherent quantum communications, with research ranging from co-integration with classical forms of communication to photonic integration. CiViQ is intended not only to enable the implementation of quantum cryptography on a very large scale, but also to outline the prospects for its widespread use in communications, reinforcing the security of critical infrastructures by relying on the networks’ physical layer.

What are the technological and scientific challenges which you face?

RA: One of the biggest challenges we face is merging classical optical communications and quantum communications. In particular, we must work on implementing them jointly on the same optical fiber, using similar, if not identical, equipment. To do that, we are calling on Télécom Paris’ diverse expertise. I am working with Cédric Ware and Yves Jaouen, specialists in optical telecommunications. This collaboration allows us to combine our expertise in quantum cryptography and optical networks. We use a state-of-the-art experimental platform to study classical-quantum convergence in optical communications.

More broadly, how does the project reflect the work of other European projects that you are carrying out in quantum communications?

RA: As well as CiViQ, we are taking part in the OpenQKD project, which is also part of the Quantum Technology Flagship. The project involves pilot implementations of QKD, with the prospect of Europe developing a quantum communications infrastructure within 10 to 15 years. I am also involved in standardization work in quantum cryptography with the ETSI QKD Industry Specification Group, where I mainly work on issues such as the cryptographic assessment and certification of QKD technology.

How long have you been involved in developing these technologies?

RA: Télécom Paris has been involved in European research in quantum cryptography and communications for 15 years, in particular through the implementation of the first European QKD network as part of the SECOQC project, which ran from 2004 to 2008. We have also taken part in the FP7 Q-CERT project, which focused on the security of quantum cryptography implementations. More recently, the school has partnered with the Q-CALL H2020 project, which focuses on the industrial development of quantum communications as well as on a possible future “quantum internet”. The latter relies on using quantum communications from end to end, which is made possible by improvements in the reliability of quantum memories.

In parallel, my colleagues who specialize in optical telecommunications have been developing world-class expertise in coherent optical communications for around a decade. CiViQ aims to integrate quantum communications with this type of communication, relying on the fact that the two techniques are based on the same signal processing methods.

What will be the outcomes of the CiViQ project?

RA: We expect key contributions to the experimental laboratory demonstration of the convergence of quantum and classical communications, with a level of integration that has not yet been achieved. A collaboration with Orange is also planned, which will address issues regarding wavelength-division multiplexing. The technology will then be demonstrated between the future Télécom Paris site in Palaiseau and Orange Labs in Châtillon.

Finally, we expect theoretical contributions on new quantum cryptography protocols, security proof techniques and the certification of QKD technology, which will have an impact on standardization.


The ethical challenges of digital identity

Article written in partnership with The Conversation.
By Armen Khatchatourov and Pierre-Antoine Chardel, Institut Mines-Télécom Business School


The GDPR recently came into effect, confirming Europe’s role as an example in personal data protection. However, we must not let it dissuade us from examining issues of identity, which have been redefined in this digital era. This means thinking critically about major ethical and philosophical issues that go beyond the simple question of the protection of personal information and privacy.

Current data protection policy places an emphasis on the rights of the individual. But it does not assess the way in which our free will is increasingly restricted in ever more technologically complex environments, and even less the effects of the digital metamorphosis on the process of subjectification, the individual’s self-becoming. In these texts, the subject is more often than not considered as already constituted, capable of exercising their rights, with their own free will and principles. And yet the characteristic of digital technology, as proposed here, is that it contributes to creating a new form of subjectivity: by constantly redistributing the parameters of constraint and incentive, it creates the conditions for increased individual malleability. We outline this process in the book Les identités numériques en tension (Digital Identities in Tension), written under the Values and Policies of Personal Information Chair at IMT.

The resources established by the GDPR are clearly necessary in supporting individual initiative and autonomy in managing our digital lives. Nonetheless, the very notions of the user’s consent and control over their data on which the current movement is based are problematic. This is because there are two ways of thinking, which are distinct, yet consistent with one another.

New visibility for individuals

Internet users seem to be becoming more aware of the traces they leave, willingly or not, during their online activity (connection metadata, for example). This may serve as support for the consent-based approach. However, this dynamic has its limits.

Firstly, the growing volume of information collected makes the notion of systematic user consent and control unrealistic, if only because of the cognitive overload it would induce. Moreover, changes in the nature of collection methods, as demonstrated by the advent of connected objects, have led to a proliferation of sensors collecting data without the user even realizing it. Video surveillance combined with facial recognition is no longer a mere hypothesis, nor is the knowledge that operators acquire from these data: a layer of digital identity whose content and possible uses are entirely unknown to the person it is sourced from.

What is more, there is a strong tendency for actors, both in government and in the private sector, to want to create a full, exhaustive description of the individual, to the point of reducing them to a long list of attributes. Under this new power regime, what is visible is reduced to what can be recorded as data, and human beings are made available as though they were simple objects.


Surveillance video. Mike Mozart/Wikipedia, CC BY.

 

The ambiguity of control

The second approach at play in our ultra-modern societies concerns the application of this paradigm based on protection and consent within the mechanisms of a neo-liberal society. Contemporary society combines two aspects of privacy: considering the individual as permanently visible, and as individually responsible for what can be seen about them. This set of social standards is reinforced each time the user gives (or withholds) consent to the use of their personal information. At each iteration, the user reinforces their vision of themselves as the author of, and the person responsible for, the circulation of their data. They also assume control over their data, even though this control is no more than an illusion. Above all, they assume responsibility for calculating the benefits that sharing data can bring. In this sense, the increasingly strict application of the paradigm of consent may go hand in hand with the individual being perceived not only as the object of almost total visibility, but also as a rational economic agent, capable of analyzing their own actions in terms of costs and benefits.

This fundamental difficulty means that the future challenges for digital identities imply more than just providing for more explicit control or more enlightened consent. Complementary approaches are needed, likely related to users’ practices (not simply their “uses”), on the condition that such practices bring about resistance strategies for circumventing the need for absolute visibility and definition of the individual as a rational economic agent.

Such digital practices should encourage us to look beyond our understanding of social exchange, whether digital or otherwise, as a matter of calculating potential benefits or externalities. In this way, the challenges of digital identities go far beyond the protection of individuals or of “business models”; they affect the very way in which society as a whole understands social exchange. With this outlook, we must confront the inherent ambivalence and tension of digital technologies by looking at the new forms of subjectification involved in these operations. A more responsible form of data governance may arise from such an analytical exercise.


Armen Khatchatourov, Lecturer-Researcher, Institut Mines-Télécom Business School and Pierre-Antoine Chardel, Professor of social science and ethics, Institut Mines-Télécom Business School

This article has been republished from The Conversation under a Creative Commons license. Read the original article here.

 


Virtual reality improving the comfort of people with visual impairments

People suffering from glaucoma or retinitis pigmentosa develop increased sensitivity to light and gradually lose their peripheral vision. These two symptoms cause discomfort in everyday life and limit the social activity of the people affected. The AUREVI research project involving IMT Mines Alès aims to improve the quality of life of visually-impaired people with the help of a virtual reality headset.

 

Retinitis pigmentosa and glaucoma are degenerative diseases of the eye. While they have different causes, they result in similar symptoms: increased sensitivity to changes in light and gradual loss of peripheral vision. The AUREVI research project was launched in 2013 in order to help overcome these deficiencies. Over 6 years, the project has brought together researchers from IMT Mines Alès and Institut ARAMAV in Nîmes, which specializes in rehabilitation and physiotherapy for people with visual impairments. Together, the two centers are developing a virtual reality-based solution to improve the daily lives of patients with retinitis pigmentosa or glaucoma.

“For these patients, any light source can cause discomfort,” explains Isabelle Marc, a researcher in image processing at IMT Mines Alès who works on the AUREVI project. A computer screen, for example, can dazzle them. When walking around outdoors, the changes in light between shady and bright areas, or even breaks in the clouds, can be violently dazzling. “For visually impaired people, it takes much longer for the eye to adjust to different levels of light than it does for healthy people,” the researcher adds. “While it usually takes us a few seconds to open our eyes again after being dazzled, or to see better in a shady area, these patients need several tens of seconds, sometimes several minutes.”

Controlling light

With the help of a virtual reality headset, the AUREVI team offers visually impaired people with retinitis or glaucoma greater control over light levels. Cameras capture the scene that would normally be seen by the eyes and display it on the screens of the headset. When there is a sudden change in the light, image processing algorithms adjust the brightness of the image in order to keep it constant for the patient’s eyes. For the researchers, the main difficulty with this tool is the delay. “We would like it to be effective in real time. We are aiming for the shortest delay between what appears on the screen of the headset and what the user really sees,” says Isabelle Marc. The team is therefore using logarithmic cameras, which record HDR (High Dynamic Range) images directly, thus reducing the processing time.
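
The kind of brightness regulation described here can be sketched as follows (a minimal illustration, not the AUREVI algorithm): each HDR frame is rescaled so that its mean luminance stays at a comfort level calibrated for the patient, whatever the lighting of the scene.

```python
import numpy as np

TARGET_LUMINANCE = 0.25  # comfortable mean brightness, tuned per patient

def regulate_brightness(hdr_frame: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Scale an HDR luminance frame so its mean brightness stays constant.

    hdr_frame: 2-D array of linear luminance values from the camera.
    Returns a display-ready frame in [0, 1].
    """
    gain = TARGET_LUMINANCE / max(hdr_frame.mean(), 1e-6)
    tone_mapped = np.clip(hdr_frame * gain, 0.0, 1.0)
    return tone_mapped ** (1.0 / gamma)  # simple gamma encoding for display

# Simulate walking from shade into bright sunlight: luminance jumps 50x,
# but the displayed mean stays at the calibrated comfort level.
rng = np.random.default_rng(3)
shade = rng.uniform(0.0, 0.1, size=(480, 640))
sunlight = shade * 50
for frame in (shade, sunlight):
    out = regulate_brightness(frame)
    print(f"scene mean {frame.mean():.2f} -> displayed mean {out.mean():.2f}")
```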

The headset is designed to replace the dark glasses usually worn by people with this type of pathology. “It’s a pair of adaptive dark sunglasses. The shade varies pixel by pixel” Isabelle Marc explains. An advantage of this tool is that it can be calibrated to suit each patient. Depending on the stage of the retinitis or glaucoma, the level of sensitivity will be different. This can be accounted for in the way the images are processed. To do so, scientists have developed a specific test for evaluating the degree of light a person can bear. “This test could be used to configure the tool and adapt it optimally for each user” says the researcher.

The first clinical trials of the headset began on fifteen people in 2016. The initial goal was to measure the light levels considered comfortable by each person, and to gather feedback on the comfort of the tool, before evaluating the service provided to people with visual impairments. To do this, the researchers vary the brightness of a screen while patients wear the headset and give their feedback. Isabelle Marc reports that “the initial feedback from patients shows that they prefer the headset over other tools for controlling light levels”. However, the testers also commented on the bulkiness of the device. “For now, we are working with the large headsets available on the market, which are not designed to be worn when you are walking around,” the researcher concedes. “We are currently looking for industrial partners who could help us make the shift to a pre-series prototype more suitable for walking around with.”

Showing what patients can’t see

Being able to control light levels is a major improvement in terms of visual comfort for patients, but the researchers want to take things even further. The AUREVI project aims to compensate for another symptom caused by glaucoma and retinitis: loss of stereo vision. Patients gradually lose degrees of their visual field, reaching 2 or 3 degrees at around 60 years old, and sometimes full blindness. Before this last stage comes an important step in the progression of the handicap, as Isabelle Marc describes: “Once vision goes below 20 degrees of the visual field, the images from the two eyes no longer overlap, and the brain cannot reconstruct the 3D information.”


Using image processing techniques, the AUREVI project hopes to give people with visual impairments indications about nearby objects.

 

Without stereoscopic vision, the patient can no longer perceive depth of field. One of the future steps of the project will be to incorporate a feature into the headset to compensate for this deficiency in three-dimensional vision. The researchers are working on methods for communicating information on depth. They are currently looking at the idea of displaying color codes. A close object would be colored red, for example, and a far object blue. As well as improving comfort, this feature would also provide greater safety. Patients suffering from an advanced stage of glaucoma or retinitis do not see objects above their head which could hurt them, nor do they see those at their feet which are a tripping hazard.
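
The color-coding idea can be illustrated with a short sketch (hypothetical parameters, not the project’s implementation): a depth map is mapped to an overlay in which nearby obstacles appear red and distant objects blue.

```python
import numpy as np

def depth_to_overlay(depth_m: np.ndarray, near: float = 0.5,
                     far: float = 5.0) -> np.ndarray:
    """Map a depth map (meters) to an RGB overlay: red = close, blue = far.

    Returns an array of shape (H, W, 3) with values in [0, 1].
    """
    # Normalize depth to [0, 1] between the near and far planes.
    t = np.clip((depth_m - near) / (far - near), 0.0, 1.0)
    overlay = np.zeros(depth_m.shape + (3,))
    overlay[..., 0] = 1.0 - t   # red channel: strongest for close obstacles
    overlay[..., 2] = t         # blue channel: strongest for distant objects
    return overlay

# Hypothetical depth frame: an obstacle at 0.7 m in the lower-left corner
# of an otherwise distant scene.
depth = np.full((240, 320), 4.5)
depth[150:, :100] = 0.7
rgb = depth_to_overlay(depth)
print("obstacle pixel:", rgb[200, 50], "background pixel:", rgb[10, 300])
```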

Losing information about their surroundings gives people with visual impairments the feeling of being in danger, which increases as the symptoms get worse. Combined with increasing discomfort with changes in light levels, this fear can often lead to social exclusion. “Patients tend to go out less, especially outdoors,” notes Isabelle Marc. “On a professional level, their refusal to participate in activities outside of work with their colleagues is often misunderstood. They may still have good sight for reading, for example, and so people with normal sight may have a hard time understanding their handicap. Therefore, their social circle gradually shrinks.” The headset developed by the AUREVI project is an opportunity to improve social integration for people with visual impairments. For this reason, it receives financial support from several companies as part of their disabilities and diversity missions, in particular Sopra Steria, Orano, Crédit Agricole and Thalès. The researchers rely on this aid for people with disabilities in developing their project.


Qualcomm, EURECOM and IMT joining forces to prepare the 5G of the future

5G is moving into the second phase of its development, which will bring a whole host of new technological challenges and innovations. Research and industry stakeholders are working hard to address the challenges posed by the next generation of mobile communication. In this context, Qualcomm, EURECOM and IMT recently signed a partnership agreement also including France Brevets. What is the goal of the partnership? To better prepare 5G standards and release technologies from laboratories as quickly as possible. Raymond Knopp, a researcher in communication systems at EURECOM, presents the content and challenges of this collaboration.

 

What do you gain from the partnership with Qualcomm and France Brevets?

Raymond Knopp: As researchers, we work together on 5G technologies. In particular, we are interested in those which are closely examined by 3GPP, the international standards organization for telecommunication technologies. In order to apply our research outside our laboratories, many of our projects are carried out in collaboration with industrial partners. This gives us more relevance in dealing with the real-world problems facing the technology. Qualcomm is one of these industrial partners and is one of the most important companies in the generation of intellectual property for 4G and 5G systems. In my view, it is also one of the most innovative in the field. The partnership with Qualcomm gives us a more direct impact on technology development. With additional support from France Brevets, we can play a more significant role in defining the standards for 5G. We have a lot to learn about generating intellectual property, and these partners provide us with this knowledge.

What technologies are involved in the partnership?

RK: 5G is currently moving into its second phase. The first phase was aimed at introducing new network architecture aspects and new frequencies, which meant increasing the available frequency bands by about 5 or 6 times. This phase is now operational, so the remaining innovations there are incremental. The technologies we are working on now are mainly for the second phase, which is oriented more towards private networks, applications involving machines and vehicles, new network control systems, and so on. Priority will be given to network slicing and software-defined networking (SDN) technologies, for example. This is also the phase in which low-latency and highly robust communication will be developed. This is the type of technology we are working on under this partnership.

Are you already thinking of the implementation of the technologies developed in this second phase?

RK: For now, our work on implementation is very much aimed at the first-phase technologies. We are involved in the H2020 projects 5Genesis and 5G-Eve, which conduct 5G trials both on mobile terminals and on the network side. These trials involve our OpenAirInterface platform. For now, the implementation of second-phase technologies is not a priority. Nevertheless, intellectual property and any standards generated in the partnership with Qualcomm could potentially undergo implementation tests on our platform. However, it will be some time before we reach that stage.

What does a partnership with an industrial group like this represent for an academic researcher like yourself?

RK: It is an opportunity to close the loop between research, prototyping, standards and industrialization, and to see our work applied directly to the 5G technologies we will be using tomorrow. In the academic world in general, we tend to be uni-directional. We write publications, and some of them contain elements that could be included in standards, but this isn’t done and they are left accessible to everyone. Of course, companies go on to use them without our involvement, which is a pity. By setting up partnerships like this one with Qualcomm, we learn to appreciate the value of our technologies and to develop them together. I hope it will encourage more researchers to do the same. The field of academic research in France needs to be aware of the importance of closely following the standards and industrialization process!

 


The unintelligence of artificial intelligence

Despite the significant advances made by artificial intelligence, it still struggles to copy human intelligence. Artificial intelligence remains focused on performing tasks, without understanding the meaning of its actions, and its limitations are therefore evident in the event of a change of context or when the time comes to scale up. Jean-Louis Dessalles outlines these problems in his latest book entitled Des intelligences très artificielles (Very Artificial Intelligence). The Télécom Paris researcher also suggests avenues for investigation into creating truly intelligent artificial intelligence. He presents some of his ideas in the following interview with I’MTech.

 

Can artificial intelligence (AI) understand what it is doing?

Jean-Louis Dessalles: It has happened, yes. It was the case with the SHRDLU program, for example, an invention by Terry Winograd during his thesis at MIT, in 1970. The program simulated a robot that could stack blocks and speak about what it was doing at the same time. It was incredible, because it was able to justify its actions. After making a stack, the researchers could ask it why it had moved the green block, which they had not asked it to move. SHRDLU would reply that it was to make space in order to move the blocks around more easily. This was almost 50 years ago and has remained one of the rare isolated cases of programs capable of understanding their own actions. These days, the majority of AI programs cannot explain what they are doing.

Why is this an isolated case?

JLD: SHRDLU was very good at explaining how it stacked blocks in a virtual world of cubes and pyramids. When the researchers wanted to scale the program up to a more complex world, it was considerably less effective. This type of AI became something able to carry out a given task but unable to understand it. Recently, IBM released Project Debater, an AI program that can compete in debating competitions. It is very impressive, but if we analyze what the program is doing, we realize it understands very little. The program searches the Internet, extracts phrases which are logically linked, and puts them together in an argument. When the audience listens, it has the illusion of a logical construction, but it is simply a compilation of phrases from a superficial analysis. The AI program doesn’t understand the meaning of what it says.

IBM’s Project Debater speaking on the statement “We should subsidize preschools”

 

Does it matter whether AI understands, as long as it is effective?

JLD: Systems that don’t understand end up making mistakes that humans wouldn’t make. Automated translation systems are extremely useful, for example. However, they can make mistakes on simple words because they do not understand implicit meaning, when even a child could understand the meaning due to the context. The AI behind these programs is very effective as long as it remains within a given context, like SHRDLU. As soon as you put it into an everyday life situation, when you need it to consider context, it turns out to be limited because it does not understand the meaning of what we are asking it.

Are you saying that artificial intelligence is not intelligent?

JLD: There are two fundamental, opposing visions of AI these days. On the one hand, a primarily American view which focuses on performance, on the other hand, Turing’s view that if an AI program cannot explain what it is doing or interact with me, I will not call it “intelligent”. From a utilitarian point of view, the first vision is successful in many ways, but it comes up against major limitations, especially in problem-solving. Take the example of a connected building or house. AI can make optimal decisions, but if the decisions are incomprehensible to humans, they will consider the AI stupid. We want machines to be able to think sequentially, like us: I want to do this, so I have to change that; and if that creates another problem, I will then change something else. The machine’s multi-criteria optimization sets all the parameters at the same time, which is incomprehensible to us. It will certainly be effective, but ultimately the human will be the one judging whether the decision made is appropriate or not, according to their values and preferences, including their will to understand the decision.

Why can’t a machine understand the meaning of the actions we ask of it?

JLD: Most of today’s AI programs are based on digital techniques, which do not incorporate the issue of representation. If I have a problem, I set the parameters and variables, and the neural network gives me a result based on a calculation I cannot understand. There is no way of incorporating concepts or meaning. There is also work being done on ontologies. Meaning is represented in the form of preconceived structures where everything is explicit: a particular idea or concept will be paired with a linguistic entity. For example, to give a machine the meaning of the word “marriage”, we will associate it with a conceptual description based on a link between a person A and a person B, and the machine will discover for itself that there is a geographical proximity between these two people, they live in the same place, etc. Personally, I don’t believe that ontologies will bring us closer to an AI which understands what it is doing, and thus one that is truly intelligent under Turing’s definition.

What do you think is the limitation of ontologies?

JLD: They too have difficulty being scaled up. For the example of marriage, the challenge lies in giving the machine the full range of meaning that humans attribute to this concept. Depending on an individual’s values and beliefs, their idea of marriage will differ. Making AI understand this requires constructing representations that are complex, sometimes too complex. Humans understand a concept and its subtleties very quickly, with very little initial description. Nobody spends hours on end teaching a child what a cat is. The child does it alone, by observing just a few cats and finding what they have in common. For this, we use special cognitive mechanisms, including a search for simplicity, which enables us to reconstruct the missing part of a half-hidden object, or to settle on the intended meaning of a word that has several.

What does AI lack in order to be truly intelligent and acquire this implicit knowledge?

JLD: Self-observation requires contrast, which is something AI lacks. The meaning of words changes with time and depending on the situation. If I say to you, “put this in the closet”, you will know which piece of furniture to turn to, even though the closet in your office and the one in your bedroom do not look alike, either in shape or in what they contain. This is what allows us to understand very vague concepts like the word “big”. I can talk about “big bacteria” or a “big galaxy” and you will understand me, because you know that the word “big” does not have an absolute meaning. It is based on a contrast between the designated object and the typical corresponding object, depending on the context. Machines do not yet know how to do this. They would recognize the word “big” as a characteristic of the galaxy, but using “big” to describe bacteria would make no sense to them. They need to be able to make contrasts.
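To make the idea of contrast concrete, here is a toy sketch of what judging “big” against the typical member of a category could look like, which is the kind of relative judgment described above as still out of reach for machines. The size figures are assumed rough orders of magnitude and the threshold is arbitrary:

```python
# Toy illustration of "big" as a contrast with the typical object of a
# category rather than an absolute measure. Sizes are assumed rough
# orders of magnitude, in metres.
TYPICAL_SIZE_M = {"bacterium": 2e-6, "cat": 0.5, "galaxy": 1e21}

def is_big(category, size_m, factor=3.0):
    """'Big' means noticeably larger than the typical member of the category."""
    return size_m > factor * TYPICAL_SIZE_M[category]

print(is_big("bacterium", 1e-5))  # True: enormous for a bacterium
print(is_big("galaxy", 1e-5))     # False: negligible for a galaxy
```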

Is this feasible?

JLD: Quite likely, yes. But we would have to augment digital techniques to do so. AI designers are light years away from being able to address this type of question. What they want to figure out is how to improve the performance of their multi-layer neural networks. They do not see the point of striving towards human intelligence. IBM’s Project Debater is a perfect illustration of this: it is above all about classification, with no ability to make contrasts. On the face of it, it is very impressive, but it is not as powerful as human intelligence, with its cognitive mechanisms for extrapolating and making arguments. The IBM program contrasts phrases according to the words they contain, while we contrast them based on the ideas they express. In order to be truly intelligent, AI will need to go beyond simple classification and attempt to reproduce, rather than mimic, our cognitive mechanisms.

 


Light, a possible solution for a sustainable AI

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)


We are currently witnessing a rapidly growing adoption of artificial intelligence (AI) in our everyday lives, which has the potential to translate into a variety of societal changes, including improvements to the economy, better living conditions, easier access to education, well-being, and entertainment. Such a much-anticipated future, however, is tainted by issues related to privacy, explainability and accountability, to name a few, which threaten the smooth adoption of AI and are at the center of various debates in the media.

A perhaps more worrying aspect is related to the fact that current AI technologies are completely unsustainable, and unless we act quickly, this will become the major obstacle to the wide adoption of artificial intelligence in society.

AI and Bayesian machine learning

But before diving into the issues of sustainability of AI, what is AI? AI aims at building artificial agents capable of sensing and reasoning about their environment, and ultimately learning by interacting with it. Machine Learning (ML) is an essential component of AI, which makes it possible to establish correlations and causal relationships among variables of interest from data and prior knowledge of the processes characterizing the agent’s environment.

For example, in life sciences, ML can be helpful to determine the relationship between grey matter volume and the progression of Alzheimer’s disease, whereas in environmental sciences it can be useful to estimate the effect of CO₂ emissions on climate. One key aspect of some ML techniques, in particular Bayesian ML, is the possibility of doing this while accounting for the uncertainty due to our lack of knowledge of the system, or to the fact that only a finite amount of data is available.

Such uncertainty is of fundamental importance in decision making when the cost associated with different outcomes is unbalanced. A couple of examples of domains where AI can be of tremendous help include a variety of medical scenarios (e.g., diagnosis, prognosis, personalised treatment), environmental sciences (e.g., climate, earthquake/tsunami), and policy making (e.g., traffic, tackling social inequality).
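To make the role of uncertainty concrete, here is a minimal sketch of Bayesian linear regression, one of the simplest Bayesian ML models. The data, prior precision and noise precision below are illustrative assumptions, not taken from any of the applications mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = 0.7 * X[:, 0] + rng.normal(scale=0.3, size=20)

Phi = np.hstack([np.ones((len(X), 1)), X])   # bias + linear feature
alpha, beta = 1.0, 1.0 / 0.3**2              # prior precision, noise precision (assumed)

# Conjugate Gaussian update: posterior distribution over the weights.
S = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
m = beta * S @ Phi.T @ y

# Predictive mean and variance at a new input: the variance quantifies
# what the model does not know, which is what decision making needs.
phi_new = np.array([1.0, 2.5])
mean = phi_new @ m
var = 1.0 / beta + phi_new @ S @ phi_new
print(f"prediction: {mean:.2f} ± {var**0.5:.2f}")
```

The predictive variance grows away from the observed data, which is exactly the signal a decision maker needs when the costs of different outcomes are unbalanced.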

Unsustainable AI

Recent spectacular advances in ML have contributed to an unprecedented boost of interest in AI, which has drawn huge amounts of private funding into the field (Google, Facebook, Amazon, Microsoft, OpenAI). All this is pushing research forward, but it somehow disregards the impact on the environment. The energy consumption of current computing devices is growing at an uncontrolled pace. It is estimated that within the next ten years the power consumption of computing devices will reach 60% of the total amount of energy produced, and that this will become completely unsustainable by 2040.

Recent studies show that the ICT industry today generates approximately 2% of global CO₂ emissions, comparable to the worldwide aviation industry, but the sharp growth curve forecast for ICT-based emissions is truly alarming and far outpaces aviation. Because ML and AI are fast-growing ICT disciplines, this is a worrying perspective. Recent studies also show that training a famous ML model, called an auto-encoder, can leave a carbon footprint as large as that of five cars over their lifetimes.

If, in order to create better living conditions and improve our estimation of risk, we are impacting the environment to such an extent, we are bound to fail. What can we do to radically change this?

Let there be light

Transistor-based solutions to this problem are starting to appear. Google developed the Tensor Processing Unit (TPU) and made it available in 2018. TPUs offer much lower power consumption than GPUs and CPUs per unit of computation. But can we break away from transistor-based technology to compute with lower power, and perhaps faster? The answer is yes! In the last couple of years, there have been attempts to exploit light for fast and low-power computations. Such solutions are somewhat rigid in the design of the hardware and are suited to specific ML models, e.g. neural networks.

Interestingly, France is at the forefront in this, with hardware development from private funding and national funding for research to make this revolution a concrete possibility. The French company LightOn has recently developed a novel optics-based device, which they named Optical Processing Unit (OPU).

“Optical computing leading the AI scale-up”, Igor Carron, CEO, LightOn (CognitionX video, 2018).

 

In practice, OPUs perform a specific operation: a linear transformation of input vectors followed by a nonlinear transformation. Interestingly, this is done in hardware by exploiting the scattering properties of light, so that in practice these computations happen at the speed of light and with low power consumption. Moreover, it is possible to handle very large matrices (in the order of millions of rows and columns), which would be challenging with CPUs and GPUs. Due to the scattering of light, this linear transformation is equivalent to a random projection, i.e. the transformation of the input data by a set of random numbers whose distribution can be characterized. Are random projections of any use? Surprisingly, yes! A proof-of-concept that this can be useful to scale computations for some ML models (kernel machines, which are an alternative to neural networks) has been reported here. Other ML models can also leverage random projections for prediction or change point detection in time series.
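The operation described above, a random linear projection followed by a nonlinearity, can be simulated in software. As a hedged sketch, here is the standard random-Fourier-feature construction for approximating an RBF kernel, shown as one common way random projections serve kernel machines rather than as LightOn’s exact pipeline; the matrix sizes and lengthscale are arbitrary assumptions:

```python
import numpy as np

def random_features(X, D=4096, lengthscale=1.0, seed=None):
    """Random Fourier features: a random linear projection of the inputs
    followed by a nonlinearity (cosine), whose inner products approximate
    an RBF kernel. An OPU performs the projection optically; here it is
    simulated with NumPy."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, D))   # random projection
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)            # nonlinear transformation

# Usage: inner products of the features approximate the kernel, so a
# kernel machine can be trained directly on the features.
X = np.random.default_rng(0).normal(size=(5, 3))
Phi = random_features(X, seed=1)
K_approx = Phi @ Phi.T   # ≈ exp(-||x - x'||² / (2 · lengthscale²))
```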

We believe this is a remarkable direction for making modern ML scalable and sustainable. The biggest challenge for the future, however, is how to rethink the design and implementation of Bayesian ML models so as to exploit the computations that OPUs offer. Only now are we starting to develop the methodology needed to take full advantage of this hardware for Bayesian ML. I’ve recently been awarded a French fellowship to make this happen.

It’s fascinating how light and randomness are not only pervasive in nature but also mathematically useful for performing computations that can solve real problems.


Created in 2007 to help accelerate and share scientific knowledge on key societal issues, the Axa Research Fund has been supporting nearly 600 projects around the world conducted by researchers from 54 countries. To learn more, visit the site of the Axa Research Fund.

Maurizio Filippone, Professor at EURECOM, Institut Mines-Télécom (IMT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.