
Our exposure to electromagnetic waves: beware of popular belief

Joe Wiart, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay

This article is published in partnership with “La Tête au carré”, the daily radio show on France Inter dedicated to the popularization of science, presented and produced by Mathieu Vidard. The author of this text, Joe Wiart, discussed his research on the show broadcast on April 28, 2017 accompanied by Aline Richard, Science and Technology Editor for The Conversation France.

 

For over ten years, controlling exposure to electromagnetic waves and to radio frequencies in particular has fueled many debates, which have often been quite heated. An analysis of reports and scientific publications devoted to this topic shows that researchers are mainly studying the possible impact of mobile phones on our health. At the same time, according to what has been published in the media, the public is mainly concerned about base stations. Nevertheless, mobile phones and wireless communication systems in general are widely used and have dramatically changed how people around the world communicate and work.

Globally, the number of mobile phone users now exceeds 5 billion. And according to the findings of an Insee study, the percentage of individuals aged 18-25 in France who own a mobile phone is 100%! It must be noted that the use of this method of communication is far from being limited to simple phone calls — by 2020 global mobile data traffic is expected to represent four times the overall internet traffic of 2005.  In France, according to the French regulatory authority for electronic and postal communications (ARCEP), over 7% of the population connected to the internet exclusively via smartphones in 2016. And the skyrocketing use of connected devices will undoubtedly accentuate this trend.

 


Smartphone Zombies. Ccmsharma2/Wikimedia

 

The differences in perceptions of the risks associated with mobile phones and base stations can be explained in part by the fact that the two are not seen as being related. Moreover, while exposure to electromagnetic waves is considered to be “voluntary” for mobile phones, individuals are often said to be “subjected” to waves emitted by base stations. This helps explain why, despite the widespread use of mobiles and connected devices, the deployment of base stations remains a hotly debated issue, often focusing on health impacts.

In practice, national standards for limiting exposure to electromagnetic waves are based on the recommendations of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and on scientific expertise. A number of studies have been carried out on the potential effects of electromagnetic waves on our health. Of course, research is still being conducted in order to keep pace with the constant advancements in wireless technology and its many uses. This research is all the more important since radio frequencies from mobile telephones have been classified as “possibly carcinogenic to humans” (Group 2B) following a review conducted by the International Agency for Research on Cancer.

Given the great and ever-growing number of young people who use smartphones and other mobile devices, this heightened vigilance is essential. In France, the National Environmental and Occupational Health Research Programme (PNREST) of the National Agency for Food, Environmental and Occupational Health Safety (Anses) is responsible for monitoring the situation. And to address public concerns about base stations (of which there are 50,000 located throughout France), many municipalities have discussed charters to regulate where they may be located. Paris, striving to set an example for France and major European cities, signed such a charter in 2003, officially limiting exposure from base stations through an agreement with France’s three major operators.


Hillside in Miramont, Hautes-Pyrénées, France. Florent Pécassou/Wikimedia

This charter was updated in 2012 and was further discussed at the Paris Council in March, in keeping with the Abeille law, which was proposed to the National Assembly in 2013 and passed in February 2015, focusing on limiting the exposure to electromagnetic fields. Yet it is important to note that this initiative, like so many others, concerns only base stations despite the fact that exposure to electromagnetic waves and radio frequencies comes from many other sources. By focusing exclusively on these base stations, the problem is only partially resolved. Exposure from mobile phones for users or their neighbors must also be taken into consideration, along with other sources.

In practice, exposure linked to base stations is far from representing the majority of overall exposure to electromagnetic waves. As many studies have demonstrated, exposure from mobile phones is much more significant. Fortunately, the deployment of 4G, followed by 5G, will not only improve speed but will also contribute to significantly reducing the power radiated by mobile phones. Small cell network architecture, with small antennas supplementing larger ones, will also help limit radiated power. It is important to study solutions resulting in lower exposure to radio frequencies at different levels, from radio devices to network architecture or the management and provision of services. This is precisely what the partners in the LEXNET European project set about doing in 2012, with the goal of cutting public exposure to electromagnetic fields and radio frequencies in half.

In the near future, fifth-generation networks will use several frequency bands and various architectures in a dynamic fashion, enabling them to handle both increased speed and the proliferation of connected devices. There will be no choice but to effectively consider the network-terminal relationship as a duo, rather than treating the two as separate elements. This new paradigm has become a key priority for researchers, industry players and public authorities alike. And from this perspective, the latest discussions about the location of base stations and renewing the Paris charter prove to be emblematic.

 

Joe Wiart, Chairholder in research on Modeling, Characterization and Control of Exposition to Electromagnetic Waves at Institut Mines Telecom, Télécom ParisTech – Institut Mines-Télécom, Université Paris-Saclay

This article was originally published in French in The Conversation France.


IMT awards the title of Doctor Honoris Causa to Jay Humphrey, Professor at Yale University

This prestigious honor was awarded on 29 June at Mines Saint-Etienne by Philippe Jamet, President of IMT, in the presence of many important scientific, academic and institutional figures. IMT’s aim was to honor one of the inventors and pioneers of a new field of science – mechanobiology – which studies the effects of mechanical stress (stretches, compressions, shearing, etc.) on cells and living tissue.

 

A world specialist in cardiovascular biomechanics, Jay D. Humphrey has worked tirelessly throughout his career to galvanize the biomechanical engineering community and draw attention to the benefits that this science can offer to improve medicine.

Jay D. Humphrey works closely with the Engineering & Health Center (CIS) of Mines Saint-Etienne. In 2014, he invited Stéphane Avril, Director of the CIS, to Yale University to work on biomechanics applied to soft tissues and the prevention of ruptured aneurysms, which notably led to the award of two grants from the prestigious European Research Council:

 Biomechanics serving Healthcare

For Christian Roux, Executive Vice President for Research and Innovation at IMT, “With this award the institute wanted to recognize this important scientist, known throughout the world for the quality of his work, his commitment to the scientific community and his strong human and ethical values. Professor Humphrey also leads an exemplary partnership with one of IMT’s most cutting-edge laboratories, offering very significant development opportunities.”

[author title=”biography of Jay D. Humphrey” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/07/PortraitJHumphrey.jpg”]

Jay Humphrey is Professor and Chair of the Biomedical Engineering Department of the prestigious Yale University in the United States. He holds a PhD in mechanical engineering from the Georgia Institute of Technology (Atlanta, United States) and completed a post-doctorate in cardiovascular medicine at Johns Hopkins University (Baltimore, United States).

He chaired the scientific committee of the World Congress of Biomechanics in 2014, held in Boston and attended by more than 4,000 people.

He co-founded the journal Biomechanics and Modeling in Mechanobiology in 2002, which today plays a leading role in the field of biomechanics.

Jay D. Humphrey has written more than 245 papers, which have been widely praised and cited more than 25,000 times. His works are considered essential references, and engineering students throughout the world rely on his introductions to biomechanics and his works on cardiovascular biomechanics.

He is heavily involved in the training and support for students – from Master’s degrees to PhDs – and more than a hundred students previously under his supervision now hold posts in top American universities and major international businesses, such as Medtronic.

Jay D. Humphrey has already received a number of prestigious awards. He plays an influential role in numerous learned societies, and in the assessment committees of the National Institutes of Health (NIH) in the United States.[/author]

 


Energy Transitions: The challenge is a global one, but the solutions are also local

For Bernard Bourges, there is no doubt: there are multiple energy transitions. A researcher at IMT Atlantique studying changes in the energy sector, he takes a multi-faceted view of the transformations happening in this field. For him, each situation and each territory has its own specificities, which call not for a single overall solution but for a multitude of responses to today’s great energy challenges. This is one of the central points of the “Energy Transitions: mechanisms and levers” MOOC, which he is running from 15 May until 17 July 2017. On this occasion, he gives us his view of the current metamorphosis in the field of energy.

 

You prefer to talk about energy transitions in the plural, rather than the energy transition. Why is the plural form justified?

Bernard Bourges: There is a lot of talk about global challenges, the question of climate change, and energy resources to face the growing population and economic development. This is the general framework, and it is absolutely undeniable. But, on the scale of a country, a territory, a household, or a company, these big challenges occur in extremely different ways. The available energy resources, the level of development, public policy, economic stakes, or the dynamics of those involved, are parameters which change between two given situations, and which have an impact on the solutions put in place.

 

Is energy transition different from one country to another?

BB: It can be. The need to change energy models in order to reduce global warming is absolutely imperative. In vast regions, such as the Pacific Islands, global warming is a matter of life or death for populations. Conversely, in some cases, rising temperatures may even be seen as an opportunity for economic development: countries like Russia or Canada will gain new cultivable land. There are also contradictions in terms of resources. The development of renewable energies means that countries with a climate suited to solar or wind power production will have greater energy independence. At the same time, technical advances and melting ice caps are making some fossil fuel deposits accessible that had previously been too costly to use. This implies a multitude of opportunities, some of which are dangerously tempting, and contradictory interests, often within the same country or the same company.

 

You highlight the importance of economic stakes. What about political decisions?

BB: Of course, there is an important political dimension, as there is a wide range of possibilities. To make the system more complex, energy is an element which overlaps with other environmental challenges, as well as social ones like employment. This results in a specific alchemy. Contradictory choices will arise, according to the importance politicians place on these great problems of society. In France as in other countries, there is a law on energy transition. But this does not mean that this apparent, inferred unanimity is real. It is important to realize that behind the scenes, there may be strong antagonism. This conditions political, social and even technological choices.

 

“Behind the question of energy, there are physical laws, and we cannot just do what we want with them.”

 

On the question of technology, there is a kind of optimism which consists in believing that science and innovation will solve the problem. Is it reasonable to believe this?

BB: This feeling is held by part of the population. We need to be careful about this point, as it is also marketing speak used to sell solutions. It is very clear that technology will greatly contribute to the solutions put in place, but for now there is no miracle cure. Technology will probably never be capable of satisfying all needs for growth, at a reasonable cost, and without a major impact on the climate or the environment. I often see inventors pop up, promising perpetual motion or 100% efficiency, or even more. It’s absurd! Behind the question of energy, there are physical laws, and we cannot just do what we want with them.

 

What about the current technologies for harvesting renewable resources? They seem satisfactory on a large scale.

BB: The enthusiasm needs to be tempered. For example, there is currently a great deal of excitement surrounding solar power, to the point where some people imagine all households on the planet becoming energy independent thanks to solar panels. However, this utopia has a technological limit. The sun is an intermittent resource: it is only available for half the day, and only in fine weather. This energy must therefore be stored in batteries. But batteries use rare resources such as lithium, which are not limitless. Extracting these resources has environmental impacts. What could be a solution for several tens of millions of Europeans can therefore become a problem for billions of other people. This is one of the illustrations of the multifaceted nature of energy transitions, which we highlight in our MOOC.

 

Does this mean we should be pessimistic about the potential solutions provided by natural resources?

BB: The ADEME carried out a study on a 100% renewable electricity mix by 2050. One of the most symbolic conclusions was that it is possible, but that we will have to manage demand. This implies being sure that new types of usage will not appear. But this is difficult, as innovations will result in a drop in energy prices. If the costs decrease, the result will be that new types of use are made possible, which will increase demand. The realistic solution is to use a combination of solutions that use renewable resources (locally or on a large scale), intelligent management of energy networks, and innovative technologies. Managing demand is not only based on technological solutions, but also on changes in organization and behavior. Each combination will therefore be specific to a given territory or situation.

 

Doesn’t this type of solution make energy management more complex for consumers, whether individuals or companies? 

BB: This question is typical of the mistake people often make, that of limiting the question of energy to electricity. Energy is certainly a question of electricity usage, but also thermal needs, heating, and mobility. The goal for mobility will be to switch to partially electric transport modes, but we are not there yet, as this requires a colossal amount of investment. For thermal needs, the goal is to reduce demand by increasing the energy efficiency of buildings. Electricity is really only a third of the problem. Local solutions must also provide answers to other uses of energy, with completely different types of action. Having said this, electricity does take center-stage, as there are great changes underway. These changes are not only technological but also institutional (liberalization for example), difficult to understand, and sometimes even misleading for consumers.

 

What do you mean by that?

BB: For the moment, we cannot differentiate between the electrons in the network. No provider can tell you at a given moment whether you are receiving electricity produced by a wind farm, or generated by a nuclear power plant. We therefore must be wary of energy providers who tell us the opposite. This is another physical constraint. There are also legal and economic constraints. But we have understood that in this time of great change, there are many actors who are trying to win, or at least trying not to lose.

This is also why we are running this MOOC. The consumer needs to be helped in understanding the energy chain: where does energy come from? What are the basic physical laws involved? We have to try and decipher these points. But, in order to understand energy transitions, we also have to identify the constraints linked specifically to human societies and organizations. This is another point we present in the MOOC, and we make use of the diverse range of skills of people at IMT’s schools and external partners.

 

This article is part of our dossier Digital technology and energy: inseparable transitions!

 

[divider style=”normal” top=”20″ bottom=”20″]

The MOOC “Energy Transitions: mechanisms and levers” in brief

The MOOC “Energy Transitions: mechanisms and levers” at IMT is available (in French) on the “Fun” platform. It will take place from 15 May to 17 July 2017. It is aimed at both consumers wanting to gain a better understanding of energy, and professionals who want to identify specific levers for their companies.

[divider style=”normal” top=”20″ bottom=”20″]

 


4D Imaging for Evaluating Facial Paralysis Treatment

Mohamed Daoudi is a researcher at IMT Lille Douai, and is currently working on an advanced system of 4-dimensional imaging to measure the after-effects of peripheral facial paralysis. This tool could prove especially useful to practitioners in measuring the severity of the damage and in their assessment of the efficacy of treatment.

 

“Paralysis began with my tongue, followed by my mouth, and eventually the whole side of my face.” There are many accounts of facial paralysis on forums. Whatever the origin may be, if the facial muscles are no longer responding, it is because the facial nerve stimulating them has been affected. Depending on the part of the nerve affected, the paralysis may be peripheral, affecting one of the lateral halves of the face (hemifacial paralysis), or central, affecting the lower part of the face.

In the case of peripheral paralysis, so many internet users enquire about the origin of the problem precisely because in 80% of cases the paralysis occurs without apparent cause. However, there is total recovery in 85 to 90% of cases. The other common causes of facial paralysis are facial trauma and vascular or infectious causes.

During follow-up treatment, doctors try to re-establish facial symmetry and balance, both at rest and during facial expressions. This requires treating the healthy side of the face as well as the affected side. The healthy side often presents hyperactivity, which makes it look as if the person is grimacing and creates paradoxical movements. Many medical, surgical, and physiotherapy procedures are used in the process. One of the treatments used is injecting botulinum toxin, which partially blocks certain muscles, restoring balance to facial movements.

Nonetheless, there is no analysis tool that can quantify the facial damage and give an objective observation of the effects of treatment before and after injection. This is where IMT Lille Douai researcher Mohamed Daoudi[1] comes in. His specialty is 3D statistical analysis of shapes, in particular faces. He especially studies the dynamics of faces and has developed an algorithm for analyzing facial expressions that makes it possible to quantify the deformations of a moving face.

 

Smile, you’re being scanned

Two years ago, a partnership was created between Mohamed Daoudi, Pierre Guerreschi, Yasmine Bennis and Véronique Martinot from the reconstructive and aesthetic plastic surgery department at the University Hospital of Lille. Together they are creating a tool which makes a 3D scan of a moving face. An experimental protocol was soon set up.[2]

The patients are asked to attend a 3D scan appointment before and after the botulinum toxin injection. “Firstly, we ask them to make stereotypical facial expressions, such as smiling or raising their eyebrows. We then ask them to pronounce a sentence which triggers a maximum number of facial muscles and also tests their spontaneous movement”, explains Mohamed Daoudi.

The 4D results pre- and post-injection are then compared. The impact of the peripheral facial paralysis can be evaluated, but also quantified and compared. In this sense, the act of smiling is far from trivial. “When we smile, our muscles contract and the face undergoes many distortions. It is the facial expression which gives us the clearest image of the asymmetry caused by the paralysis”, the researcher specifies.

The ultimate goal is to manage to re-establish a patient’s facial symmetry when they smile. Of course, it is not a matter of perfect symmetry, as no face is truly symmetrical. We are talking about socially accepted symmetry: the zones stimulated in a facial expression must roughly follow the same muscular animation as those on the other side of the face.


Scans of a smiling face: a) pre-operation, b) post-operation, c) control face.

 

Time: an essential fourth dimension in analysis

This technology is particularly well-suited to studying facial paralysis, as it takes time into account, and therefore the face’s dynamics. Dynamic analysis provides additional information. “When we look at a photo, it is sometimes impossible to detect facial paralysis. The face moves in three dimensions, and the paralysis is revealed with movement”, explains Mohamed Daoudi.

The researcher uses non-invasive technology to model the dynamics: a structured-light scanner. How does it work? A grid of light stripes is projected onto the face. This yields a 3D model of the face, depicted by a cloud of around 20,000 points. Next, a sequence of images of the face making facial expressions is recorded at 15 images per second. The frames are then studied using an algorithm which calculates the deformation observed at each point. The two sides of the face are then superimposed for comparison.
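The per-point analysis described above can be sketched in a few lines. The following Python sketch is purely illustrative, not the team’s actual algorithm: the function names, the brute-force nearest-neighbour matching and the mirror plane at x = 0 are all assumptions. It computes each point’s displacement between two registered frames of the point cloud, then compares the deformation on the two sides of the face.

```python
import numpy as np

def deformation(frame_a, frame_b):
    """Per-point displacement magnitude between two registered
    point clouds of the same face (N x 3 arrays)."""
    return np.linalg.norm(frame_b - frame_a, axis=1)

def asymmetry_score(points, deform, axis=0):
    """Compare deformation on the two sides of the face by
    mirroring the right side across an assumed sagittal plane
    (x = 0) and matching each left-side point to its nearest
    mirrored counterpart. 0 means perfectly symmetric motion."""
    left = points[:, axis] < 0
    right = ~left
    mirrored = points[right].copy()
    mirrored[:, axis] *= -1
    right_deform = deform[right]
    diffs = []
    for p, d in zip(points[left], deform[left]):
        # brute-force nearest neighbour, for clarity only
        j = np.argmin(np.linalg.norm(mirrored - p, axis=1))
        diffs.append(abs(d - right_deform[j]))
    return float(np.mean(diffs))
```

With a uniform displacement of the whole cloud, the score is zero; boosting the deformation on one side only makes it grow, which is the kind of objective asymmetry measure the clinicians lack today.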

 


Series of facial expressions made during the scan.

 

Making 4D technology more readily available

To date, this 4D imaging technique has been tested on a small number of patients between 16 and 70 years old. They have all tolerated it well. Doctors have also been satisfied with the results. They are now looking at having the technology statistically validated, in order to develop it on a larger scale. However, the equipment required is expensive. It also requires substantial human resources to acquire the images and carry out the resulting analyses.

For Mohamed Daoudi, the project’s future lies in simplifying the technology with low-cost 3D capture systems, but other perspectives could also prove interesting. “Only one medical service in the Hauts-de-France region offers this approach, and many people come from afar to use it. In the future, we could imagine remote treatment, where all you would need is a computer and a tool like the Kinect. Another interesting market would be smartphones. Depth cameras which provide 3D images are beginning to appear on these devices, as well as tablets. Although the image quality is not yet optimal, I am sure it will improve quickly. This type of technology would be a good way of making the technology we developed more accessible”.

 

[1] Mohamed Daoudi is head of the 3D SAM team at the CRIStAL laboratory (UMR 9189). CRIStAL (Research center in Computer Science, Signal and Automatic Control of Lille) is a laboratory (UMR 9189) of the National Center for Scientific Research, University Lille 1 and Centrale Lille in partnership with University Lille 3, Inria and Institut Mines-Télécom (IMT).

[2] This project was supported by Fondation de l’avenir

 

 

 


EUROP: Testing an Operator Network in a Single Room

The next article in our series on the technology platforms of the Télécom & Société numérique Carnot institute features EUROP (Exchanges and Usages for Operator Networks) at Télécom Saint-Étienne. This platform offers physical testing of network configurations, to meet service providers’ needs. We discussed the platform with its director, Jacques Fayolle, and assignment manager Maxime Joncoux.

 

What is EUROP?

Jacques Fayolle: The EUROP platform was designed through a dual partnership between the Conseil Départemental de la Loire and LOTIM, a local telecommunications company and subsidiary of the Axione group. EUROP brings together researchers and engineers from Télécom Saint-Étienne, specialized in networks and telecommunications and in computer science. We are seeing an increasing convergence between infrastructure and the services that use this infrastructure. This is why these two skillsets are complementary.

The goal of the platform is to simulate an operator network in a single room. To do so, we reconstructed a full network, from the production of services to consumption by a client company or an individual. This enabled us to depict every step in the distribution chain of a network, up to the home.

 

What technology is available on the platform?

JF: We are currently using wired technologies, which make up the operator part of a fiber network. We are particularly interested in being able to compare the usage of a service according to the protocols used as the signal is transferred from the server to the final customer. For instance, we can study what happens in a housing estate when an ADSL connection is replaced by FTTH fiber optics (Fiber to the Home).

The platform’s technology evolves, but the platform as a whole never changes. All we do is add new possibilities, because what we want to do is compare technologies with each other. A telecommunications system has a lifecycle of 5 to 10 years. At the beginning, we mostly used copper technology, then we added point-to-point fiber, then point-to-multipoint. This means that we now have several dozen different technologies on the platform. This roughly corresponds to all the technology currently used by telecommunications operators.

Maxime Joncoux: And they are all operational. The goal is to test the technical configurations in order to understand how a particular type of technology works, according to the physical layout of the network we put in place.

 

How can a network be represented in one room?

MJ: The network is very big, but in fact it fits into a small space. If we take the example of Saint-Étienne, this network fits into a large building, but it covers all communications in the city. It represents around 100,000 copper cables, which we have scaled down: instead of having 30 connections, we only have one or two. As for the 80 kilometers of fiber in this network, they are simply wound around a coil.

JF: We also have distance simulators, objects that we can configure according to the distance we want to represent. Thanks to this technology, we can reproduce a real high-speed broadband or ADSL network. This enables us to look at, for example, how a service will be consumed depending on whether we have access to a high-speed broadband network in the center of Paris, or a slower connection in an isolated area in the countryside. EUROP allows us to physically test these networks, rather than using IT models.

It is not a simulation, but a real laboratory reproduction. We can set up scenarios to analyze and compare a situation with other configurations. We can therefore directly assess the potential impact of a change in technology across the chain of a service proposed by an operator.
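To see why loop length matters so much for the quality of a service, here is a toy back-of-the-envelope model in Python. It is emphatically not how EUROP works (the platform tests real hardware, not models), and every number is an illustrative assumption: copper loss is taken as a flat 15 dB per kilometre over a 2.2 MHz band, and the achievable rate is estimated with the Shannon capacity formula.

```python
import math

def toy_dsl_capacity(length_km, bandwidth_hz=2.2e6,
                     snr0_db=55.0, loss_db_per_km=15.0):
    """Toy Shannon-capacity estimate of a copper line, in bit/s.

    All parameters are made-up illustrative values: attenuation
    is modelled as a flat loss growing linearly with loop length,
    subtracted from the signal-to-noise ratio at the exchange.
    """
    snr_db = snr0_db - loss_db_per_km * length_km
    snr = 10 ** (snr_db / 10)          # dB -> linear ratio
    return bandwidth_hz * math.log2(1 + snr)
```

With these made-up figures, a 1 km loop supports roughly 29 Mb/s while a 3 km loop drops below 8 Mb/s; this city-versus-countryside contrast is the kind of effect the platform reproduces physically with its distance simulators.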

 

Who is the platform for?

JF: We are directly targeting companies that want to innovate with the platform, either by validating service configurations or by assessing the evolution of a particular piece of equipment in order to achieve better-quality service or faster speed. The platform is also used directly by the school as a learning and research tool. Finally, it allows us to raise awareness among local officials in rural areas about how increasing bandwidth can be a way of improving their local economy.

MJ: For local officials, we aim to provide a practical guide on standardized fiber deployment. The goal is not for Paris and Lyon to have fiber within five years while the rest of France still uses ADSL.

 


EUROP platform. Credits: Télécom Saint-Étienne

 

Could you provide some examples of partnerships?

JF: We carried out a study for Adista, a local telecommunications operator. They presented the network load they needed to bear for an event of national stature. Our role was to determine the necessary configuration to meet their needs.

We also have a partnership with IFOTEC, an SME creating innovative networks near Grenoble. We worked together to provide high-speed broadband access in difficult geographical areas, that is, where the distance to the network node is greater than it is in cities. The SME has created DSL offset techniques (part of the connection uses copper, but there is fiber at the end) which provide at least 20 Mb/s at 80 kilometers from the network node. These are the types of industrial companies we aim to innovate with, companies looking for innovative protocols or hardware.

 

What does the Carnot accreditation bring you?

JF: The Carnot label gives us visibility. SMEs are always a little hesitant in collaborating with academics. This label brings us credibility. In addition, the associated quality charter gives our contracts more substance.

 

What is the future of the platform?

JF: Our goal is to shift towards OpenStack[1] technology, which is used in large data centers. The aim is to launch into Big Data and Cloud Computing. Many companies are wondering about how to operate their services in cloud mode. We are also looking into setting up configuration systems that are adapted to the Internet of Things. This technology requires an efficient network. The EUROP platform enables us to validate the necessary configurations.

 

[1] A platform based on open-source software for cloud computing.

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]

Tunisian revolution

Saving the digital remains of the Tunisian revolution

On March 11, 2017, the National Archives of Tunisia received a collection of documentary resources on the Tunisian revolution, which took place from December 17, 2010 to January 14, 2011. Launched by Jean-Marc Salmon, a sociologist and associate researcher at Télécom École de Management, the collection was assembled by academics and associations and ensures the protection of numerous digital resources posted on Facebook. These resources are now recognized as a driving force in the uprising. Beyond serving as a means for remembrance and historiography, this initiative is representative of the challenges digital technology poses in archiving the history of contemporary societies.

 

[dropcap]A[/dropcap]t approximately 11 am on December 17, 2010 in the city of Sidi Bouzid, in the heart of Tunisia, the young fruit vendor Mohammed Bouazizi had his goods confiscated by a police officer. For him, it was yet another example of abuse by the authoritarian regime of Zine el-Abidine Ben Ali, who had ruled the country for over 23 years. Tired of the repression of which he and his fellow Tunisians were victims, he set himself on fire in front of the prefecture of the city later that day. Riots quickly broke out in the streets of Sidi Bouzid, filmed by inhabitants. The revolution was just getting underway, and videos taken with rioters’ smartphones would play a crucial role in its escalation and the dissemination of protests against the regime.

Though the Tunisian government blocked access to the majority of image banks and videos posted online — and proxies* were difficult to implement in order to bypass these restrictions — Facebook was wide open. The social networking site gave protestors an opportunity to alert local and foreign news channels. Facebook groups were organized in order to collect and transmit images. Protestors fought for the event to be represented in the media by providing images taken with their smartphones to television channels France 24 and Al Jazeera. Their desired goal was clearly to have a mass effect: Al Jazeera is watched by 10 to 15% of Tunisians.

“The Ben Ali government was forced to abandon its usual black-out tactic, since it was impossible to implement given the impact of Al Jazeera and France 24,” explains Jean-Marc Salmon, an associate researcher at Télécom École de Management and a member of IMT’s LASCO IdeaLab (see the end of the article). For the sociologist, who specializes in this subject, “There was a connection between what was happening in the streets and online: the fight for representation through Facebook was the prolongation of the desire for recognition of the event in the streets.” It was this interconnection that prompted research, as early as 2011, on the role of the internet in the Tunisian revolution, the first of its kind to use social media as an instrument of political power.

Television channels had to adapt to these new video resources which they were no longer the first to broadcast, since they had previously been posted on the social network. As of the second day of rioting, a new interview format emerged: on the set, the reporter conducted a remote interview with a notable figure (a professor, lawyer, doctor, etc.) on site in Sidi Bouzid or surrounding cities where protests were gaining ground. A photograph of the interviewee’s face was displayed on half the screen while the other half aired images from smartphones which had been retrieved from Facebook. The interviewee provided live comments to explain what was being shown.

 

Tunisian revolution, LASCO IdeaLab, Télécom École de Management

On December 19, 2010 “Moisson Maghrébine” — Al Jazeera’s 9 pm newscast — conducted a live interview with Ali Bouazizi, a local director of the opposition party. At the same time, images of the first uprisings which had taken place the two previous nights were aired, such as this one showing the burning of a car belonging to RCD, the party of president Ben Ali. The above image is a screenshot from the program.

 

Popularized by media covering the Tunisian revolution, this format has now become the standard for reporting on many events (natural disasters, terrorist attacks, etc.) for which journalists have no footage of their own. For Jean-Marc Salmon, it was the “extremely modern aspect of opposition party members” that made it possible to create this new relationship between social networking sites and mass media. “What the people of Sidi Bouzid understood was that there is a digital continuum: people filmed what was happening with their telephones and immediately thought, ‘we have to put these videos on Al Jazeera.’”

 

Protecting the videos in order to preserve their legacy

Given the central role they played during the 29 days of the revolution, these amateur videos hold significant value for current and future historians and sociologists. But as a few years have passed, they are no longer consulted as much as they were during the events of 2010, and the people who posted them no longer see the point of leaving them online, so they are disappearing. “In 2015, when I was in Tunisia to carry out my research on the revolution, I noticed that I could no longer find certain images or videos that I had previously been able to access,” explains the sociologist. “For instance, in an article on the France 24 website, the text was still there but the associated video was no longer accessible, since the YouTube account used to post it had been deleted.”

The research and archiving work was launched by Jean-Marc Salmon and carried out by the University of La Manouba under the supervision of the National Archives of Tunisia, with assistance from the Euro-Mediterranean Foundation of Support for Human Rights Defenders. The teams participating in this collaboration spent one year travelling throughout Tunisia in order to find the administrators of Facebook pages, amateur filmmakers, and members of the opposition party who appeared in the videos. The researchers were able to gather over a thousand videos and dozens of testimonies. This collection of documentary resources was handed over to the National Archives of Tunisia on March 17 of this year.

The process exemplifies new questions facing historians of revolutions. Up to now, their primary sources have usually consisted of leaflets published by activists, police reports or newspapers with clearly identified authors and cited sources. Information is cross-checked or analyzed in the context of its author’s viewpoint in order to express uncertainties. With videos, it is more difficult to classify information. What is its aim? In the case of the Tunisian revolution, is it an opponent of the regime trying to convey a political message, or simply a bystander filming the scene? Is the video even authentic?

To answer these questions, historians and archivists must trace back the channels through which the videos were originally broadcast in order to find the initial rushes, because each edited version expresses a choice. A video taken by a member of the opposition party in the street takes on a different value when it is picked up by a television channel that has extracted it from Facebook and edited it for the news. “It is essential that we find the original document in order to understand the battle of representation, starting with the implicit message of those who filmed the scene,” says Jean-Marc Salmon.

However, it is sometimes difficult to trace videos back to the primary resources. The sociologist admits that he has encountered anonymous users who posted videos on YouTube under pseudonyms or used false names to send videos to administrators of pages. In this case, he has had to make do with the earliest edited versions of videos.

This collection of documentary resources should, however, facilitate further efforts to find and question some of the people who made the videos: “The archives will be open to universities so that historians may consult them,” says Jean-Marc Salmon. The research community working on this subject should gradually increase our knowledge about the recovered documents.

Another cause for optimism is the fact that the individuals questioned by researchers while establishing the collection were quite enthusiastic about the idea of contributing to furthering knowledge about the Tunisian revolution. “People are interested in archiving because they feel that they have taken part in something historic, and they don’t want it to be forgotten,” notes Jean-Marc Salmon.

The future use of this collection will undoubtedly be scrutinized by researchers, as it represents a first in the archiving of natively digital documents. The Télécom École de Management researcher also views it as an experimental laboratory: “Since practically no written documents were produced during the twenty-nine days of the Tunisian Revolution, these archives bring us face-to-face with the reality of our society in 30 or 40 years, when the only remnants of our history will be digital.”

 

*Proxies are intermediary servers used to access a network that is normally inaccessible.

[divider style=”normal” top=”20″ bottom=”20″]

LASCO, an idea laboratory for examining how meaning emerges in the digital era

Jean-Marc Salmon carried out his work on the Tunisian revolution with IMT’s social sciences innovation laboratory (LASCO IdeaLab), run by Pierre-Antoine Chardel, a researcher at Télécom École de Management. This laboratory serves as an original platform for collaborations between the social sciences research community, and the sectors of digital technology and industrial innovation. Its primary scientific mission is to analyze the conditions under which meaning emerges at a time when subjectivities, interpersonal relationships, organizations and political spaces are subject to significant shifts, in particular with the expansion of digital technology and with the globalization of certain economic models. LASCO brings together researchers from various institutions such as the universities of Paris Diderot, Paris Descartes, Picardie Jules Verne, the Sorbonne, HEC, ENS Lyon and foreign universities including Laval and Vancouver (Canada), as well as Villanova (USA).

[divider style=”normal” top=”20″ bottom=”20″]

 

Patrice Pajusco, IMT Atlantique, 5G

5G Will Also Consume Less Energy

5G is often presented as a faster technology, since it will need to support broadband mobile usage as well as communication between connected objects. But it will also have to use less energy in order to find its place in the current context of environmental transition. This is the goal of the ANR Trimaran and Spatial Modulation projects, led by Orange in association with IMT Atlantique and other academic actors.

 

Although it will not become a reality until around 2020, 5G is already being very actively researched. Scientists and economic stakeholders are buzzing about this fifth-generation mobile telephony technology. One of the researchers’ goals is to reduce the energy consumption of 5G communication. The stakes are high, as the development of this technology aims to be consistent with the general context of energy and environmental transition. In 2015, the Next Generation Mobile Networks (NGMN) alliance estimated that “in the next ten years, 5G will have to support a one thousand-fold increase in data traffic, with lower energy consumption than today’s networks.” This is a huge challenge, as it means increasing the energy efficiency of mobile networks by a factor of 2,000.

To achieve this, stakeholders are counting on the principle of focusing communication. The idea is simple: instead of transmitting a wave from an antenna in all directions, as is currently the case, it is more economical to send it towards the receiver. Focusing waves is not an especially new field of research. However, it was only recently applied to mobile communications. The ANR Trimaran project, coordinated by Orange and involving several academic and industrial partners[1] including IMT Atlantique, explored the solution between 2011 and 2014. Last November, Trimaran won the “Economic Impact” award at the ANR Digital Technology Meetings.

Also read on I’MTech: Research and Economic Impacts: “Intelligent Together”

 

In order to successfully focus a wave between the antenna and a mobile device, the group’s researchers have concentrated on a time-reversal technique: “The idea is to use a mathematical property: a solution to the wave equation remains a solution whether time runs forward or backward,” says Patrice Pajusco, a telecommunications researcher at IMT Atlantique. He explains with an illustration: “Take a drop of water. If you drop it onto a lake, it will create a ripple that spreads to the edges. If you reproduce that ripple at the edge of the lake, you can create a wave that converges towards the point where the drop fell. The same phenomenon happens again on the lake’s surface, but with time reversed.”

When applied to 5G, the principle of time reversal uses an initial transmission from the mobile terminal to the antenna. The terminal sends an electromagnetic wave which spreads through the air and over hills, shaped by the terrain, before arriving at the antenna along with its echoes, a specific profile of its journey. The antenna learns this profile and can retransmit it in the opposite direction to reach the user’s terminal. The team at IMT Atlantique is especially involved in modeling and characterizing the communication channel that is created. “The physical properties of the propagation channel vary according to the echoes, which come from several directions and are more or less spread out. They must be well characterized for the design of the communication system to be effective,” Patrice Pajusco points out.
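The refocusing effect described above can be sketched numerically: retransmitting the time-reversed channel profile through the same multipath channel amounts to computing the channel’s autocorrelation, which concentrates the energy in a single sharp peak. This is a minimal illustration; the delays and gains below are invented, not measured data from the project.

```python
import numpy as np

# Hypothetical multipath channel impulse response: four echoes at
# illustrative delays (in samples) with decreasing gains.
h = np.zeros(64)
for delay, gain in zip([3, 11, 20, 37], [1.0, 0.6, 0.4, 0.25]):
    h[delay] = gain

# Time-reversal precoding: the antenna retransmits the time-reversed
# (normalized) channel profile learned from the uplink transmission.
precode = h[::-1] / np.linalg.norm(h)

# Passing through the same channel again yields the channel's
# autocorrelation: the energy refocuses into one sharp peak at the
# terminal instead of spreading over all the echoes.
received = np.convolve(precode, h)

peak = np.abs(received).max()
sidelobes = np.abs(received).copy()
sidelobes[np.abs(received).argmax()] = 0.0
print(f"focused peak: {peak:.2f}, strongest sidelobe: {sidelobes.max():.2f}")
```

With these values the peak clearly dominates the residual echoes, which is the matched-filter property that makes the focusing energy-efficient.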

 

Focusing also depends on antennas

Improving this technique also involves work on the antennas themselves. Focusing a wave on an isolated antenna is not difficult, but focusing on a specific antenna when another is nearby is problematic: the antennas must be spaced apart for the technique to work. To relax this constraint, one of the project’s partners is working on new types of micro-structured antennas that make it possible to focus a signal over a shorter distance, thereby limiting the spacing requirement.

The challenge of focusing is so important that since January 2016, most of the Trimaran partners have been working on a new ANR project called Spatial Modulation. “The idea of this new project is to continue to save energy while transmitting additional information to the antennas,” Patrice Pajusco explains. Insofar as it is possible to focus on a specific antenna, the choice of that antenna itself carries information. “We will therefore be able to transmit several bits of information simply by changing the focus of the antenna,” the researcher explains.
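The idea that the antenna choice itself carries bits can be sketched as a toy codec: with 2^k focusing targets, k bits select the target and the remaining bit is sent as an ordinary symbol. The framing and names below are invented for illustration and are not the project’s actual scheme.

```python
import math

# Illustrative spatial-modulation codec: in each group of K+1 bits,
# the first K bits select which of 2**K antennas the beam is focused
# on, and the last bit is carried as a BPSK symbol (+1 / -1).
N_ANTENNAS = 4
K = int(math.log2(N_ANTENNAS))

def sm_encode(bits):
    """Map groups of K+1 bits to (antenna index, BPSK symbol) pairs."""
    pairs = []
    for i in range(0, len(bits), K + 1):
        group = bits[i:i + K + 1]
        antenna = int("".join(map(str, group[:K])), 2)
        pairs.append((antenna, 1 if group[K] else -1))
    return pairs

def sm_decode(pairs):
    """Recover the bit stream from detected (antenna, symbol) pairs."""
    bits = []
    for antenna, symbol in pairs:
        bits += [int(b) for b in format(antenna, f"0{K}b")]
        bits.append(1 if symbol > 0 else 0)
    return bits

message = [1, 0, 1, 0, 1, 0]  # length must be a multiple of K+1
assert sm_decode(sm_encode(message)) == message
```

With four antennas, each transmitted symbol thus carries two extra bits “for free” in the antenna index, which is where the energy saving comes from.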

This new project brings in an additional partner, CentraleSupélec, an “expert in the field of spatial modulation,” says Patrice Pajusco. If the results are conclusive, the approach could eventually compete with MIMO antennas, which rely on many transmitters and receivers to convey a signal. “By using spatial modulation and focusing, we could have a solution that would be much less complex than the conventional MIMO system,” the researcher hopes. Focusing clearly has the capacity to bring great value to 5G. The fact that it can be applied to moving vehicles has already been judged one of the most promising techniques by the H2020 METIS project, a reference in the European public-private partnership for 5G.

 

[1] The partners are Orange, Thalès, ATOS, Institut Langevin, CNRS, ESPCI Paris, INSA Rennes, IETR, and IMT Atlantique (formerly Télécom Bretagne and Mines Nantes).

AutoMat, Télécom ParisTech

With AutoMat, Europe hopes to adopt a marketplace for data from connected vehicles

Data collected by communicating vehicles represents a goldmine for providers of new services. But in order to buy and sell this data, economic players need a dedicated marketplace. Since 2015, the AutoMat H2020 project has been developing such an exchange platform. To achieve this mission by 2018, the end date for the project, a viable business model will have to be defined. Researchers at Télécom ParisTech, a partner in the project, are currently tackling this task.

 

Four wheels, an engine, probably a battery, and most of all, an enormous quantity of data generated and transmitted every second. There is little doubt that in the future, which is closer than we may think, cars will be intelligent and communicating. And beyond recording driving parameters to facilitate maintenance, or transmitting information to improve road safety, the data acquired by our vehicles will represent a market opportunity for third-party services.

But in order to create these new services, a secure platform for selling and buying data must still be developed, with sufficient volume to be attractive. This is the objective that the AutoMat project — launched in April 2015 and funded by the H2020 European research programme — is trying to achieve by developing a marketplace prototype.

The list of project members includes two service providers: MeteoGroup and Here, companies which specialize in weather forecasts and mapping respectively. For these two stakeholders, data from cars will only be valuable if it comes from many different manufacturers. For MeteoGroup, the purpose of using vehicles as weather sensors is to have access to real-time information about temperatures or weather conditions nearly anywhere in Europe. But a single brand would not have a sufficient number of cars to be able to provide this much information: therefore data from several manufacturers must be aggregated. This is no easy task since, for historical reasons, each one has its own unique format for storing data.

 

AutoMat, Télécom ParisTech, communicating vehicles, connected cars

Data from communicating cars could, for example, optimize meteorological measurements by using vehicles as sensors.

 

To simplify this task without giving anyone an advantage, the Technical University of Dortmund is participating in the project by defining a new model with a standard data format agreed upon by all parties. This requires automobile manufacturers to change their processes to integrate a data-formatting phase, but the cost of this adaptation is marginal compared with the potential value of their data combined with that of their competitors. The Renault and Volkswagen groups, as well as the Fiat research centre, are partners in the AutoMat project, seeking to identify how to tap into the underlying economic potential.
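The harmonization step can be pictured as a thin mapping layer that converts each manufacturer’s proprietary record into the shared format. The field names, units and manufacturer formats below are invented for illustration and are not the project’s actual data model.

```python
# Hypothetical manufacturer-specific records mapped into one shared
# schema (all field names and units are invented for illustration).
def to_common(record, maker):
    if maker == "A":  # maker A: nested position, temperature in deg C
        return {
            "timestamp": record["ts"],
            "lat": record["pos"]["lat"],
            "lon": record["pos"]["lon"],
            "air_temp_c": record["temp_c"],
        }
    if maker == "B":  # maker B: flat fields, temperature in deg F
        return {
            "timestamp": record["time"],
            "lat": record["latitude"],
            "lon": record["longitude"],
            "air_temp_c": round((record["temp_f"] - 32) * 5 / 9, 2),
        }
    raise ValueError(f"unknown maker: {maker}")

a = {"ts": 1500000000, "pos": {"lat": 48.7, "lon": 2.2}, "temp_c": 12.5}
b = {"time": 1500000060, "latitude": 48.8, "longitude": 2.3, "temp_f": 54.5}

# Once harmonized, records from both makers can be aggregated freely,
# e.g. by a weather service using vehicles as distributed sensors.
fleet = [to_common(a, "A"), to_common(b, "B")]
```

The point of the shared schema is exactly this: a buyer such as a weather service never needs to know which manufacturer produced a given record.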

 

What sort of business model?

In reality, it is less difficult to convince manufacturers than it is to find a business model for the marketplace prototype. This is why Télécom ParisTech’s Economics and Social Sciences Department (SES) is contributing to the AutoMat project. Giulia Marcocchia, a PhD student in Management Sciences who is working on the project, describes different aspects which must be taken into consideration:

“We are currently carrying out experiments on user cases, but the required business model is unique so it takes time to define. Up until now, manufacturers have used data transmitted by cars to optimize maintenance or reduce life cycle costs. In other sectors, there are marketplaces for selling data by packets or on a subscription basis to users clearly identified as either intermediary companies or final consumers.
But in the case of a marketplace for aggregated data from cars, the users are not as clearly defined: economic players interested in this data will only be discovered upon the definition of the platform and the ecosystem taking shape around connected cars.”

For researchers in the SES department, this is the whole challenge: studying how a new market is created. To do so, they have adopted an effectual approach. Valérie Fernandez, an innovation management researcher and director of the department, describes this method as one in which “the business model tool is not used to analyze a market, but rather as a tool to foster dialogue between stakeholders in different sectors of activity, with the aim of creating a market which does not currently exist.”

The approach focuses on users: what do they expect from the product and how will they use it? This concerns automobile manufacturers who supply the platform with the data they collect as much as service providers who buy this data. “We have a genuine anthropological perspective for studying these users because they are complex and multifaceted,” says Valérie Fernandez. “Manufacturers become producers of data but also potential users, which is a new role for them in a two-sided market logic.”

The same is true for drivers, who are potential final users of the new services generated and may also have ownership rights for data acquired by vehicles they drive. From a legal standpoint nothing has been determined yet and the issue is currently being debated at the European level. But regardless of the outcome, “The marketplace established by AutoMat will incorporate questions about drivers’ ownership of data,” assures Giulia Marcocchia.

The project runs until March 2018. In its final year, different use cases should make it possible to define a business model that responds to questions relating to uses by different users. Should it fulfill its objective, AutoMat will represent a useful tool for developing intelligent vehicles in Europe.

[divider style=”normal” top=”20″ bottom=”5″] 

Ensuring a secure, independent marketplace

In addition to the partners mentioned in the article above, the AutoMat project brings together stakeholders responsible for securing the marketplace and handling its governance. Atos is in charge of the platform, from its design to data analysis in order to help identify the market’s potential. Two partners, ERPC and Trialog, are also involved in key aspects of developing the marketplace: cyber-security and confidentiality. Software systems engineering support for the various parties involved is ensured by ATB, a non-profit research organization.

[divider style=”normal” top=”20″ bottom=”20″] 

Digital social innovations

What are digital social innovations?

Müge Ozman, Télécom École de Management – Institut Mines-Télécom and Cédric Gossart, Institut Mines-Télécom (IMT)

One of the problems we encounter in our research on digital social innovation (DSI) relates to defining it. Is it a catch-all phrase? A combination of three trendy words? Digital social innovations are often associated with positive meanings, like openness, collaboration or inclusion, as opposed to more commercially oriented innovations. In trying to define such a contested concept, we should strive to disentangle it from its positive aura.

The following figure is helpful for a start. Digital social innovation lies at the intersection of three spheres: innovation, social and environmental problems, and digital technologies.

Authors’ own.

The first sphere is innovation. It refers to the development and diffusion of a (technological, social…) novelty that is not used yet in the market or sector or country where it is being introduced. The second sphere concerns the solutions put in place to address social and environmental problems, for example through public policies, research projects, new practices, civil society actions, business activities, or by decentralising the distribution of power and resources through social movements. For example, social inclusion measures facilitate, enable and open up channels for people to participate in social life, regardless of their age, sex, disability, race, ethnicity, origin, religion or socioeconomic status (e.g. the positive discrimination measures that enable minority students to enter universities). Finally, the third sphere relates to digital technologies, which concern hardware and software technologies used to collect, process, and diffuse information.

 

From innovative ideas to diffused practices

Many digital technologies are no longer considered innovations in 2017, at least in Europe, where they have become mainstream. For example, according to Eurostat, only 15% of the EU population does not have access to the Internet. On the other hand, some digital technologies are novel (area C in the figure), such as the service Victor & Charles, which enables hotel managers to access their clients’ social-media profiles in order to best meet their needs.

As regards the yellow sphere, many of its solutions to social and environmental problems are neither digital nor innovative; they relate to more traditional ways of fighting social exclusion or pollution. To solve housing problems in France, for example, the HLM system (habitations à loyer modéré) was introduced after World War II to provide subsidised housing to low-income households. When introduced it was an innovative solution, but it has since become institutionalised.

At the intersection of solutions and digital technologies lies area B, which does not overlap the blue innovation sphere. There we find digital solutions to social and environmental problems which are not innovative, such as the monthly electronic newsletter Atouts from OPH (Fédération nationale des Offices Publics de l’Habitat), a federation of institutions in charge of the HLM system, which uses the newsletter to spread best practices among HLM agencies in France. We also find innovations that aim to solve social and environmental problems but are not digital (area A). For example, the French start-up Baluchon builds affordable wooden DIY micro-houses that enable low-income people to live independently. As for area C, it covers innovative digital technologies which do not aim to solve a social or environmental problem, such as a 3D tablet.

 

Using digital technologies to address real-world problems

In the area where the three spheres intersect lie digital social innovations. DSI can thus be defined as novelties that use, develop, or rely on digital technologies to address social and/or environmental problems. They include a broad group of digital platforms which facilitate peer-to-peer interactions and the mobilisation of people in order to solve social and/or environmental problems. Neighbourhood information systems, civic engagement platforms, volunteered geographic information systems, crowdfunding platforms for sustainability or social issues, are some of the cases of the DSI area.

For example, the Ushahidi application, designed to map acts of violence following Kenya’s 2007 elections, aggregates and diffuses information collected by citizens about urban violence, enabling citizens and local authorities to take precautionary measures. The I Wheel Share application facilitates the collection and diffusion of information about urban experiences (positive and negative) that may be useful to disabled people. Two other examples involve digital hardware other than a smartphone. First, KoomBook, created by the NGO Libraries Without Borders, is an electronic box that uses a Wi-Fi hotspot to provide key educational resources to people deprived of Internet access. Second, the portable sensor developed by Plume Labs, which can be carried as a key holder, measures local air pollution in real time and shares the collected data with the community.

 

Theoretical clarity, practical imprecision

But as always happens with categorisations, boundaries are not as clear-cut as a figure may suggest. In our case, there is a grey area surrounding digital social innovations. For example, if a technology makes it easier for many people to access certain goods or services (short-term recreational housing, individual urban mobility…), does it solve a social problem? The answer is clouded by the positive meaning attached to digital innovations, which can conflict with their possible negative social and environmental impacts (e.g., they might generate unfair competition or strong rebound effects).

Take the case of Airbnb: according to our definition, it could be considered a digital social innovation. It relies on a digital platform through which a traveller can find cheaper accommodations while possibly discovering local people and lifestyles. Besides avoiding the anonymity of hotels, tailored services are now offered to clients of the platform. Do you want to take a koto course while having your matcha tea in a Japanese culture house? This Airbnb “experience” will cost you 63 euros. Airbnb enables (some) people to earn extra income.

www.airbnb.com

 

But the system can also cause the loss of established capabilities and knowledge, and exclude locals who may not have the necessary digital literacy (nor lodgings located in central urban areas). While Airbnb customers might enjoy the wide range of offers available on the platform, as well as local cultural highlights sold in a two-hour package, an unknown and ignored local culture remains on the poor side of the digital (and economic) divide.

 

Measuring the social impact

Without robust indicators of the social impact of DSI, it is difficult to clarify this grey area and solve the problem of definition. But constructing ex-ante and ex-post indicators of social impact is not easy from a scientific point of view. Moreover, it is difficult to obtain user data, as firms intentionally keep it proprietary, impeding research. In addition, innovators and other ecosystem members can engage in “share-washing”, concealing commercial activities behind a smokescreen of socially beneficial ones. An important step towards overcoming these difficulties is to foster an open debate about how profits from DSI are distributed, about who is excluded from using DSI and why, and about the contextual factors that ultimately shape its social impacts.

As troublesome as definition issues may be, researchers should not reject the term altogether for being too vague, since DSI can have a strong transformative power in terms of empowerment and sustainability. But neither should they impose a restrictive categorisation of DSI in which Uber and Airbnb have no place. Involving a broad variety of actors (users and non-users, for-profit and not-for-profit…) in defining this public construct would do justice to the positive reputation of DSI.

 

Müge Ozman, Professor of Management, Télécom École de Management – Institut Mines-Télécom, and Cédric Gossart, Associate Professor, Institut Mines-Télécom (IMT)

The original version of this article was published on The Conversation.

Enhanced humans

Technologically enhanced humans: a look behind the myth

What exactly do we mean by an “enhanced” human? When this possibility is brought up, what is generally being referred to is the addition of human and machine-based performances (expanding on the figure of the cyborg popularized by science fiction). But enhanced in relation to what? According to which reference values and criteria? How, for example, can happiness be measured? A good life? Sensations, like smells or touch which connect us to the world? How happy we feel when we are working? All these dimensions that make life worth living. We must be careful here not to give in to the magic of figures. A plus can hide a minus; something gained may conceal something lost. What is gained or lost, however, is difficult to identify as it is neither quantifiable nor measurable.

Pilots of military drones, for example, are enhanced in that they use remote sensors, optronics, and infrared cameras, enabling them to observe much more than could ever be seen with the human eye alone. But what about the prestige of harnessing the power of a machine, the sensations and thrill of flying, the courage and sense of pride gained by overcoming one’s fear and mastering it through long, tedious labor?

Another example taken from a different context is that of telemedicine and remote diagnosis. Seen from one angle, it creates the possibility of benefitting from the opinion of an expert specialist right from your own home, wherever it is located. For isolated individuals who are losing independence and mobility, or for regions that have become medical deserts, this represents a real advantage and undeniable progress. However, field studies have shown that some people worry that it may be a new way of being shut off from the world and confined to one’s home. Going to see a specialist, even one who is located far away, forces individuals to leave their everyday environments, change their routines and meet new people. It therefore represents an opportunity for new experiences and, to a certain extent, leads to greater personal enrichment (another possible definition of enhancement).

 

Enhanced humans

Telemedicine consultation. Intel Free Press/Wikimedia, CC BY-SA

How technology is transforming us

Of course, every new form of progress comes with its share of abandonment of former ways of doing and being, habits and habitus. What is most important is that the sum of all gains outweighs that of all losses and that new feelings replace old ones. Except this economic and market-based approach places qualitatively disparate realities on the same level: that of usefulness. And yet, there are things which are completely useless (devoting time to listening, wasting time, wandering about) which seem to be essential in terms of social relations, life experiences, learning, imagination, creation, etc. Therefore, the issue is not knowing whether or not machines will eventually replace humans, but rather understanding the values we place in machines, values which will, in turn, transform us: speed, predictability, regularity, strength, etc.

The repetitive use of geolocation, for example, is making us dependent on this technology. More worryingly, our increasing reliance on this technology is insidiously changing our everyday interactions with others in public or shared places. Are we not becoming less tolerant of the imperfections of human beings, of the inherent uncertainty of human relationships, and also more impatient in some ways? One of the risks I see here is that in the most ordinary situations, we will eventually expect human beings to behave with the same regularity, precision, velocity and even the same predictability as machines. Is this shift not already underway, as illustrated by the fact that it has become increasingly difficult for us to talk to someone passing by, to ask a stranger for directions, preferring the precise, rapid solution displayed on the screen of our iPhone to this exchange, which is full of unpredictability and in some ways, risk? These are the questions we must ask ourselves when we talk about “enhanced humans.”

Consequently, we must also pay particular attention to the idea that, as we get used to machines’ binary efficiency and lack of nuance, it will become “natural” for us and, as a result, human weakness will become increasingly intolerable and foreign. The issue, therefore, is not knowing whether machines will overthrow humans, take our place, surpass us or even make us obsolete, but rather understanding under what circumstances (social, political, ethical, economic) human beings start acting like machines and striving to resemble the machines they design. This question of humans acting like machines, implicit in this form of behavior, strikes me as both crucial and pressing.

 

Interacting with machines is more reassuring

It is true that with so-called social or “companion” robots (like Paro, Nao, NurseBot, Bao, Aibo, My Real Baby), in which we hope to see figures capable not only of communicating with us and acting in our everyday familiar environments, but also of demonstrating emotions, learning, empathy, etc., the perspective seems to be reversed. Psychologist and anthropologist Sherry Turkle has studied this shift from thinking of robots as frightening and strange to thinking of them as potential friends. What happened, she wondered, to make us ready to welcome robots into our everyday lives and even want to create emotional attachments with them, when only yesterday they inspired fear or anxiety?

 

Enhanced Humans, Gérard Dubey

Korean robot, 2013. Kiro-M5, Korea Institute of Robot and Convergence

 

After several years studying nursing homes which had chosen to introduce these machines, the author of Alone Together concluded that one of the reasons why people sometimes prefer the company of machines to that of humans is the prior deterioration of relationships they may have experienced in the real world. Hallmarks of these relationships are distrust, fear of being deceived and suspicion. Turkle also cites a certain fatigue from always having to be on guard, as well as boredom: being in others’ company bores us. She deduces that the concept of social robots suggests that our way of facing intimacy may now be reduced to avoiding it altogether. According to her, this deterioration of human relationships represents the foundation and condition for developing social robots, which respond to a need for a stable environment, fixed reference points, certainty and predictability seldom offered by normal relationships in today’s context of widespread deregulation.

It is as if we expect our “controlled and controllable” relationships with machines to make up for the helplessness we sometimes feel when faced with the injustice and cruelty reserved for entire categories of living beings (humans and non-humans, when we think of refugees, the homeless or animals used for industry). This is a solution of withdrawal, a sort of refuge, but one which affects how we see ourselves in the world, or rather outside it, without any real way to act upon it.


 

Gérard Dubey, Sociologist, Télécom École de Management, Institut Mines-Télécom
This article was originally published in French in The Conversation France