human

Production line flexibility: human operators to the rescue!

Changing customer needs have cast a veil of uncertainty over the future of industrial production. To respond to these demands, production systems must be flexible. Although industry is becoming increasingly automated, a good way to provide flexibility is to reintroduce human operators. This observation, which goes against current trends, was presented by Xavier Delorme, an industrial management researcher at Mines Saint-Étienne, at the IMT symposium on “Production Systems of the Future”.

This article is part of our series on “The future of production systems, between customization and sustainable development.”

 

Automation, digitization and robotization are concepts associated with our ideas about the industry of the future. With a history marked by technological and technical advances, industry is counting on autonomous machines that make it possible to produce more, faster. Yet, this sector is now facing another change: customer needs. A new focus on product customization is upending how production systems are organized. The automotive industry is a good example of this new problem. Until now, it has invested in production lines that would be used for ten to twenty years. But today, the industry has zero visibility on the models it will produce over such a period of time. A production system that remains unchanged for so long is no longer acceptable.

In order to meet a wide range of customer demands that impact many steps of their production, companies must have flexible manufacturing systems. “That means setting up a system that can evolve to respond to demands that have not yet been identified – flexibility – so that it can be adjusted by physically reconfiguring the system more or less extensively,” explains Xavier Delorme, a researcher at Mines Saint-Étienne. Flexibility can be provided through digital controls or reprogramming a machine, for example.

But in this increasingly machine-dominated environment, “another good way to provide flexibility is to reintroduce versatile human operators, who have an ability to adapt,” says the researcher. The primary aim of his work is to leverage the strengths of each side while limiting their respective weaknesses. He proposes software solutions to help design production lines and ensure that they run smoothly.

Versatility of human operators

This conclusion is based on field observations, in particular a collaboration with MBtech Group, during which the manufacturer drew attention to this problem. The advanced automation of its production lines was reducing versatility. The solution proposed by researchers: reintroduce human operators. “We realized that some French companies had retained this valuable resource, although they were often behind in terms of automation. There’s a balance to be found between these two elements,” says Xavier Delorme. It appears that the best way to create value, in terms of economic efficiency and flexibility, is to combine robots and humans in a complementary manner.

A system that produces engines manufactures different models but does not need to be modified for each variant. It adapts to the product, switching almost instantaneously from one to another. However, the workload for different stations varies according to the model. This classic situation requires versatility. “A well-trained, versatile human operator reorganizes his work by himself. He repositions himself as needed at a given moment; this degree of autonomy doesn’t exist in current automated systems, which cannot be moved quickly from one part of the production line to another,” says Xavier Delorme.

This flexibility presents a twofold problem for companies. Treating an operator like a machine narrows his range of abilities, which ultimately reduces efficiency. It is therefore in companies’ interest to enhance operators’ versatility through training and by assigning them various tasks in different parts of the production system. But the risk of turnover and the loss of skills associated with short contracts and frequent changes in staff still remain.

The arduous working conditions of multifunctional employees must also not be overlooked. This issue is usually considered too late in the design process, leading to serious health problems and malfunctions in production systems. “That’s why we also focus on workstation ergonomics starting in the design stage,” explains Xavier Delorme. The biggest health risks are primarily physical: fatigue due to a poor position, repetitiveness of tasks etc. The versatility of human operators can reduce these risks, but it can also contribute to them. Indeed, the risks increase if an employee lacks experience and finds it difficult to carry out tasks at different workstations. Once again, the best solution is to find the right balance.

Educating SMEs about the industry of the future

Large corporations are already on their way to the industry of the future, but it’s more difficult for SMEs, says Xavier Delorme. In June 2018, Mines Saint-Étienne researchers launched IT’mFactory, an educational platform developed in partnership with the Union des industries et métiers de la métallurgie (Union of Metallurgy Industries). This demonstration tool makes it possible to connect with SMEs to discuss the challenges and possibilities presented by the industry of the future. It also provides an opportunity to address problems facing SMEs and direct them towards appropriate innovations.

Such interactions are precious at a time when production methods are undergoing considerable changes (cloud manufacturing, additive manufacturing, etc.). The business models involved present researchers with new challenges. And flexibility alone will not meet the needs of customization, that is, producing by the unit and on demand. Servicization, in which a service associated with a product is sold, is also radically changing the ways in which companies have to be organized.

 

Article written by Anaïs Culot, for I’MTech.

 


Restricting algorithms to limit their powers of discrimination

From music suggestions to help with medical diagnoses, population surveillance, university selection and professional recruitment, algorithms are everywhere, and transform our everyday lives. Sometimes, they lead us astray. At fault are the statistical, economic and cognitive biases inherent to the very nature of the current algorithms, which are supplied with massive data that may be incomplete or incorrect. However, there are solutions for reducing and correcting these biases. Stéphan Clémençon and David Bounie, Télécom ParisTech researchers in machine learning and economics, respectively, recently published a report on the current approaches and those which are under exploration.

 

Ethics and equity in algorithms are increasingly important issues for the scientific community. Algorithms are supplied with the data we give them including texts, images, videos and sounds, and they learn from these data through reinforcement. Their decisions are therefore based on subjective criteria: ours, and those of the data supplied. Some biases can thus be learned and accentuated by automated learning. This results in the algorithm deviating from what should be a neutral result, leading to potential discrimination based on origin, gender, age, financial situation, etc. In their report “Algorithms: bias, discrimination and fairness”, a cross-disciplinary team[1] of researchers at Télécom ParisTech and the University of Paris Nanterre investigated these biases. They asked the following basic questions: Why are algorithms likely to be distorted? Can these biases be avoided? If yes, how can we minimize them?

The authors of the report are categorical: algorithms are not neutral. On the one hand, because they are designed by humans. On the other hand, because “these biases partly occur because the learning data lacks representativity,” explains David Bounie, researcher in economics at Télécom ParisTech and co-author of the report. For example, Amazon’s recruitment algorithm was heavily criticized in 2015 for having discriminated against female applicants. At fault was an imbalance in the pre-existing historical data: the people recruited over the previous ten years were primarily men. The algorithm had therefore been trained on a gender-biased learning corpus. As the saying goes, “garbage in, garbage out”. In other words, if the input data is of poor quality, the output will be poor too.

Also read Algorithmic bias, discrimination and fairness

Stéphan Clémençon is a researcher in machine learning at Télécom ParisTech and co-author of the report. For him, “this is one of the growing accusations made against artificial intelligence: the absence of control over the data acquisition process.” For the researchers, one way of introducing equity into algorithms is to counteract these biases. An analogy can be drawn with surveys: “In surveys, we ensure that the data are representative by using a controlled sample based on the known distribution of the general population,” says Stéphan Clémençon.

Using statistics to make up for missing data

From employability to criminality or solvency, learning algorithms have a growing impact on decisions and on human lives. These biases could be overcome by calculating the probability that an individual with certain characteristics is included in the sample. “We essentially need to understand why some groups of people are under-represented in the database,” the researchers explain. Coming back to the example of Amazon, the algorithm favored applications from men because those recruited over the previous ten years were primarily men. This bias could have been avoided by recognizing that the likelihood of finding a woman in the data sample was significantly lower than the proportion of women in the population.
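
To make this reweighting idea concrete, here is a minimal sketch in Python using invented numbers rather than any real recruitment data: each record is weighted by the inverse of its group’s estimated inclusion probability, so that the corrected sample matches the known population distribution.

```python
# Minimal sketch of correcting under-representation by inverse-probability
# weighting. All figures and group labels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical situation: women make up 50% of the population
# but only 20% of the historical sample used for training.
population_share = {"man": 0.50, "woman": 0.50}
sample = rng.choice(["man", "woman"], size=1000, p=[0.80, 0.20])

# Relative probability that a member of each group ended up in the sample.
inclusion_prob = {
    g: np.mean(sample == g) / population_share[g] for g in population_share
}

# Weight each record by the inverse of that probability.
weights = np.array([1.0 / inclusion_prob[g] for g in sample])

for g in population_share:
    mask = sample == g
    print(g,
          "raw share:", round(float(mask.mean()), 2),
          "reweighted share:", round(float(weights[mask].sum() / weights.sum()), 2))
```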

“When this probability is not known, we need to be able to explain why an individual is in the database or not, according to additional characteristics,” adds Stéphan Clémençon. For example, when assessing banking risk, algorithms use data on the people eligible for a loan at a particular bank to determine the borrower’s risk category. These algorithms do not look at applications from people who were refused a loan, who have not needed to borrow money or who obtained a loan from another bank. In particular, young people under 35 are systematically assessed as carrying a higher level of risk than their elders. Identifying these associated criteria would make it possible to correct such biases.

Controlling data also means looking at what researchers call “time drift”. By analyzing data over very short periods of time, an algorithm may not account for certain characteristics of the phenomenon being studied. It may also miss long-term trends. By limiting the duration of the study, it will not pick up on seasonal effects or breaks. However, some data must be analyzed on the fly as they are collected. In this case, when the time scale cannot be extended, it is essential to integrate equations describing potential developments in the phenomena analyzed, to compensate for the lack of data.
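
The effect of too short an observation window can be illustrated with a small synthetic example; the trend, seasonal amplitude and window length below are arbitrary assumptions, not figures taken from the research.

```python
# Synthetic illustration of "time drift": a trend fitted on one month of data
# is dominated by the seasonal component (and here even gets the sign wrong),
# while a full-year fit recovers the underlying drift.
import numpy as np

t = np.arange(365)                                              # one year of daily data
true_trend = 0.01                                               # slow upward drift per day
signal = 10 + true_trend * t + 5 * np.cos(2 * np.pi * t / 365)  # drift + seasonal cycle

slope_30_days = np.polyfit(t[:30], signal[:30], 1)[0]
slope_full_year = np.polyfit(t, signal, 1)[0]

print(f"true daily trend    : {true_trend:+.3f}")
print(f"fit on 30 days      : {slope_30_days:+.3f}  (misled by the seasonal swing)")
print(f"fit on the full year: {slope_full_year:+.3f}")
```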

The difficult issue of equity in algorithms

Beyond these statistical approaches, researchers are also looking at developing algorithmic equity. This means developing algorithms which meet equity criteria with respect to attributes protected under law, such as ethnicity, gender or sexual orientation. As with the statistical solutions, this means integrating constraints into the learning program. For example, it is possible to impose that the probability of a particular algorithmic result be equal for all individuals belonging to a particular group. It is also possible to impose independence between the result and a type of data, such as gender, income level or geographical location.
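
As a rough illustration of such a constraint, the sketch below runs a demographic-parity check on simulated scores (the gap in positive-decision rates between two groups) and then derives group-specific thresholds that would equalise selection rates; the data, threshold and target rate are all invented, and this is only one possible way of encoding the constraint.

```python
# Minimal sketch of one group-fairness check mentioned above: demographic
# parity, i.e. the rate of positive decisions should be (nearly) equal across
# a protected attribute. Scores, threshold and target rate are invented.
import numpy as np

rng = np.random.default_rng(1)

scores = rng.uniform(size=2000)              # model scores in [0, 1]
group = rng.choice(["A", "B"], size=2000)    # protected attribute
scores[group == "B"] -= 0.10                 # simulate a model biased against group B

def positive_rate(threshold, g):
    mask = group == g
    return float(np.mean(scores[mask] >= threshold))

# Single threshold for everyone: the selection rates differ across groups.
gap = abs(positive_rate(0.5, "A") - positive_rate(0.5, "B"))
print("demographic parity gap at threshold 0.5:", round(gap, 3))

# One simple (and debatable) post-processing fix: group-specific thresholds
# chosen so that both groups end up with the same selection rate.
target_rate = 0.40
thresholds = {g: float(np.quantile(scores[group == g], 1 - target_rate)) for g in ("A", "B")}
print("group-specific thresholds:", {g: round(v, 2) for g, v in thresholds.items()})
```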

But which equity rules should be adopted? For the controversial Parcoursup algorithm for higher education applications, several incompatibilities were raised. “Take the example of individual equity and group equity. If we consider only the criterion of individual equity, each student should have an equal chance at success. But this is incompatible with the criterion of group equity, which stipulates that admission rates should be equal for certain protected attributes, such as gender” says David Bounie. In other words, we cannot give an equal chance to all individuals regardless of their gender and, at the same time, apply criteria of gender equity. This example illustrates a concept familiar to researchers: the rules of equity contradict each other and are not universal. They depend on ethical and political values that are specific to individuals and societies.

There are complex, considerable challenges facing the social acceptance of algorithms and AI. But it is essential to be able to trace back through an algorithm’s decision chain in order to explain its results. “While this is perhaps not so important for film or music recommendations, it is an entirely different story for biometrics or medicine. Medical experts must be able to understand the results of an algorithm and refute them where necessary,” says David Bounie. This has raised hopes of transparency in recent years, but for now it remains wishful thinking. “The idea is to make algorithms public or restrict them in order to audit them for any potential difficulties,” the researchers explain. However, these recommendations are likely to come up against trade secret and personal data ownership laws. Algorithms, like their data sets, remain fairly inaccessible. Yet the need for transparency is fundamentally linked to that of responsibility. Algorithms amplify the biases that already exist in our societies. New approaches are required in order to track, identify and moderate them.

[1] The report (in French), Algorithms: bias, discrimination and fairness, was written by Patrice Bertail (University of Paris Nanterre), David Bounie, Stéphan Clémençon and Patrick Waelbroeck (Télécom ParisTech), with the support of Fondation Abeona.

Article written for I’MTech by Anne-Sophie Boutaud

To learn more about this topic:

Ethics, an overlooked aspect of algorithms?

Ethical algorithms in health: a technological and societal challenge

Since the enthusiasm for AI in healthcare brought on by IBM’s Watson, many questions on bias and discrimination in algorithms have emerged. Photo: Wikimedia.

Ethical algorithms in health: a technological and societal challenge

The possibilities offered by algorithms and artificial intelligence in the healthcare field raise many questions. What risks do they pose? How can we ensure that they have a positive impact on the patient as an individual? What safeguards can be put in place to ensure that the values of our healthcare system are respected?

 

A few years ago, Watson, IBM’s supercomputer, turned to the health sector and particularly oncology. It has paved the way for hundreds of digital solutions, ranging from algorithms for analyzing radiology images to more complex programs designed to help physicians in their treatment decisions. Specialists agree that these tools will spark a revolution in medicine, but there are also some legitimate concerns. The CNIL, in its report on the ethical issues surrounding algorithms and artificial intelligence, stated that they “may cause bias, discrimination, and even forms of exclusion”.

In the field of bioethics, four basic principles were announced in 1978: justice, autonomy, beneficence and non-maleficence. These principles guide research on the ethical questions raised by new applications for digital technology. Christine Balagué, holder of the Smart Objects and Social Networks chair at the Institut Mines-Télécom Business School, highlights a pitfall, however: “the issue of ethics is tied to a culture’s values. China and France for example have not made the same choices in terms of individual freedom and privacy”. Regulations on algorithms and artificial intelligence may therefore not be universal.

However, we are currently living in a global system where there is no secure barrier to the dissemination of IT programs. The report made by the CCNE and the CERNA on digital technology and health suggests that the legislation imposed in France should not be so stringent as to restrict French research. This would come with the risk of pushing businesses in the healthcare sector towards digital solutions developed by other countries, with even less controlled safety and ethics criteria.

Bias, value judgments and discrimination

While some see algorithms as flawless, objective tools, Christine Balagué, who is also a member of CERNA and the DATAIA Institute, highlights their weaknesses: “the relevance of the results of an algorithm depends on the information it receives in its learning process, the way it works, and the settings used”. Bias may be introduced at any of these stages.

Firstly, in the learning data: there may be an issue of representativeness, as with pharmacological studies, which are usually carried out on 20-to-40-year-old Caucasian men. The results establish the effectiveness and tolerance of the medicine for this population, but are not necessarily applicable to women, the elderly, etc. There may also be an issue of data quality: precision and reliability are not necessarily consistent from one source to another.

Data processing, the “code”, also contains elements which are not neutral and may reproduce value judgments or discriminations made by their designers. “The developers do not necessarily have bad intentions, but they receive no training in these matters, and do not think of the implications of some of the choices they make in writing programs” explains Grazia Cecere, economics researcher at the Institut Mines-Télécom Business School.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

In the field of medical imaging, for example, determining an area may be a subject of contention: a medical expert will tend to classify uncertain images as “positive” so as to avoid missing a potential anomaly that could be cancer, which increases the number of false positives. Conversely, a researcher will tend to maximize the precision of their tool, at the cost of more false negatives. They do not have the same objectives, and the way data are processed will reflect this value judgment.
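
A small synthetic example makes this trade-off visible; the score distributions and the two thresholds below are invented, and simply stand in for the “cautious clinician” and “precision-minded researcher” attitudes described above.

```python
# Synthetic illustration: the same classifier scores yield very different
# false-positive / false-negative counts depending on the decision threshold.
# All numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

n = 1000
has_anomaly = rng.random(n) < 0.10                       # 10% true anomalies
# Anomalies tend to score higher, but the two distributions overlap.
scores = np.where(has_anomaly,
                  rng.normal(0.7, 0.15, n),
                  rng.normal(0.4, 0.15, n))

for threshold in (0.45, 0.60):   # low threshold: catch everything; high: stay precise
    predicted = scores >= threshold
    false_positives = int(np.sum(predicted & ~has_anomaly))
    false_negatives = int(np.sum(~predicted & has_anomaly))
    print(f"threshold {threshold:.2f}: "
          f"false positives = {false_positives}, false negatives = {false_negatives}")
```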

Security, loyalty and opacity

The security of medical databases is a hotly-debated subject, with the risk that algorithms may re-identify data made anonymous and may be used for malevolent or discriminatory purposes (by employers, insurance companies, etc.). But the security of health data also relies on individual awareness. “People do not necessarily realize that they are revealing critical information in their publications on social networks, or in their Google searches on an illness, a weight problem, etc.”, says Grazia Cecere.

Applications labeled for “health” purposes are often intrusive and gather data which could be sold on to potentially malevolent third parties. But the data collected will also be used for categorization by Google or Facebook algorithms. Indeed, the main purpose of these companies is not to provide objective, representative information, but rather to make profit. In order to maintain their audience, they need to show that audience what it wants to see.

The issue here is in the fairness of algorithms, as called for in France in 2016 in the law for a digital republic. “A number of studies have shown that there is discrimination in the type of results or content presented by algorithms, which effectively restricts issues to a particular social circle or a way of thinking. Anti-vaccination supporters, for example, will see a lot more publications in their favor” explains Grazia Cecere. These mechanisms are problematic, as they get in the way of public health and prevention messages, and the most at-risk populations are the most likely to miss out.

The opaque nature of deep learning algorithms is also an issue for debate and regulation. “Researchers have created a model for the spread of a virus such as Ebola in Africa. It appears to be effective. But does this mean that we can deactivate WHO surveillance networks, made up of local health professionals and epidemiologists who come at a significant cost, when no-one is able to explain the predictions of the model?” asks Christine Balagué.

Researchers from both hard sciences and social sciences and humanities are looking at how to make these technologies responsible. The goal is to be able to directly incorporate a program which will check that the algorithm is not corrupt and will respect the principles of bioethics. A sort of “responsible by design” technology, inspired by Asimov’s three laws of robotics.

 

Article initially written in French by Sarah Balfagon, for I’MTech.

What best practices should be adopted to include women in technological environments?

In IT professions, diversity is all about inclusion, not exclusion

Ideas about diversity are often fallacious. Sociology has shown that women are seen as being responsible for their own inclusion in places where they are a minority. Chantal Morley is conducting research in this field at Institut Mines-Télécom Business School. She is especially interested in diversity in technological fields, whether they be companies, universities, engineering schools, etc. In this interview for I’MTech, she goes over the right approaches in promoting diversity, but also the wrong ones. 

 

You suggest that we should no longer approach the issue of diversity through a filter of exclusion, and instead through inclusion. What is the difference?

Chantal Morley: This idea comes from seeing the low impact of the measures taken in the last 20 years. They are aimed at women solely in the form of: “you must keep informed, you have to make an effort to be included”.  But men don’t have to make these efforts, and history tells us that at one point in time, women didn’t have to either. These calls and injunctions target women outside working spaces or territories: this is what we call the exclusion filter. The idea is that women are excluded and should solve the problem themselves. Thinking in terms of inclusion means looking at practices in companies, discussion spaces and education. It is about questioning equality mechanisms, attitudes and representations.

Read on I’MTech: Why women have become invisible in IT professions

In concrete terms, what difference will this make?

CM: The reason women do not enter IT professions is not because they are not interested or that they don’t make the effort, but because the field is a highly masculine one. By looking at what is going on inside an organization, we see that technical professions, from which women have long been excluded, affirm masculinity. Still today, there is a latent norm, often subconsciously activated, which tells us that a man will be more at ease with technical issues. Telling women that they are foreign to these issues, through small signs, contributes to upholding this norm. This is how we have ended up with a masculine technical culture in companies, schools and universities. This culture is constantly reinforced by everyday interactions – between students, with teachers, between teachers, in institutional communication. The impact of these interactions is even stronger when their socio-sexual nature goes unquestioned. This is why practices must be questioned, which implies looking at what is going on inside organizations.

What are the obstacles to diversity in technological fields?

CM: Organizations send out signals, marking their territory. On company websites, it is often men who are represented in high-responsibility jobs. In engineering schools, representations are also heavily masculine, from school brochures to gala posters produced by student associations. The masculine image dominates. There is also a second dimension, that of recognition. In technology professions, women are often suspected of illegitimacy, they are often required to prove themselves. Women who reach a high status in the hierarchy of a company, or who excel in elite courses, feel this discreet suspicion and it can make them doubt themselves.

What does a good inclusion policy involve?

CM: We carried out a sociological study on several inclusion policies used in organizations. A successful example is that of Carnegie-Mellon University in the United States. They were first asked to undertake an analysis of their practices. They realized that they were setting up barriers to women entering technology courses. For example, in selecting students, they were judging applicants on their prior experience in IT, things that are not taught in schools. They expected students to have skills inherited from a hacker culture or other social context favoring the development of these skills. However, the university realized that not only are these skills usually shared in masculine environments, but also that they are not a determining factor in successful completion of studies. They reviewed their admission criteria. This is a good example of analyzing the space and organization in terms of inclusion. In one year, the percentage of female students in IT rose from 7% to 16%, reaching a stable level of 40% after four years. The percentage of female applicants accepted who then chose to enroll more than doubled in a few years.

Read on I’MTech: Gender diversity in ICT as a topic of research

Once women have joined these spaces, is the problem solved?

CM: Not at all. Once again, Carnegie-Mellon University is a good example. On average, female students were giving up their IT studies twice as often as men. This is where notions of culture and relations come in. New students were subject to rumors about quotas. The men believed the women were only there to satisfy statistics, because they themselves had been conditioned by clichés on the respective skills of men and women in IT. The university’s response was a compulsory first-year course on gender and technologies, to break down preconceived ideas.

How necessary is it to use compulsory measures?

CM: There are two major reasons. On the one hand, stereotypes are even stronger when they are activated subconsciously: we therefore have to create conditions under which we can change the views of people within a group. In this case, the compulsory course on gender or the differentiated first-year courses enable all students to take the same courses in the second year, boost self-confidence and create a common knowledge base. The measure improved the group’s motivation and their desire to move forward. Cultural change is generally slow, especially when the non-included population is strongly in a minority. This is why we have to talk about quotas. Everyone is very uneasy with this idea, but it is an interim solution, which will lead to rapid progress in the situation. For example, the Norwegian University of Science and Technology (NTNU), another case of successful inclusion, decided to open additional places for women only. Along with a very clear communication strategy, this approach saw female student numbers rise from 6% to 38% in one year and saw the creation of a “community” of female engineers. The percentage of women admitted stabilized, and the quotas were abandoned after three years. The issue of separate spaces is also interesting. Carnegie-Mellon, for example, launched an association for female IT students which it still supports. With help from the school’s management, this association organizes events with female professionals, as women felt excluded from the traditional alumni networks. It has become the largest student association on campus, and now that the transition period is over, it is gradually opening up to other forms of diversity, such as ethnic diversity.

Is there such a thing as bad inclusion measures?

CM: Generally speaking, all measures aimed at promoting women as women are problematic. The Norwegian University of Science and Technology is an example of this. In 1995, it launched an inclusion program attracting women by taking the “difference” approach, the idea that they were complementary to men. This program was statistically successful: there was an increase in the number of women in technology courses. Sociological studies also showed that women felt wanted in these training spaces. But the studies also showed that these women were embarrassed: the notion of complementarity implied that the university considered women’s strong points to be different from men’s. This is not true, and here we see the fundamental difference with Carnegie-Mellon, which attracted women by breaking down this type of cliché.

Since 1995, has this stance on complementarity changed?

CM: At the Norwegian University of Science and Technology, yes. After the reports from female students, the approach was largely modified. Unfortunately, the idea of complementarity is still too present, especially in companies. All too often, we hear things like “having a woman in a team improves communication” or “a feminine presence softens the atmosphere”. Not only is there no sociological reality behind these ideas, but also they impose qualities women are expected to have. This is the performative side of gender: we conform to what is considered appropriate and expected of us. A highly talented woman in a role which does not require any particular communication skills will be judged preferentially on these criteria rather than on her actual tasks. This representation must be broken down. Including women is not important because they improve the atmosphere in a team. It is important because they represent as large a talent pool as men.

 

The “Miharu Takizakura”, a weeping cherry tree over a thousand years old, growing on soil contaminated by the Fukushima accident.

The cherry trees of Fukushima

Written by Franck Guarnieri, Aurélien Portelli and Sébastien Travadel, Mines ParisTech.
The original version has been published on The Conversation.


It’s 2019 and for many, the Fukushima Daiichi nuclear disaster has become a distant memory. In the West, the event is considered to be over. Safety standards have been audited and concerns about the sector’s security and viability officially addressed.

Still, the date remains traumatic. It reminds us of our fragility in the face of overpowering natural forces, and those that we unleashed ourselves. A vast area of Japan was contaminated, tens of thousands of people were forced from their homes, businesses closed. The country’s nuclear power plants were temporarily shut down and fossil fuel consumption rose sharply to compensate.

There’s also much work that remains to be done. Dismantling and decontamination will take several decades and many unprecedented challenges remain. Reactors are still being cooled, spent fuel must be removed, radioactive water has to be treated. Radioactivity measurements on site cannot be ignored, and are a cause for concern for the more than 6,000 people working there. Still, the risk of radiation may be secondary given the ongoing risk of earthquakes and tsunamis.

Surprisingly, for many Japanese the disaster has a different connotation – it’s seen as having launched a renaissance.

Rebuilding the world

Our research team became aware of the need to re-evaluate our ideas about the Fukushima Daiichi nuclear disaster during a visit in March 2017. As we toured the site, we discovered a hive of activity – a place where a new relationship between man, nature and technology is being built. The environment is completely artificial: all vegetation has been eradicated and the neighbouring hills are covered by concrete. Yet, at the heart of this otherworldly landscape there are cherry trees – and they were in full bloom. Curious, we asked the engineer who accompanied us why they were still there. His answer was intriguing. The trees will not be removed, even though they block access routes for decontamination equipment.

Cherry trees have many meanings for the Japanese. Since ancient times, they have been associated with 気 (ki, or life force) and, in some sense, reflect the Japanese idea of time as never-ending reinvention. A striking example: in the West, we exhibit objects in museums, permanently excluding them from everyday life. This is not the case in Japan. For example, the treasures making up the Shōsō-in collection are only exhibited once each year, and the purpose is not to represent the past, but to show that these objects are still in the present. Another illustration is the Museum Meiji-mura, where more than 60 buildings from the Meiji era (1868-1912) have been relocated. The idea of ongoing reinvention is manifested in buildings that have retained their original function: visitors can send a postcard from a 1909 post office or ride on an 1897 steam locomotive.

We can better understand the relationship between this perception of time and events at Fukushima Daiichi by revisiting the origins of the country’s nuclear-power industry. The first facility was a British Magnox reactor, operational from 1966 to 1998. In 1971, construction of the country’s first boiling-water reactor was supervised by the US firm General Electric. These examples illustrate that for Japan, nuclear power is a technology that comes from elsewhere.

When the Tōhoku earthquake and tsunami struck in 2011, Japan’s inability to cope with the unfolding events at the Fukushima Daiichi nuclear power plant stunned the world. Eight years on, for the Japanese it is the nation’s soul that must be resuscitated. Not by the rehabilitation of a defective “foreign” object, but by the creation of new, “made in Japan” technologies. Such symbolism not only refers to the work being carried out by the plant’s operator, the Tokyo Electric Power Company (TEPCO), but also reflects Japanese society.

A photo selected for the NHK Fukushima cherry-tree competition. NHK/MCJP

The miraculous cherry tree

Since 2012, the Japanese public media organisation NHK has organised the “Fukushima cherry tree” photo competition to symbolise national reconstruction. Yumiko Nishimoto’s “Sakura Project” has the same ambition. Before the Fukushima disaster, Nishimoto lived in the nearby town of Naraha. Her family was evacuated and she was only able to return in 2013. Once home, she launched a national appeal for donations to plant 20,000 cherry trees along the prefecture’s 200-kilometre coastline. The aim of the 10-year project is simply to restore hope among the population and the “determination to create a community”, Nishimoto has said. The idea captured the country’s imagination and approximately a thousand volunteers turned up to plant the first trees.

More recently, the “Miharu Takizakura” cherry tree has made headlines. More than 1,000 years old and growing in land contaminated by the accident, its presence is seen as a miracle and it attracts tens of thousands of visitors. The same sentiment is embodied in the Olympic torch relay, which will start from Fukushima on March 26, 2020, for a 121-day trip around Japanese prefectures during the cherry-blossom season.

Fukushima, the flipside of Chernobyl?

This distinctly Japanese perception of Fukushima contrasts with its interpretation by the West, and suggests that we re-examine the links between Fukushima and Chernobyl. Many saw the Fukushima disaster as Chernobyl’s twin – another example of the radioactive “evil”, a product of the industrial hubris that had dug the grave of the Soviet Union.

In 1986 the fatally damaged Chernobyl reactor was encased in a sarcophagus and the surrounding area declared a no-go zone. Intended as a temporary structure, in 2017 the sarcophagus was in turn covered by the “New Safe Confinement”, a monumental structure designed to keep the site safe for 100 years. This coffin in a desert continues to terrify a population that is regularly told that it marks the dawn of a new era for safety.

Two IAEA agents examine work at Unit 4 of the Fukushima Daiichi Nuclear Power Station (April 17, 2013). Greg Webb/IAEA, CC BY

At the institutional level, the International Atomic Energy Agency (IAEA) responded to Chernobyl with the concept of “safety culture”. The idea was to resolve, once and for all, the issue of nuclear power plant safety. Here, the Fukushima accident had little impact: Infrastructure was damaged, the lessons learned were incorporated into safety standards, and resolutions were adopted to bring closure. In the end, the disaster was unremarkable – no more than a detour from standard procedures that had been established following Chernobyl. For the IAEA, the case is closed. The same applies to the nuclear sector as a whole, where business has resumed more or less as usual.

To some extent, Japan has fallen in line with these ideas. The country is improving compliance with international regulations and has increased its contribution to the IAEA’s work on earthquake response. But this Western idea of linear time is at odds with the country’s own framing of the disaster. For many Japanese, events are still unfolding.

While Chernobyl accelerated the collapse of the Soviet Union, Fukushima Daiichi has become a showcase for the Japanese government. The idea of ongoing reinvention extends to the entire region, through a policy of repopulation. Although highly controversial, this approach stands in stark contrast to Chernobyl, which remains isolated and abandoned.

Other differences are seen in the reasons given for the causes of the accident: The IAEA concluded that the event was due to a lack of safety culture – in other words, organisational failings led to a series of unavoidable effects that could have been predicted – while Japanese scientists either drew an analogy with events that occurred during the Second World War, or attributed the accident to the characteristics of the Japanese people.

Before one dismisses such conclusions as irrational, it’s essential to think again about the meaning of the Fukushima disaster.

The Conversation

The installation of a data center in the heart of a city, like this one belonging to Interxion in La Plaine Commune in Île-de-France, rarely goes unnoticed.

Data centers: when digital technology transforms a city

As the tangible part of the digital world, data centers are flourishing on the outskirts of cities. They are promoted by elected representatives, sometimes contested by locals, and are not yet well-regulated, raising new social, legal and technical issues. Here is an overview of the challenges this infrastructure poses for cities, with Clément Marquet, doctoral student in sociology at Télécom ParisTech, and Jean-Marc Menaud, researcher specialized in Green IT at IMT Atlantique.

 

On a global scale, information technology accounts for almost 2% of greenhouse gas emissions, as much as civil aviation. In addition, “the digital industry consumes 10% of the world’s energy production,” explains Clément Marquet, sociology researcher at Télécom ParisTech, who is studying this hidden side of the digital world. Of particular concern is the energy consumed to keep this infrastructure running smoothly, in the name of ensuring reliability and maintaining a certain level of service quality.

With the demand for real-time data, the storage and processing of these data must be carried out where they are produced. This explains why data centers have been popping up throughout the country over the past few years. But not just anywhere. There are close to 150 throughout France. “Over a third of this infrastructure is concentrated in Ile-de-France, in Plaine Commune – this is a record in Europe. It ends up transforming urban areas, and not without sparking reactions from locals,” the researcher says.

Plaine Commune, a European Data Valley

In his work, Clément Marquet questions why these data centers are integrated into urban areas. He highlights their “furtive” architecture, as they are usually built “in new or refitted warehouses, without any clues or signage about the activity inside”. He also looks at the low amount of interest from politicians and local representatives, partly due to their lack of knowledge on the subject. He takes Rue Rateau in La Courneuve as an example. On one side of this residential street, just a few meters from the first houses, a brand-new data center was inaugurated at the end of 2012 by Interxion. The opening of this installation did not run smoothly, as the sociologist explains:

“These 1,500 to 10,000 m² spaces have many consequences for the surrounding urban area. They are a burden on energy distribution networks, but that is not all. The air conditioning units required to keep them cool create noise pollution. Locals also highlight the risk of explosion due to the 568,000 liters of fuel stored on the roof to power the backup generator, and the fact that energy is not recycled in the city heating network. Across the Plaine Commune agglomeration, there are also concerns regarding the low number of jobs created locally compared with the amount of land occupied. The digital world is no longer just virtual.”

Because these data centers have such high energy needs, the debate in Plaine Commune has centered on the risk of virtual saturation of the electricity supply. Data centers reserve more energy than they actually consume, in order to deal with any shortages. This reserved electricity cannot be put to other uses. And so, while La Courneuve is home to almost 27,000 inhabitants, the data center requires the equivalent of a city of 50,000 people. The sociologist argues that the inhabitants were not consulted when this building was installed. They ended up taking legal action against the installation. “This raises the question of the viability of these infrastructures in the urban environment. They are invisible and yet invasive”.

Environmentally friendly integration possible

One of the avenues being explored to make these data centers more virtuous, and more acceptable, is to give them an environmentally friendly role by hooking them up to city heating networks. Data centers could become energy producers, rather than just consumers. In theory, this would make it possible to heat swimming pools or houses. However, it is not an easy operation. In 2015 in La Courneuve, Interxion announced that it would connect a forthcoming 20,000 m² center. It did not follow through, breaking its promise of a change in practice. Connecting to a city heating network requires major, complicated coordination between all parties. The project ran up against the hosts’ reluctance to communicate on their consumption. Hosts also do not always have the tools required to recycle heat.

Another possibility is to optimize the energy performance of the data centers themselves. Many Green IT researchers are working on environmentally responsible digital technology. Jean-Marc Menaud, coordinator of the collaborative project EPOC (Energy Proportional and Opportunistic Computing systems) and director of the CPER SeDuCe project (Sustainable Data Center), is one of them. He is working on anticipating consumption, that is, predicting the energy needs of an application, combined with anticipating electricity production. “Energy consumption by digital technologies rests on three pillars: one third is due to the non-stop operation of data centers, one third is due to the Internet network itself,” he explains, and the last third comes down to user terminals and smart objects.

Read on I’MTech: Data centers, taking up the energy challenge

Since summer 2018, the IMT Atlantique campus has hosted a new type of data center, one devoted to research and available for use by the scientific community. “The main objective of SeDuCe is to reduce the energy consumed by the air conditioning system. Because in energy, nothing goes to waste, everything can be transformed. If we want the servers to run well, we have to evacuate the heat, which is at around 30-35°C. Air conditioning is therefore vital,” he continues. “And in the majority of cases, air conditioning is colossal: for 100 watts required to run a server, another 100 are used to cool it down”.

How does SeDuCe work? The data center, with a 1,000-core, or 50-server, capacity, is full of sensors and probes closely monitoring temperatures. The servers are isolated from the room in airtight racks. This airtight confinement reduces cooling costs tenfold: for 100 watts used by the servers, only 10 watts are required to cool them. “Soon, SeDuCe will be powered during the daytime by solar panels. Another of our goals is to get users to adapt the way they work according to the amount of energy available. A solution that can absolutely be applied to even the most impressive data centers.” Proof that the energy transition is possible via the cloud, too.
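
Expressed as a power usage effectiveness (PUE) style ratio, and counting only the IT load and its cooling (a simplification, since real facilities have other overheads), the figures quoted above work out as follows:

```python
# PUE-style ratio (total power / IT power), restricted here to IT load plus
# cooling, using the orders of magnitude quoted in the article.
def pue(it_watts: float, cooling_watts: float) -> float:
    return (it_watts + cooling_watts) / it_watts

print("conventional machine room:", pue(100, 100))  # 100 W of cooling per 100 W of IT -> 2.0
print("SeDuCe airtight racks    :", pue(100, 10))   # 10 W of cooling per 100 W of IT -> 1.1
```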

 

Article written by Anne-Sophie Boutaud, for I’MTech.


Algorithmic bias, discrimination and fairness

David Bounie, Professor of Economics, Head of Economics and Social Sciences at Télécom ParisTech

Patrick Waelbroeck, Professor of Industrial Economy and Econometrics at Télécom ParisTech and co-founder of the Chair Values and Policies of Personal Information


The original version of this article was published on the website of the Chair Values and Policies of Personal Information. This Chair brings together researchers from Télécom ParisTech, Télécom SudParis and Institut Mines Télécom Business School, and is supported by the Mines-Télécom Foundation.


 

Algorithms rule our lives. They increasingly intervene in our daily activities – career paths, adverts, recommendations, scoring, online searches, flight prices – as improvements are made in data science and statistical learning.

Despite being initially considered as neutral, they are now blamed for biasing results and discriminating against people, voluntarily or not, according to their gender, ethnicity or sexual orientation. In the United States, studies have shown that African American people were more penalised in court decisions (Angwin et al., 2016). They are also discriminated against more often on online flat rental platforms (Edelman, Luca and Svirsky, 2017). Finally, online targeted and automated ads promoting job opportunities in the Science, Technology, Engineering and Mathematics (STEM) fields seem to be more frequently shown to men than to women (Lambrecht and Tucker, 2017).

Algorithmic bias raises significant issues in terms of ethics and fairness. Why are algorithms biased? Is bias unpreventable? If so, how can it be limited?

Three sources of bias have been identified, in relation to cognitive, statistical and economic aspects. First, algorithm results vary according to the way programmers, i.e. humans, coded them, and studies in behavioural economics have shown there are cognitive biases in decision-making.

  • For instance, a bandwagon bias may lead a programmer to follow popular models without checking whether these are accurate.
  • Anticipation and confirmation biases may lead a programmer to favour their own beliefs, even though available data challenges such beliefs.
  • Illusory correlation may lead someone to perceive a relationship between two independent variables.
  • A framing bias occurs when a person draws different conclusions from the same dataset based on the way the information is presented.

Second, bias can be statistical. The phrase ‘Garbage in, garbage out’ refers to the fact that even the most sophisticated machine will produce incorrect and potentially biased results if the input data provided is inaccurate. After all, it is pretty easy to believe in a score produced by a complex proprietary algorithm and seemingly based on multiple sources. Yet, if the data set on which the algorithm is trained to categorise or predict is partial or inaccurate, as is often the case with fake news, trolls or fake identities, results are likely to be biased. What happens if the data is incorrect? Or if the algorithm is trained using data from US citizens, who may behave very differently from European citizens? Or even, if certain essential variables are omitted? For instance, how might machines encode relational skills and emotional intelligence (which are hard for machines to grasp, as they do not feel emotions), leadership skills or teamwork in an algorithm? Omitted variables may lead an algorithm to produce a biased result for the simple reason that the omitted variables may be correlated with the variables used in the model. Finally, what happens when the training data comes from truncated samples or is not representative of the population that you wish to make predictions for (sample-selection bias)? In his Nobel Memorial Prize-winning research, James Heckman showed that selection bias was related to omitted-variable bias. Credit scoring is a striking example. In order to determine which risk category a borrower belongs to, algorithms rely on data related to people who were eligible for a loan in a particular institution – they ignore the files of people who were denied credit, did not need a loan or got one in another institution.
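
The credit-scoring case lends itself to a short simulation; the variables and probabilities below are invented, but they show how a default-rate estimate computed only on accepted applicants does not transfer to the whole pool of would-be borrowers.

```python
# Sketch of sample-selection bias in credit scoring: the institution only ever
# observes the files it accepted, so statistics computed on that truncated
# sample misrepresent the full applicant population. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
signal = rng.normal(0, 1, n)                    # stand-in for creditworthiness information
default_prob = 1 / (1 + np.exp(2 * signal))     # weaker signal -> higher default risk
defaulted = rng.random(n) < default_prob

accepted = signal > 0                           # the institution only keeps these files

print("default rate, accepted applicants only:", round(float(defaulted[accepted].mean()), 3))
print("default rate, whole applicant pool    :", round(float(defaulted.mean()), 3))
```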

Third, algorithms may bias results for economic reasons. Think of online automated advisors specialised in selling financial services. They can favour the products of the company giving the advice, at the expense of the consumer, if these financial products are more expensive than the market average. Such a situation is called price discrimination. Besides, in the context of multi-sided platforms, algorithms may favour third parties who have signed agreements with the platform. In the context of e-commerce, the European Commission recently fined Google 2.4bn euros for promoting its own products at the top of search results on Google Shopping, to the detriment of competitors. Other disputes have arisen over apps simply being delisted from search results on the Apple Store or being downgraded in marketplaces’ search results.

Algorithms thus come with bias, which seems unpreventable. The question now is: how can bias be identified and discrimination be limited? Algorithms and artificial intelligence will indeed only be socially accepted if all actors are capable of meeting the ethical challenges raised by the use of data and following best practice.

Researchers first need to design fairer algorithms. Yet what is fairness, and which fairness rules should be applied? There is no easy answer to these questions; such debates have divided researchers in social science and philosophy for centuries. Fairness is a normative concept, and many of its definitions are incompatible with one another. For instance, compare individual fairness and group fairness. One simple criterion of individual fairness is that of equal opportunity, the principle according to which individuals with identical capacities should be treated similarly. However, this criterion is incompatible with group fairness, according to which individuals of the same group, such as women, should be treated similarly. In other words, equal opportunity for all individuals cannot exist if a fairness criterion is applied to gender. These two notions of fairness are incompatible.
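
A tiny worked example, with invented numbers, shows why the two notions collide: applying the same capability-based rule to everyone produces unequal admission rates across groups, so group parity cannot hold at the same time.

```python
# Invented numbers only: individual fairness (admit every applicant who meets
# the same capability bar) versus group fairness (equal admission rates).
group_stats = {
    # group: (number of applicants, number meeting the capability bar)
    "group_1": (100, 60),
    "group_2": (100, 30),
}

for group, (applicants, qualified) in group_stats.items():
    # Individual fairness: every qualified applicant is admitted, whatever the group.
    admission_rate = qualified / applicants
    print(f"{group}: admission rate = {admission_rate:.0%}")

# 60% vs 30%: identical individuals are treated identically, yet group rates differ.
# Forcing both rates to, say, 45% would mean refusing qualified applicants in
# group_1 or admitting unqualified ones in group_2.
```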

A second challenge faces companies, policy makers and regulators, whose duty it is to promote ethical practices – transparency and responsibility – through an efficient regulation of the collection and use of personal data. Many issues arise. Should algorithms be transparent and therefore audited? Who should be responsible for the harm caused by discrimination? Is the General Data Protection Regulation fit for algorithmic bias? How could ethical constraints be included? Admittedly they could increase costs for society at the microeconomic level, yet they could help lower the costs of unfairness and inequality stemming from an automated society that wouldn’t comply with the fundamental principles of unbiasedness and lack of systematic discrimination.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

Using personalised services without privacy loss: what solutions does technology have to offer?

Online services are becoming more and more personalised. This transformation, designed to satisfy the end-user, may be seen as an economic opportunity, but also as a risk, since personalised services usually require personal data to be efficient. These two perceptions do not seem compatible. Maryline Laurent and Nesrine Kaâniche, researchers at Télécom SudParis and members of the Chair Values and Policies of Personal Information, tackle this tough issue in this article. They give an overview of how technology can solve this equation by allowing both personalisation and privacy.


This article was initially published on the Chair Values and Policies of Personal Information website.


 

Personalised services have become a major stake in the IT sector as they require actors to improve both the quality of the collected data and their ability to use them. Many services are running the innovation race, namely those related to companies’ information systems, government systems, e-commerce, access to knowledge, health, energy management, leisure and entertainment. The point is to offer end-users the best possible quality of experience, which in practice implies qualifying the relevance of the provided information and continuously adapting services to consumers’ uses and preferences.

Personalised services offer many perks, among which targeted recommendations based on interests, events, news, special offers for local services or goods, movies, books, and so on. Search engines return results that are usually personalised based on a user’s profile and actually start personalising as soon as a keyword is entered, by identifying semantics. For instance, the noun ‘mouse’ may refer to a small rodent if you’re a vet, a stay mouse if you’re a sailor, or a device that helps move the cursor on a computer screen if you’re an Internet user. In particular, mobile phone applications use personalisation; health and wellness apps (e.g. the new FitBit and Vivosport trackers) can come in very handy as they offer tips to improve one’s lifestyle, help users receive medical care remotely, or warn them on any possible health issue they detect as being related to a known illness.

How is personalisation technologically carried out?

When surfing the Internet and using mobile phone services or apps, users are required to authenticate. Authentication makes it possible to connect their digital identity with the personal data that is saved and collected from their exchanges. Some software packages also include trackers, such as cookies, which are exchanged between a browser and a service provider, or even a third party, and make it possible to track individuals. Once an activity is linked to a given individual, a provider can easily fill out their profile with personal data, e.g. preferences and interests, and run efficient algorithms, often based on artificial intelligence (AI), to provide them with a piece of information, a service or targeted content. Sometimes, although more rarely, personalisation may rely solely on a situation experienced by a user: the simple fact that they are geolocated in a certain place can trigger an ad or targeted content to be sent to them.
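
The deliberately simplified sketch below illustrates the principle rather than any provider’s actual mechanism: an identifier such as a cookie keys a stored profile, and the profile steers how an ambiguous query like “mouse” is interpreted.

```python
# Toy illustration of profile-based personalisation: a cookie identifier keys
# a stored profile, which disambiguates a query. All names and data are invented.
profiles = {
    "cookie_veterinarian": {"interests": ["animals", "clinic"]},
    "cookie_developer": {"interests": ["computers", "programming"]},
}

MEANINGS = {
    "mouse": {"animals": "small rodent", "computers": "pointing device"},
}

def personalised_meaning(cookie_id: str, query: str) -> str:
    profile = profiles.get(cookie_id, {"interests": []})
    for interest in profile["interests"]:
        if interest in MEANINGS.get(query, {}):
            return MEANINGS[query][interest]
    return "generic result"

print(personalised_meaning("cookie_veterinarian", "mouse"))  # -> small rodent
print(personalised_meaning("cookie_developer", "mouse"))     # -> pointing device
print(personalised_meaning("unknown_visitor", "mouse"))      # -> generic result
```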

What risks may arise from enhanced personalisation?

Enhanced personalisation causes risks for users in particular. Based on geolocation data only, a third party may determine that a user goes to a specialised medical centre to treat cancer, or that they often spend time at a legal advice centre, a place of worship or a political party’s local headquarters. If such personal data is sold on a marketplace[1] and thus made accessible to insurers, credit institutions, employers and lessors, their use may breach user privacy and freedom of movement. And this is just one kind of data. If these were to be cross-referenced with a user’s pictures, Internet clicks, credit card purchases and heart rate… What further behavioural conclusions could be drawn? How could those be used?

One example that comes to mind is price discrimination,[2] i.e. charging different prices for the same product or service to different customers according to their location or social group. Democracies can also suffer from personalisation, as the Cambridge Analytica scandal has shown. In April 2018, Facebook confessed that U.S. citizens’ votes had been influenced through targeted political messaging in the 2016 election.

Responsible vs. resigned consumers

As pointed out in a survey carried out by the Chair Values and Policies of Personal Information (CVPIP) with the French audience measurement company Médiamétrie,[3] some users and consumers have adopted data protection strategies, in particular by using software that prevents tracking or enables anonymous online browsing. Yet this requires a certain effort on their part. Depending on their purpose, they choose either a personalised service or a generic one, in order to keep a degree of control over their informational profile.

What if technology could solve the complex equation that pits personalised services against privacy?

Based on this observation, the Chair’s research team carried out a scientific study on Privacy Enhancing Technologies (PETs). In this study, we list the technologies best able to meet the needs of personalised services, give technical details about them and analyse them comparatively. As a result, we suggest classifying these solutions into eight families, which are themselves grouped into the following three categories:

  • User-oriented solutions. Users manage the protection of their identity by themselves by downloading software that allows them to control outgoing personal data. Protection solutions include attribute disclosure minimisation and noise addition (a minimal sketch of the noise-addition approach is given after this list), privacy-preserving certification,[4] and secure multiparty computation (i.e. computation distributed among several independent collaborators).
  • Server-oriented solutions. Any server we use is, by nature, strongly involved in personal data processing. Several protection approaches therefore focus on servers, which can anonymise databases before sharing or selling data, run heavy computations on encrypted data upon customer request, implement automatic data self-destruction after a certain amount of time, or offer Private Information Retrieval solutions, i.e. non-intrusive search tools that confidentially return relevant content to customers.
  • Channel-oriented solutions. What matters here is the quality of the communication channel that connects users with servers, which may be intermediated and/or encrypted, and the quality of the exchanged data, which may be deliberately degraded. There are two approaches to such solutions: securing communications and using trusted third parties as intermediaries in a communication.
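As a purely illustrative example of the user-oriented family, the following Python sketch shows one classic way of adding noise locally before anything is disclosed: randomised response, a simple local differential privacy mechanism. The epsilon value and function names are assumptions chosen for readability; this is a sketch of the general noise-addition idea, not an implementation of any specific solution surveyed in the study.

```python
# Randomised response: the user's device adds noise before disclosure,
# so the server never sees the raw attribute. Illustrative parameters only.
import math
import random


def randomised_response(true_value: bool, epsilon: float = math.log(3)) -> bool:
    """Report the true attribute with probability e^eps / (e^eps + 1); otherwise flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_value if random.random() < p_truth else not true_value


# A single noisy report reveals little about one user, but an aggregate over
# many users still estimates the population statistic the service needs.
reports = [randomised_response(True) for _ in range(10_000)]
p = math.exp(math.log(3)) / (math.exp(math.log(3)) + 1.0)  # 0.75 with this epsilon
observed = sum(reports) / len(reports)
estimated_share = (observed - (1 - p)) / (2 * p - 1)
print(f"estimated share of users with the attribute: {estimated_share:.2f}")
```

The design trade-off is exactly the one discussed throughout this article: the stronger the noise (the smaller epsilon), the better the privacy and the poorer the personalisation or statistics that can be derived from the data.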

Some PETs are strictly in line with the ‘data protection by design’ concept, as they implement data disclosure minimisation or data anonymisation, as required by Article 25-1 of the General Data Protection Regulation (GDPR).[5] Data and privacy protection methods should be implemented at the earliest possible stages of designing and developing IT solutions.

Our state-of-the-art review shows that using PETs raises many issues. Through a cross-cutting analysis linking CVPIP specialists’ different fields of expertise, we were able to identify several of these challenges:

  • Using AI to better incorporate privacy into personalised services;
  • Improving the performance of existing solutions by adapting them to the limited capacities of mobile devices running personalised services;
  • Looking for the best economic trade-off between privacy, use of personal data and user experience;
  • Determining how much it would cost industrial players to include PETs in their solutions, in terms of development, business model and adjustments to their Privacy Impact Assessment (PIA);
  • Understanding how PETs may be seen as a way of bypassing or of enforcing legislation.

From personal data to artificial intelligence: who benefits from our clicking?

Clicking, liking, sharing: all of our digital activities produce data. This information, which is collected and monetized by big digital information platforms, is on its way to becoming the virtual black gold of the 21st century. Have we all become digital workers? Digital labor specialist and Télécom ParisTech researcher Antonio Casilli has recently published a work entitled En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work). He sat down with us to shed some light on this exploitation 2.0.

 

Who we are, what we like, what we do, when and with whom: our virtual personal assistants and other digital contacts know everything about us. The digital space has become the new sphere of our private lives. This virtual social capital is the raw material for tech giants. The profitability of digital platforms like Facebook, Airbnb, Apple and Uber relies on the massive analysis of users’ data for advertising purposes. In his work entitled En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work), Antonio Casilli explores the emergence of surveillance capitalism, an opaque and invisible form of capitalism marking the advent of a new digital proletariat: digital labor – or working with our digits. From the click worker who performs microtasks, aware of and paid for his activity, to the user who produces data implicitly, the sociologist analyzes the hidden face of this work carried out outside the world of work, and the all-too-tangible reality of this intangible economy.

Read on I’MTech: What is digital labor?

Antonio Casilli focuses particularly on net platforms’ ability to put to work users who are convinced they are consumers rather than producers. “Free access to certain digital services is merely an illusion. Each click fuels a vast advertising market and produces data which is mined to develop artificial intelligence. Every “like”, post, photo, comment and connection fulfils one condition: producing value. This digital labor is either very poorly paid or entirely unpaid, since no one receives compensation that measures up to the value produced. But it is work nevertheless: a source of value that is traced, measured, assessed and contractually regulated by the platforms’ terms and conditions of use,” explains the sociologist.

The hidden, human face of machine learning

For Antonio Casilli, digital labor is a new form of work which remains invisible, even though it is produced from our digital traces. Far from marking the disappearance of human labor, with robots taking over the work humans once did, this click work blurs the boundaries between work produced implicitly and formally recognized employment. And for good reason: microworkers paid by the task, and user-producers like ourselves, are indispensable to these platforms. The data they generate serves as the basis for machine learning models: behind the automation of a given task, such as visual or text recognition, humans are actually fueling applications, for example by marking clouds on images of the sky or by typing out words.
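As a purely hypothetical illustration of how microtasked human labels become training data, the Python sketch below shows several workers tagging the same images, their answers being aggregated by majority vote, and the result forming a labelled training set. The file names, labels and majority_vote helper are invented for the example; real annotation pipelines are far larger but follow the same logic.

```python
# Hypothetical example: aggregating click workers' answers into training labels.
from collections import Counter

# Each image was shown to three paid microworkers ("does this sky photo contain clouds?").
worker_labels = {
    "sky_001.jpg": ["cloud", "cloud", "clear"],
    "sky_002.jpg": ["clear", "clear", "clear"],
    "sky_003.jpg": ["cloud", "clear", "cloud"],
}


def majority_vote(labels: list[str]) -> str:
    """Keep the label most workers agreed on; it becomes the 'ground truth'."""
    return Counter(labels).most_common(1)[0][0]


training_set = {image: majority_vote(labels) for image, labels in worker_labels.items()}
print(training_set)
# {'sky_001.jpg': 'cloud', 'sky_002.jpg': 'clear', 'sky_003.jpg': 'cloud'}
# A vision model is then trained on these human-produced labels; the "smart"
# service ultimately rests on this paid or unpaid human work.
```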

“As conventional wisdom would have it, these machines learn by themselves. But to calibrate their algorithms or improve their services, platforms need a huge number of people to train and test them,” says Antonio Casilli. One of the best-known examples is Mechanical Turk, a service offered by the American giant Amazon. Ironically, its name is a reference to a hoax dating back to the 18th century: an automaton chess player called the “Mechanical Turk” was able to win games against human opponents, but the Turk was actually operated by a real human hiding inside.

Likewise, certain so-called “smart” services rely heavily on unskilled workers: a sort of “artificial” artificial intelligence. In this work designed to benefit machines, digital workers are poorly paid to carry out micro-tasks. “Digital labor marks the appearance of a new way of working which can be called “taskified,” since human activity is reduced to a simple click, and “datafied,” because it’s a matter of producing data so that digital platforms can obtain value from it,” explains Antonio Casilli. And this is how data can do harm. Alienation and exploitation: beyond the internet task workers of northern countries, it is more often their counterparts in India, the Philippines and other developing countries with low average earnings who are sometimes paid less than one cent per click.

Legally regulating digital labor?

For now, these new forms of work are exempt from salary standards. Nevertheless, in recent years there has been an increasing number of class action suits against tech platforms to claim certain rights. Following the example of Uber drivers and Deliveroo delivery people, individuals have taken legal action in an attempt to have their commercial contracts reclassified as employment contracts. Antonio Casilli sees three possible ways to help combat job insecurity for digital workers and bring about social, economic and political recognition of digital labor.

From Uber to platform moderators, traditional labor law (meaning reclassifying workers as salaried employees) could lead to the recognition of their status. But dependent employment may not be a one-size-fits-all solution. “There are also a growing number of cooperative platforms being developed, where the users become owners of the means of production and algorithms.” Still, for Antonio Casilli, there are limitations to these advances. He sees a third possible solution. “When it comes to our data, we are not small-scale owners or small-scale entrepreneurs. We are small-scale data workers. And this personal data, which is neither private nor public, belongs to everyone and no one. Our privacy must be a collective bargaining tool. Institutions must still be invented and developed to make it into a real common asset. The internet is a new battleground,” says the researcher.

Toward taxation of the digital economy

Would this make our personal data less personal? “We all produce data. But this data is, in effect, a collective resource, which is appropriated and privatized by platforms. Instead of paying individuals for their data on a piecemeal basis, these platforms should give back the value extracted from this data to national or international authorities, through fair taxation,” explains Antonio Casilli. In May 2018, the General Data Protection Regulation (GDPR) came into effect in the European Union. Among other things, this text protects data as an attribute of personality rather than as property. Therefore, in theory, everyone can now freely consent, at any moment, to the use of their personal data and withdraw this consent just as easily.

While regulation in its current form involves a set of protective measures, setting up a tax system like the one put forward by Antonio Casilli would make it possible to establish an unconditional basic income. The very act of clicking or sharing information could give individuals a right to these royalties and allow each user to be paid for any content posted online. This income would not be linked to the tasks carried out but would recognize the value created through these contributions. In 2020, over 20 billion devices will be connected to the Internet of Things. According to some estimates, the data market could reach nearly €430 billion per year by then, which is equivalent to a third of France’s GDP. Data is clearly a commodity unlike any other.


En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work)
Antonio A. Casilli
Éditions du Seuil, 2019
400 pages
€24 (paperback) – €16.99 (e-book)

 

Original article in French written by Anne-Sophie Boutaud, for I’MTech.

 


How has digital technology changed migrants’ lives?

Over the past few decades, migrants have become increasingly connected, as have societies in both their home and host countries. New technologies allow them to maintain ties with their home countries while helping them integrate into their new ones, and these technologies also play an important role in the migration process itself. Dana Diminescu, a sociologist at Télécom ParisTech, is exploring this link between migration and digital technology and challenging the traditional image of the uprooted migrant. She explains how new uses have changed migratory processes and migrants’ lives.

 

When did the link between migration and digital technology first appear?

Dana Diminescu: The link really became clear during the migration crisis of 2015. Media coverage highlighted migrants’ use of smartphones, and the public discovered the role telephones play in the migration process. A sort of “technophoria” emerged around refugees, which led to a great number of hackathons being organized to make applications to help immigrants, with varying degrees of success. In reality, migrants were already connected well before the media hype of 2015. In 2003, I had already written an epistemological manifesto on the figure of the connected migrant, based on observations dating from the late 1990s.

In the 1990s, smartphones didn’t exist yet; how were migrants ‘connected’ at that time?

DD: My earliest observation was the use of a mobile phone by a collective of migrants living in a squat. For them, the telephone was a real revolution and an invaluable resource. They used it to develop a network and find contacts, which helped them find jobs and housing; in short, it helped them integrate into society. Two years later, those who had been living in the squat had gotten off the street, and the mobile phone played a large role in making this possible.

What has replaced this mobile phone today?

DD: Social media play a very strong role in supporting integration for all migrants, regardless of their home country or cultural capital. One of the first things they do when they get to their country of destination is to use Facebook to find contacts. WhatsApp is also widely used to develop networks. And YouTube helps them learn languages and professional skills.

Dana Diminescu has been studying the link between migrants and new technologies since the late 1990s.

Are digital tools only useful in terms of helping migrants integrate?

DD: No, that’s not all. They also have an immediate, performative effect on the migratory process itself. In other words, an individual’s action on social media can have an almost instantaneous effect on migration movements. A message posted by a migrant showing that he was able to cross the border at a certain place on the Balkan route creates movement: other migrants will adjust their journey that same day. That’s why we now talk about migration traceability rather than migration movement. Each migrant uses and leaves behind a record of his or her journey, and these are the records used in sociology to understand migrants’ choices and actions.

Does the importance of digital technology in migration activity challenge the traditional image of the migrant?

DD: For a long time, humanities research focused on the figure of the uprooted migrant. In this view, migrants are at once absent from their home country and absent from their destination country, since they find it difficult to fit in completely. New technologies have had an impact on this view, because they have made new forms of presence, despite distance, much more prominent. Today, migrants can use tools like Skype to see their family and loved ones from afar and instantly. In interviews, migrants tell me, “I don’t have anything to tell them when I go back to see them, since I’ve already told them everything on Skype.” As for presence in their destination countries, digital tools play an increasingly important role in access, whether for biometric passports or cards for work, transport and so on. The use of these different tools makes migrants’ way of life very different from what it would have been a few years ago, before such access was digitized. It is now easier for them to exercise their rights.

Does this have an effect on the notion of borders?

DD: Geographical borders don’t have the same meaning they used to. As one migrant explained in his account one day, “They looked for me on the screen, they didn’t find me, I got through.” Borders are now based on our personal data: they’re connected to our date of birth, digital identities and locations. These are the borders migrants have to get around today. That’s why smugglers confiscate their telephones, so that they can’t be traced, or why migrants don’t bring their smartphones with them, so that border police won’t be able to force them to open Facebook.

So digital technology can represent a hurdle for migrants?

DD: Since migrants are connected, they can, of course, be traced. This limiting aspect of digital technology also shows up in the uses of new technology in destination countries. Technology has increased the burden of the informal contract between those who leave and those who stay behind. Families expect migrants to be very present, available at the times the family is used to spending in their company. In interviews, migrants say that it’s a bit like a second job: they don’t want to appear to have broken away, so they have to check in. At times, this leads migrants to lie, saying that they’ve lost their mobile phone or that they don’t have internet access, to free themselves from the burden of this informal contract. In this case, digital technology is seen as a constraint, and at times it can even be highly detrimental to social well-being.

In what sort of situations is this the case?

DD: In refugee camps, we’ve observed practices that cut migrants off from social ties. In Jordan, for example, it’s impossible to send children to get food for their parents. Individuals must identify themselves with a biometric eye scanner and that’s the only way for them to receive their rations. If they can’t send their children, they can’t send their friends or neighbors either. There is a sort of destruction of the social fabric and support networks. Normal community relationships become impossible for these refugees. In a way, these technologies have given rise to new forms of discrimination.

Does this mean we must remain cautious?

DD: We must be wary of digital solutionism. We conducted a research project with Simplon on websites that provide assistance to migrants. A hundred sites were listed. We found that, for the most part, the sites were either unusable or unfinished, and when they were usable, they were rarely used. Migrants still prefer social media over dedicated digital tools. For example, they would rather learn a language with Google Translate than use a language-learning application. They know what they need to facilitate their learning and integration; it’s just that the tools developed for these purposes aren’t effective. So we have to be cautious and acknowledge that there are limitations to digital technology. What could we delegate to a machine in the realm of hospitality? How many humans are there behind training programs and personal support organizations?