Artificial Intelligence: the complex question of ethics

The development of artificial intelligence raises many societal issues. How do we define ethics in this area? Armen Khatchatourov, a philosopher at Télécom École de Management and member of the IMT chair “Values and Policies of Personal Information”, observes and carefully analyzes the proposed answers to this question. One of his main concerns is the attempt to standardize ethics using legislative frameworks.

In the frantic race for artificial intelligence driven by GAFA[1], with its increasingly efficient algorithms and ever-faster automated decisions, engineering is king, serving this highly prized form of innovation. So, does philosophy still have a role to play in a technological world that places progress at the heart of every objective? Perhaps that of a critical observer. Armen Khatchatourov, a researcher in philosophy at Télécom École de Management, describes his own approach as that of “an observer who needs to keep his distance from the general hype over anything new”. For several years he has worked on human-computer interactions and the issues of artificial intelligence (AI), examining the potentially negative effects of automation.

In particular, he analyzes the problematic issues arising from the legal frameworks established to govern AI, with a particular focus on “ethics by design”. This movement involves integrating ethical considerations into the design stage of algorithms, and of smart machines in general. Although this approach initially seems to reflect the importance that manufacturers and developers attach to ethics, according to the researcher, “this approach can paradoxically be detrimental.”

 

Ethics by design: the same limitations as privacy by design?

To illustrate his thinking, Armen Khatchatourov uses the example of a similar concept – the protection and privacy of personal information: just like ethics, this subject raises the issue of how we treat other people. “Privacy by design” appeared near the end of the 1990s, in reaction to the legal challenges of regulating digital technology. It was presented as a comprehensive analysis of how to integrate the issues of personal information protection and privacy into product development and operational processes. “The main problem is that today, privacy by design has been reduced to a legal text,” regrets the philosopher, referring to the General Data Protection Regulation (GDPR) adopted by the European Parliament. “And the reflections on ethics are heading in the same direction,” he adds.

 

There is the risk of losing our ability to think critically.

 

In his view, the main negative aspect of this type of standardized regulation implemented via a legal text is that it eliminates the stakeholders’ feeling of responsibility. “On the one hand, there is the risk that engineers and designers will be happy simply to comply with the text,” he explains. “On the other hand, consumers will no longer think about what they are doing, and will trust the labels attributed by regulators.” Behind this standardization, “there is the risk of losing our ability to think critically.” And he concludes by asking, “Do we really think about what we’re doing every day on the Web, or are we simply guided by the growing tendency toward normativeness?”

The same threat exists for ethics. The mere fact of formalizing it in a legal text would work against the reflexivity it promotes. “This would bring ethical reflection to a halt,” warns Armen Khatchatourov. He expands on this by referring to the work of artificial intelligence developers. There always comes a moment when the engineer must translate ethics into a mathematical formula to be used in an algorithm. In practical terms, this can take the form of an ethical decision based on a structured representation of knowledge (an ontology, in computer language). “But things truly become problematic if we reduce ethics to a logical problem!” the philosopher emphasizes. “For a military drone, for example, this would mean defining a threshold for civilian deaths, below which the decision to fire is acceptable. Is this what we want? There is no ontology for ethics, and we should not take that direction.”
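To see concretely what the researcher is warning against, here is a deliberately naive sketch of “ethics as a logical problem”: an engagement rule reduced to a single casualty threshold. Everything in it (names, values) is hypothetical and purely illustrative; no real system is being described.

```python
# A deliberately naive illustration of ethics reduced to a logical rule:
# the drone example above, coded as a threshold test.
# All names and values are hypothetical.

CIVILIAN_CASUALTY_THRESHOLD = 2  # the one number the ethical question collapses into

def fire_decision(expected_civilian_casualties: int) -> bool:
    """Return True if firing is 'acceptable' under the coded rule."""
    return expected_civilian_casualties <= CIVILIAN_CASUALTY_THRESHOLD

print(fire_decision(1))  # True
print(fire_decision(3))  # False
```

The code makes the philosophical problem visible: once the rule is written, all ethical deliberation has already been absorbed into the choice of one arbitrary constant.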

And military drones are not the only area involved. The development of autonomous, or driverless, cars raises many questions about how a decision should be made. Ethical reflections often pose dilemmas. The archetypal example is that of a car heading for a wall that it can only avoid by running over a group of pedestrians. Should it sacrifice its passenger’s life, or save the passenger at the expense of the pedestrians’ lives? There are many different arguments. A pragmatic thinker would focus on the number of lives. Others would want the car to save the driver no matter what. The Massachusetts Institute of Technology (MIT) has therefore developed a digital tool – the Moral Machine – which presents Internet users with many practical cases and asks them to choose. The results vary significantly from one individual to the next. This suggests that, in the case of autonomous cars, it is impossible to establish universal ethical rules.

 

Ethics is not a product

Continuing the analogy between ethics and data protection, Armen Khatchatourov raises another point, based on the reflections of Bruce Schneier, a computer security specialist, who describes computer security as a process, not a product. Consequently, it cannot be fully guaranteed by a one-off technical approach or by a legislative text, since both are only valid at a given point in time. Although updates are possible, they take time and therefore lag behind current problems. “The lesson we can learn from computer security is that we cannot trust a ready-made solution, and that we need to think in terms of processes and attitudes to be adopted. If we make the comparison, the same can be said for the ethical issues raised by AI,” the philosopher points out.

This is why it is worth thinking about how to frame processes like privacy and ethics in a context other than the purely legal one. Yet Armen Khatchatourov recognizes the need for these legal aspects: “A regulatory text is definitely not the solution for everything, but it is even more problematic if no legislative debate exists, since the debate reveals a collective awareness of the issue.” This clearly shows the complexity of a problem to which no one has yet found a solution.

 

[1] GAFA: an acronym for Google, Apple, Facebook, and Amazon.

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published (in French) in June 2016, is dedicated to artificial intelligence (AI). It offers an overview of the research taking place in this area throughout the world, and presents the vast body of research underway at Institut Mines-Télécom schools. In 27 pages, this brochure defines intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions its emerging potential, and looks at how it can be used by humans.[/box]

 


Bitcoin and the blockchain: energy hogs

By Fabrice Flipo and Michel Berne, researchers at Télécom École de Management.
Editorial originally published in
French in The Conversation France

_______________________________________________________________________________________

 

The digital world still lives under the illusion that it is intangible. Yet even as governments gathered in Paris at COP21 and pledged to reduce their carbon emissions to keep global warming below 2°C, digital technology continues to spread without the slightest concern for the environment. The current popularity of bitcoin and the blockchain provides a perfect example.

The principle of the blockchain can be summarized as follows: each transaction is recorded in thousands of accounting ledgers, each scrutinized by a different observer. Yet no mention is ever made of the energy footprint of this unprecedented ledger of transactions, or of that of the new “virtual currency” (bitcoin) it manages.

Read the blog post What is a blockchain?
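As a rough illustration of where the energy and storage costs come from, here is a minimal sketch of a hash-chained ledger replicated across several nodes. It is far simpler than Bitcoin’s actual protocol (no networking, no consensus, no proof of work); the point is only that every copy stores and re-verifies everything.

```python
import hashlib
import json

def block_hash(content: dict) -> str:
    """Hash a block's contents; each block commits to its predecessor."""
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def append_transaction(chain: list, tx: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    block = {"prev": prev, "tx": tx}
    block["hash"] = block_hash({"prev": prev, "tx": tx})
    chain.append(block)

def verify(chain: list) -> bool:
    """Re-check the whole history, as every node does independently."""
    prev = "genesis"
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "tx": b["tx"]}):
            return False
        prev = b["hash"]
    return True

# The replication cost the article points to: 5 nodes means 5 full copies
# of the ledger and 5 independent verifications of every transaction.
nodes = [[] for _ in range(5)]
for chain in nodes:
    append_transaction(chain, "Alice pays Bob 1 BTC")
print(all(verify(chain) for chain in nodes))  # True
```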

 

Electricity consumption equivalent to that of Ireland

In a study published in 2014, Karl J. O’Dwyer and David Malone showed that the consumption of the bitcoin network was likely to be approximately equivalent to the electricity consumption of a country like Ireland, i.e. an estimated 3 GW.

Imagine the consequences if this type of currency became widespread. The global money supply in circulation is estimated at $11,000 billion. If the network’s consumption scaled linearly with the monetary mass it manages, the corresponding energy consumption would exceed 4,000 GW, i.e. 8 times the electricity consumption of France and twice that of the United States. It is not without reason that a recent headline on the Novethic website proclaimed “The bitcoin, a burden for the climate”.
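The extrapolation behind that figure can be reproduced in a few lines. Note the assumptions: the article gives only the 3 GW baseline and the $11,000 billion target, so the baseline monetary mass (roughly bitcoin’s market capitalization around 2014) and the linear scaling are ours.

```python
# Back-of-the-envelope reproduction of the article's extrapolation.
# Assumption: consumption scales linearly with the monetary mass managed.

baseline_power_gw = 3.0      # O'Dwyer & Malone estimate (~Ireland)
baseline_money_usd = 8.25e9  # assumed bitcoin monetary mass, ~2014
global_money_usd = 11_000e9  # $11,000 billion, as quoted above

scaled_power_gw = baseline_power_gw * (global_money_usd / baseline_money_usd)
print(f"{scaled_power_gw:,.0f} GW")  # -> 4,000 GW, the order of magnitude quoted
```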

 

What do the numbers say?

Since every blockchain is a ledger (and therefore a file) that exists in many copies, the computing resources required to calculate, transmit and store the information increase, as does the energy footprint, even when improvements in the underlying technologies are taken into account.

The two important factors here are the length of the blockchain and the number of copies. For bitcoin, the blockchain’s length has grown very quickly: according to Quandl, it was 27 GB in early 2015 and rose to 74 GB by mid-2016.
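The aggregate storage bill follows directly from those two factors. A quick order-of-magnitude check (the node count is our assumption; estimates of full Bitcoin nodes around 2016 were in the low thousands):

```python
chain_size_gb = 74   # mid-2016 chain length, per Quandl
full_copies = 5_000  # assumed number of full nodes (order of magnitude)

total_tb = chain_size_gb * full_copies / 1_000
print(f"~{total_tb:,.0f} TB stored worldwide for one 74 GB ledger")  # ~370 TB
```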

Bitcoin, whose system is modeled on that of the former gold-standard currencies, is generated through computationally intensive operations, which become increasingly demanding over time, like an ever more depleted gold mine in which production costs keep rising.
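This “depleting goldmine” is the proof-of-work mechanism: miners brute-force a hash puzzle whose difficulty is raised over time, so each bitcoin costs more computation, and thus more energy, than the last. A toy version, with trivially low difficulty compared to real mining:

```python
import hashlib

def mine(data: str, difficulty_bits: int) -> int:
    """Brute-force a nonce until the hash falls below the target.
    Each extra bit of difficulty doubles the expected work, hence the energy."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Raising the difficulty is the "increasingly depleted goldmine" of the article:
# the nonce found roughly tracks the number of hashes attempted.
for bits in (8, 12, 16, 18):
    print(bits, "bits -> nonce", mine("block", bits))
```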

In 2015, Genesis Mining revealed in Business Insider that it was one of the most energy-consuming companies in Iceland, with electricity costs of 60 dollars per “extracted” bitcoin, despite benefiting from a low price per kWh and a favorable climate.

Finally, we can also imagine all the “smart contract” applications supported by the Internet of Things. These too will have a considerable impact on energy and the environment, considering the manufacturing requirements, the electrical supply (often autonomous, and therefore complicated and not very efficient) and disposal.

However, although the majority of connected objects will probably not support smart contracts, a very large number of connected objects is anticipated in the near future, with a total likely to reach 30 billion by 2020, according to the American consulting firm McKinsey.

Bitcoin is just one of many systems being developed without concern for their energy impact. Faced with the climate issue, their promoters act as if it did not exist, or as if alternative energy solutions were already in place.

 

An increasingly high price to pay

Yet decarbonizing the energy system is a vast issue, involving major risks. And the proposed technical solutions in this area offer no guarantees of being able to handle the massive and global increase in energy consumption, while still reducing greenhouse gas emissions.

Digital technology already accounts for approximately 15% of the national electricity consumption in France, and consumes as much energy, on the global scale, as aviation. Today, nothing suggests that there will be a decrease in the mass to be absorbed, nor is there any indication that digital technology will enable a reduction in consumption, as industrialists in this sector have confirmed (see the publication entitled La Face cachée du numérique – “The hidden face of digital technology”).

The massive decarbonization of energy faces many challenges: the reliability of the various carbon-sequestration techniques proposed; the “energy cannibalism” involved in launching renewable energies, which themselves require energy to manufacture; and technical, social and political limitations (for example, renewable energy sources require large surface areas, yet the space that could potentially be used is largely occupied). The challenges are huge.


The use of social networks by professionals

The proliferation of social networks is prompting professional users to create an account on each network. Researchers from Télécom SudParis wanted to find out how professionals use these different platforms, and whether or not the information they post is the same across all the networks. Their results, based on combined activity on Google+, Facebook and Twitter, have been available online since last June, and will be published in December 2016 in the scientific journal Social Network Analysis and Mining.


Athletes, major brands, political figures, singers and famous musicians are all increasingly active on social networks, with the aim of increasing their visibility. So much so that they have become very professional in their use of platforms such as Facebook, Twitter and Google+. Reza Farahbakhsh, a researcher at Télécom SudParis, took a closer look at the complementarity of these different social networks, studying how such users handle posting the same message on one or more of the three platforms mentioned above.

This work, carried out with Noël Crespi (Télécom SudParis) and Ángel Cuevas (Carlos III University of Madrid), showed that 25% of the messages posted on Facebook or Google+ by professional users are also posted on one of the two other social networks. On the other hand, only 3% of tweets appeared on Google+ or Facebook. This result may be explained by the fact that very active users, who publish many messages, find Twitter a more appropriate platform.

Another quantitative finding of this research is that, on average, a professional user who cross-posts the same content on different social networks will use the Facebook-Twitter combination 70% of the time, leaving Google+ aside. When it is used as a support platform for duplications, Google+ is associated with Facebook. Yet, surprisingly, more users post on all three social networks than on Google+ and Twitter alone.

 

Semantic analysis for information retrieval

To measure these values, the researchers first had to identify influential accounts present on all three platforms; 616 users were identified in this way. The team then developed software to retrieve the posts from all these accounts on Facebook, Twitter and Google+ by taking advantage of the platforms’ programming interfaces. All in all, 2 million public messages were collected in this way.

Following this operation, semantic analysis algorithms were used to identify similar content among the same user’s messages on different platforms. To avoid bias linked to users’ recurring habits, the algorithms were configured to search for identical content only within a one-week period before and after the message being studied.
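A minimal sketch of that matching step might look as follows. It is illustrative only: the team’s actual semantic-analysis algorithms are not detailed in the article, and the similarity measure and threshold below are our assumptions.

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher

WINDOW = timedelta(weeks=1)  # +/- one week, as in the study
THRESHOLD = 0.8              # similarity cutoff (assumed value)

# Hypothetical post records: (platform, timestamp, text)
posts = [
    ("facebook", datetime(2016, 6, 1, 10), "Big game tonight, join us live!"),
    ("twitter",  datetime(2016, 6, 1, 12), "Big game tonight - join us live!"),
    ("twitter",  datetime(2016, 6, 20, 9), "Big game tonight, join us live!"),
]

def cross_posts(posts):
    """Yield pairs of similar messages on different platforms within the window."""
    for i, (p1, t1, m1) in enumerate(posts):
        for p2, t2, m2 in posts[i + 1:]:
            if p1 != p2 and abs(t1 - t2) <= WINDOW:
                if SequenceMatcher(None, m1, m2).ratio() >= THRESHOLD:
                    yield (p1, m1), (p2, m2)

# Only the first two posts match: the third has identical text
# but falls outside the one-week window.
print(list(cross_posts(posts)))
```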

 

Cross-posts are more engaging for the target audience

In addition to retrieving information about message content, the researchers also measured the number of likes and shares (or retweets) for each message collected. This allowed them to measure whether cross-posting on several social networks had an impact on how fans or followers engaged with the message’s content: in other words, whether a cross-posted publication earned more likes on Twitter or shares on Facebook than one that was not cross-posted.

Since the practice of sharing information on other networks is intended to reach a wider audience, it was fairly predictable that cross-posts on Twitter and Facebook would be more engaging. The researchers observed that, on average, a post initiated on Facebook and reposted on another platform earned 39% more likes, 32% more comments and 21% more shares than a message that was not cross-posted. For a post initiated on Twitter, the advantage was even greater, with 2.47 times more likes and 2.1 times more shares.

However, the team observed a reverse trend for Google+. A post initiated on this social network and reposted on Twitter or Facebook would barely achieve half the likes earned by a message that was not cross-posted, and one third of the comments and shares. A professional Google+ user would therefore be better off not cross-posting his message on other social networks.

Since this work involves quantitative rather than sociological analysis, Reza Farahbakhsh readily acknowledges that these last results on public engagement are open to discussion. “One probable hypothesis is that a message posted on Facebook and Twitter has more content than a message that is not cross-posted, therefore resulting in a greater public response,” the researcher suggests.

 

Which platform is the primary source of publication?


50% of professional users use Facebook as the initial publication source.

On average, half of professional users use Facebook as the initial cross-publication source. 45% prefer Twitter, and only 5% use Google+ as the primary platform. However, the researchers specify that “5.3% of cross-posts are published at the same time,” revealing the use of an automatic and simultaneous publication method on at least two of the three platforms.

 

Although this work does not explain what might motivate the initial preference for a particular network, it does reveal differences in content, depending on which platform is chosen first. For instance, the researchers observed that professionals who preferred Twitter posted mostly text content, with links to sites outside social networks, and that this content did not change significantly when reposted on another platform. On the other hand, users who preferred Facebook published more audio-visual content, including links to other social platforms.

This quantitative analysis provides a better understanding of the communication strategies used by professional users. To take this research a step further, Reza Farahbakhsh and Noël Crespi would now like to concentrate on how certain events influence public reactions to the posts. This topic could provide insight and guide the choices of politicians during election campaigns, for example, or improve our understanding of how a competition like the Olympic Games may affect an athlete’s popularity.

 


Remote electronic voting – a scientific challenge

Although electronic voting machines have begun to appear at polling stations, many technological barriers still hinder the development of platforms that would enable us to vote remotely via the Internet. The scientific journal Annals of Telecommunications dedicated its July-August 2016 issue to this subject. Paul Gibson, a researcher at Télécom SudParis and guest editor of this edition, co-authored an article presenting the scientific challenges in this area.

 

In France, Germany, the United States and elsewhere, 2016 and 2017 will be important election years. During these upcoming elections, millions of electors will be called upon to make their voices heard. In this era of major digital transformation, will these be the first elections to feature remote electronic voting? Probably not, despite support for this innovation around the world – particularly as a way to facilitate the participation of persons with reduced mobility.

Yet this service presents many scientific challenges. The scientific journal Annals of Telecommunications dedicated its July-August 2016 issue to the topic. Paul Gibson, a computer science researcher at Télécom SudParis, co-authored an introductory article providing an overview of the latest developments in remote electronic voting, or e-voting, and presenting the scientific obstacles researchers have yet to overcome.

To be clear, this refers to voting from home via a platform that can be accessed using everyday digital tools (computers, tablets and smartphones): although electronic voting machines already exist, they do not dispense electors from being physically present at polling stations.

The main problem stems from the balance that will have to be struck between parameters that are not easily reconciled. An effective e-voting system must enable electors to log onto the online service securely, guarantee their anonymity, and enable them to make sure their vote has been recorded and correctly counted. In the event of an error, the system must also enable electors to correct the vote without revealing their identity. From a technical point of view, researchers themselves remain undecided about the feasibility of this type of solution.
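One classical ingredient for “verify without revealing identity” is a vote commitment: the voter publishes only a hash of their ballot plus a secret, then checks later that this receipt appears on the public list. The sketch below illustrates that single idea; real end-to-end verifiable voting schemes are far more elaborate, and nothing here is the mechanism discussed by the authors.

```python
import hashlib
import secrets

def commit(ballot: str) -> tuple[str, str]:
    """Return (receipt, secret). The receipt reveals nothing about the ballot;
    the secret lets the voter verify it later."""
    secret = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{ballot}|{secret}".encode()).hexdigest()
    return receipt, secret

# Voter side: commit to the ballot, keep the secret, submit the receipt.
receipt, secret = commit("candidate A")

# Public bulletin board: only opaque receipts are published.
bulletin_board = {receipt}

# Later, the voter recomputes the hash and checks it was recorded,
# without ever linking their name to the ballot.
assert hashlib.sha256(f"candidate A|{secret}".encode()).hexdigest() in bulletin_board
print("vote recorded and verifiable, identity never revealed")
```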

[divider style=”solid” top=”20″ bottom=”20″]

 

Attitudes of certain European countries toward remote e-voting


Yellow: considering e-voting; Red: have rejected e-voting; Blue: experimenting with e-voting on a small scale (e.g. in France for expatriates); Green: have adopted e-voting; Purple: citizen demands to implement remote voting

Source of data: A review of E-voting: the past, present and future, July 2016
[divider style=”solid” top=”20″ bottom=”20″]

Security and trust remain crucial in e-voting

Even if this type of system becomes a reality, scientists stress the problems posed by a vote taking place outside of government-run venues. “If voters cast their ballots in unmonitored environments, it is reasonable to ask ‘what would prevent individuals with malicious intent from being present and forcing voters to do as they say?’” ask the researchers.

Technological responses exist, in the form of end-to-end verifiable systems capable of preventing coerced voting. They can be combined with cutting-edge cryptographic techniques to ensure the secrecy of the vote. Unfortunately, these security measures make the system more complex for voters to use. This makes the voting process more difficult and could prove counterproductive, since the primary goal of e-voting is to reduce abstention and encourage more citizens to participate in elections.

In addition, such encrypted, end-to-end verifiable solutions do not in themselves guarantee secure authentication. That requires governments to distribute electronic identification keys, and implies that electors trust their government’s system, which is not always the case. A decentralized system, however, would open the door to viruses and other malicious software.

 

Electronic voting – a societal issue

Electors and organizers must also trust the network used to transmit the data. Although the Internet seems like the most obvious choice for an operation of this scale, it is also the most dangerous. “The use of a public network like the Internet makes it very difficult to protect a system against denial-of-service attacks,” warn Paul Gibson and his co-authors.

The idea of trust reveals underlying social concerns relating to new voting techniques. The long road ahead of scientists is not only paved with technological constraints. For example, there is not yet any established scientific consensus on the real consequences of e-voting on increasing civic participation and reducing abstentions.

Similarly, although certain economic studies suggest that this type of solution could reduce election costs, such savings must still be weighed against the costs of building, using and maintaining a remote e-voting system. The issue therefore raises questions not only in the information and communication sciences, but also in the humanities and social sciences. There is no doubt that this subject will continue to fascinate researchers for some time to come.

 


Digital commons: individual interests to serve a community

In a world where the economy is increasingly service-based, digital commons are key to developing the collaborative economy. At Télécom Bretagne, Nicolas Jullien, an economics researcher, is studying online epistemic communities: creation communities that provide platforms for storing knowledge. He has demonstrated that selfish behaviors may explain the success of collective projects such as Wikipedia.

From material commons to digital commons

Cities of commons, common areas, commons assemblies: commons are shifting from theory to practice, countering the overexploitation of a common resource for the benefit of only a few, and offering optimistic solutions across a wide range of sociological, economic, and ecological fields.

“Work carried out since the 1960s by Elinor Ostrom, winner of the 2009 Bank of Sweden’s ‘Nobel’ prize for economics, has led us to abandon the idea of a single choice between privatization of common goods and management by a public power,” says Nicolas Jullien. “She studied a third institutional alternative, where communities collectively manage their commons.” This work first focused on natural or material commons, such as the collective management of fisheries, then, with the advent of the Internet, on knowledge commons. The threat of enclosures (expropriating commons participants from their user rights) has emerged once again around the handling of copyright and new digital rights.

Nicolas studies online collective groups whose goal is to produce formalized knowledge, not to be confused with forums, which are communities of practice. “A digital commons is a set of knowledge managed or co-produced by individuals,” explains the researcher, citing Wikipedia, Free/Libre and Open Source Software (FLOSS), open hardware, etc. In order for a material commons to function, the number of individuals involved must be limited so that members can organize the commons and choose their rules: entry barriers are thus required. “It’s different with digital commons, since apparently there aren’t any entry barriers and everyone can contribute freely,” remarks the researcher, who raises the following questions: why do individuals agree to take collective action for a digital commons when they are not obliged to do so? And how can this collective action be organized?

The digital world does not fundamentally change how people function, nor does it change voluntary commitment, but it does produce a mass effect and a ripple effect, which act as facilitators. It allows more specialized people to find each other, using coordinated information systems managed through Internet bots and artificial intelligence. Finally, because digital goods are inexpensive to produce and make available, the fact that some people take advantage of the commons without paying matters less.

 

Selfishness as a driving force for collective action

Are new theories required to understand digital commons? Not according to the researcher, who states, “Technology is certainly evolving, but human beings still work the same way. Existing theories explain people’s involvement in digital commons rather well, along with the way participants are structured around these commons.”

Among these is Marwell and Oliver’s 1993 theory, which established that people weigh costs and opportunities when making a commitment. Collective action is then a gathering of rather selfish individual interests, with varying levels of involvement. A common denominator is also required to remind people of the rules at a higher level: entrance filters, coordination systems, people who devote themselves to enforcing the rules, and robots that patrol. “If only selfish people were involved, it wouldn’t work,” says Nicolas. Even though interest is purely selfish at the outset (“I’m doing this because I like it, I need to do this, it’s an intellectual challenge, it’s for the good of the world”), individuals soon become interested in adding content to the platforms and following their rules, “and this is the key to how this system works, since the goal is indeed the production of knowledge.”

How selfish are the participants in these communities? The researcher and his team, in collaboration with the Université de La Rochelle and the Wikimédia France association, studied the social attitudes of Wikipedians, including non-contributing users, whose role is often underestimated. Some 13,000 Wikipedians played the “dictator game”: you receive a sum of €10 and must share it with another person; how much of it do you keep? Following a precise protocol, the game was used to find out whether the community produces prosocial behavior, or whether individuals already exhibited prosocial preferences that explain their behavior. Two thirds of the participants replied 50/50, well above the usual 40% of people who reply this way when playing the game. Wikipedians thus share this social norm of sharing. Better still, whereas earlier studies indicated that Wikipedia contributors had higher-than-average prosocial attitudes for their age, the study did not reveal differences between contributors and non-contributors. “We observed that visiting Wikipedia makes people more likely to demonstrate prosocial (not necessarily altruistic) behaviors.” In other words, compliance with a social norm increases as the sense of a collective develops.

 

Economy and digital commons

These studies, at the crossroads between industrial economics and organizational management, provide engineering programs with food for thought about how organizations manage digital change. The researcher asks a third question: how do digital commons projects impact the economy? First of all, there is the emergence of what the economist Paul Romer referred to in 1993 as “industrial public goods”, around which participants seek to position themselves. “Ideally, I would like to create something like OpenStreetMap without a competitor doing the same thing before I do. So that’s what leads me to work in an open standard,” explains Nicolas. And since there cannot be a multitude of platforms fulfilling the same need, there are increasing returns to adoption: as time goes by, more people want to contribute to a given platform rather than another, and they accept its rules even if these do not exactly correspond to what contributors would like.

“What we noticed with FLOSS, we’re noticing with Wikipedia too,” remarks the researcher. Individuals are paid to make sure the references are kept up to date. A whole activity develops around editing, quantifying information, monitoring, and integrating the content produced into services built on top. This verification momentum provided by the community is what creates the business. Without it, “if the commons stops evolving,” there is no longer any reason to participate.

“Many other questions still remain,” continues the researcher, “within national and European research on innovative societies and industrial renewal.” These questions have been explored in the past by the ANR CCCP-Prosodie project and are currently being studied by CAPACITY, a joint ANR project with the Fing. How are professions (librarians, for example) changing in response to daily contact with the digital commons? What is learned in these commons, “which are like game guilds”? How are the practical skills developed there related to academic skills? How valuable are they on a résumé? Do companies that hire commons developers buy access to the community? For Nicolas Jullien, all these open questions would deserve a chair devoted to the digital commons.

Read the blog post The sharing economy: simplifying exchanges? Not necessarily…

 

An associate research professor in economics at Télécom Bretagne, Nicolas Jullien has been working on digital commons since his thesis on free software in 2001. He coordinated the Breton M@rsouin cluster for measuring and studying digital uses until 2008, and heads the Étic research group (evaluation of ICT measures) as well as the Master’s in ICT Economics co-accredited with the Université Rennes 1. He was a visiting professor at the Syracuse iSchool (in New York State) in 2011, where he led research on online production communities, and has held an accreditation to direct research (HDR) since 2013. Guided by a curious mind more than any particular taste for developing theories, this engineer, a fan of multiview approaches to understanding complex phenomena, will join the Board of Directors of LEGO, a research laboratory in management economics for western France (UBO/UBS/Télécom Bretagne), in January 2017.


The sharing economy: simplifying exchanges? Not necessarily…

The groundbreaking nature of the sharing economy raises hope as well as concern among established economic players. It is intriguing and exciting. Yet what lies behind this often poorly defined concept? Godefroy Dang Nguyen, a researcher in economics at Télécom Bretagne, gives us an insight into this phenomenon. He co-authored a MOOC on the subject that will be available online next September.

 

To a certain extent, the sharing, or collaborative, economy is a little like quantum physics: everyone has heard of it, but few really know what it is. First and foremost, it has a dual nature, promoting sharing and solidarity, on the one hand, and facilitating profit-making and financial investment, on the other. This ambiguity is not surprising, since the concept behind the sharing economy is far from straightforward. When we asked Godefroy Dang Nguyen, an economist at Télécom Bretagne, to define it, his reaction said it all: a long pause and an amused sigh, followed by… “Not an easy question.” What makes this concept of the collaborative economy so complex is that it takes on many different forms, and cannot be presented as a uniform set of practices.

 

Wikipedia and open innovation: two methods of collaborative production

First of all, let’s look at collaborative production. “In this case, the big question is ‘who does it benefit’?” says Godefroy Dang Nguyen. This question reveals two different situations. The first concerns production carried out by many operators on behalf of one stakeholder, generally private. “Each individual contributes, at their own level, to building something for a company, for example. This is what happens in what we commonly refer to as open innovation,” explains the researcher. The second situation relates to collaborative production that benefits the community: individuals create for themselves, first and foremost. The classic example is Wikipedia.

Although the second production method seems more compatible with the sharing concept, it does have some disadvantages, such as the “free rider” phenomenon. “This describes individuals who use the goods produced by the community, but do not personally participate in the production,” the economist explains. To take the Wikipedia example, most users are free riders: readers, but not writers. Though this phenomenon has little impact on the online encyclopedia’s sustainability, the same is not true for most other community services, whose production depends on maintaining a balance with consumption.

 

Collaborative consumption: with or without an intermediary?

The free rider can indeed jeopardize a self-organized structure without an intermediary. In this peer-to-peer model, the participants do not set any profit targets. Therefore, the consumption of goods is not sustainable unless everyone agrees to step into the producer’s shoes from time to time, and contribute to the community, thus ensuring its survival. A rigorous set of shared organizational values and principles must therefore be implemented to enable the project to last. Technology could also help to reinforce sharing communities, with the use of blockchains, for example.

Yet these consumption methods are still not as well known as the systems requiring an intermediary, such as Uber, Airbnb and Blablacar. These intermediaries organize the exchanges, and in this model, the collaborative peer-to-peer situation seen in the first example becomes commercial. “When we observe what’s happening on the ground, we see that what is being developed is primarily a commercial peer-to-peer situation,” explains Godefroy Dang Nguyen. Does this mean that the collaborative peer-to-peer model cannot be organized? “No,” the economist replies, “but it is very complicated to organize exchanges around any model other than the market system. In general, this ends up leading to the re-creation of an economic system. Some people really believe in this, like Michel Bauwens, who champions this alternative organization of production and trade through the collaborative method.”

 

La Ruche qui dit Oui!

La Ruche qui dit Oui! is an intermediary that offers farmers and producers a digital platform for local distribution networks. Credits: La Ruche qui dit Oui!

 

A new role: the active consumer

What makes the organizational challenge even greater is that the sharing economy is based on a variable that is very hard to understand: the human being. The individual, referred to in this context as the active consumer, plays a dual role. Blablacar is a very good example of this. The service’s users are both participants, by offering the use of their cars to other individuals, and consumers, who can also benefit from offers proposed by other individuals: if their car breaks down, for example, or if they don’t want to use it.

Yet it is hard to understand the active consumer’s practices. “The big question is, what is motivating the consumer?” asks Godefroy Dang Nguyen. “There is an aspect involving personal quests for savings or for profits to be made, as well as an altruistic aspect, and sometimes a desire for recognition from peers.” And all of these aspects depend on the personality of each individual, as each person takes ownership of the services in different ways.

There’s no magic formula… But some contexts are more favorable than others.

 

Among the masses… The ideal model?

Considering all the differentiating factors in the practices of the sharing economy, is there one model that is more viable than another? Not really, according to Godefroy Dang Nguyen. The researcher believes “there’s no magic formula: there are always risk factors, luck and talent. But some contexts are more favorable than others.”

The success experienced by Uber, Airbnb and Blablacar is not down to chance alone. “These stakeholders also have real technical expertise, particularly in the matchmaking algorithms,” the economist adds. Despite the community aspect, these companies are operating in a very hostile environment. Not only is there tough competition in a given sector, with the need to stand out, but they must also make their mark in an environment where new mobile applications and platforms could potentially be launched for any activity (boat exchanges, group pet-walking, etc.). To succeed, the service must meet a real need and find a target audience ready to commit to it.

The sharing economy? Nothing new!

Despite these success factors, more unusual methods also exist, with just as much success — proving there is no ideal model. The leboncoin.fr platform is an excellent example. “It’s a fairly unusual exception: the site does not offer any guarantees, nor is it particularly user-friendly, and yet it is widely used,” observes Godefroy Dang Nguyen. The researcher attributes this to the fact that “leboncoin.fr is more a classified ad site than a true service platform,” which reminds us that digital practices are generally an extension of practices that existed before the Internet.

After all, the sharing economy is a fairly old practice, based on the idea of “either mutually exchanging services, or mutually sharing tools,” he summarizes. In short, such sharing has long been at the heart of the social life of local communities. “The reason we hear about it a lot today is that the Internet has multiplied the opportunities offered to individuals,” he adds. This change in scale has led to new intermediaries, who are in turn much bigger. And behind them, a multitude of players are lining up to compete with them.

Read the blog post Digital commons: individual interests to serve a community

 

[box type=”shadow” align=”” class=”” width=””]

Discover the MOOC on “Understanding the sharing economy”

The “Understanding the sharing economy” MOOC was developed by Télécom Bretagne, Télécom École de Management and Télécom Saint-Étienne, with the MAIF. It addresses the topics of active consumers, platforms, social changes, and the risks of the collaborative economy.

In addition to the teaching staff, consisting of Godefroy Dang Nguyen, Christine Balagué and Jean Pouly, several experts participated in this MOOC: Anne-Sophie Novel, Philippe Lemoine, Valérie Peugeot, Antonin Léonard and Frédéric Mazzella.

 

[/box]


IMT and TUM create the German-French Academy for Industry of the Future

In the framework of the partnership between the French Alliance for Industry of the Future and the German platform Industrie 4.0, Institut Mines-Télécom (IMT) and Technische Universität München (TUM) have presented a plan to create a German-French Academy for Industry of the Future. The project was officially announced on October 27 by the French Minister for the Economy, Industry and Digital Affairs, Emmanuel Macron, as part of the conclusions of the Franco-German conference on digital transformation.

In his summing-up of the Franco-German conference on “accelerating the digital transformation of our economies”, Emmanuel Macron spoke on behalf of both himself and his German counterpart as he announced the creation of the German-French Academy for Industry of the Future, led by Institut Mines-Télécom and TUM. In their concluding speeches, French president François Hollande and German chancellor Angela Merkel both expressed their satisfaction with the foundation of this academy and their aspirations for its future.

Ambitions and objectives of the Academy

The academy has a threefold objective in terms of research and training for industry of the future:

  • Form a cutting-edge research platform serving the fields of digital technology for industry, industrial organization and logistics, and human interfaces.
  • Dovetail the strengths of the partners in order to develop new training programs, create contents (MOOCs) and step up the number of student exchanges.
  • Set up innovative R&D projects in the framework of industrial partnerships working on flagship themes such as automation, flexibility, big data, the internet of things and security, but also logistics and transportation, organization and management, human-robot cooperation and intelligent agents.

More broadly speaking, the Academy will have the remit of conducting forward-thinking discussions and research on the place of humans in the digital and industrial transitions, and of overseeing the emergence of new paradigms for the industry of the future.

A federating initiative

Founded and led by Institut Mines-Télécom and Technische Universität München, both of whom are deeply involved in themes with strong links to industry of the future and corporate partnerships, the Academy will ultimately have a federating role with respect to partners of the alliance, such as Arts et Métiers ParisTech, other French academic partners, German universities of excellence, and the Fraunhofer Institutes.

Responsible lighting: the secrets to a good eco-innovation strategy

On February 11, 2015, an open workshop will be held in Brussels to present the results of the European cycLED project. This research program has supported four SMEs in the lighting sector in their eco-innovation projects, aimed at designing LEDs that are more efficient from both an economic and an ecological point of view. Cédric Gossart, a researcher at Télécom École de Management, has studied the barriers that hinder eco-innovation in the LED industry, and the ways to overcome them.

 

“10 years from now, we will only use LEDs. They are beginning to replace all lighting technology.” The European cycLED project (Cycling resources embedded in systems containing Light Emitting Diodes) was therefore aimed at assessing the opportunities for creating new products and services based on LED technology. Funded with €4 million under the FP7 program, it brings together 13 European organizations and is led by Fraunhofer IZM. The project’s original approach involved supporting four SMEs in the lighting industry (Braun Lighting, ETAP, ONA and Riva) and helping them eco-design more environmentally friendly and innovative LEDs adapted to their needs.

 

Reducing environmental impacts while creating value

Braun Lighting Solutions needed small urban lighting modules that require little maintenance and are easy to repair. ETAP wanted to develop an LED with a long service life that could withstand extreme conditions: for example, corrosion due to exhaust gases in parking lots. ONA wanted a product that would be almost completely recyclable, with components that could be reused. Finally, Riva developed LEDs for warehouse lighting along with a new business model: selling a lighting service rather than simply selling lighting products. “The environmental benefit,” Cédric Gossart explains, “is that this encourages the company to make its lamps last as long as possible. It’s a way of combating obsolescence.”

Although they pollute less than older light bulbs, LEDs are still not perfect. They contain rare and dangerous metals that are difficult to recycle. “Currently, if you want to recycle the indium and gallium in LEDs, you have to choose to recover one or the other. One of the partners, Umicore, is working on designing a way to separate them. At the start of the project, we didn’t even know if this was possible,” explains Cédric Gossart. This would both reduce the environmental impact and reduce the risk of shortages of these raw materials.

Social science researchers helped the SMEs ensure their innovations would be viable and develop true innovation strategies. Three European research institutes participated in the project. The Ecodesign Centre in Wales (United Kingdom) drafted recommendations for managing rare resources throughout the LED product life cycle. Sirris, the Belgian collective center for the technology industry, worked on business models applied to eco-innovation. Finally, Cédric Gossart of Télécom École de Management worked with his team (KIND) to analyze the obstacles that hinder the development of eco-innovation in Europe, and sought solutions to overcome them.

 

Overcoming the barriers to eco-innovation

“Eco-innovation allows new markets to be created and improves a company’s image, while also motivating employees and attracting talent from outside the company, because it meets a social need and reduces the environmental impacts.” Yet, despite these advantages, many barriers hinder companies that would like to adopt this approach. All in all, Cédric Gossart and his team listed and classified 144 obstacles, which are not specific to the lighting industry. “We then asked the four SMEs to assess them. 60% were deemed irrelevant.” The others were thoroughly analyzed, and the consortium then worked to provide solutions.

The main obstacle for companies is technological. It concerns the choice of the “driver”: the equipment that powers and controls the LED. “Although an LED can last over 10 years*, the driver is generally only guaranteed for three to five years, and can fail even sooner. The drivers’ fragility is one of the reasons for the rapid decline in an LED’s performance, and it contributed to the poor reputation of LEDs when they were first rolled out.” Certain SMEs have therefore decided to produce their own drivers in-house, in order to obtain high-performance LEDs. Others have chosen to train their staff to identify a good driver. “We sought solutions to help the SMEs with the problems they could not solve on their own.” The cycLED consortium therefore enlisted the help of the Lighting Europe association to implement procedures for verifying the certification of lighting products. As a result, on January 7, 2015, the association called for reinforced pan-European cooperation and improved market surveillance.

This analysis also revealed new barriers hindering eco-innovation. “We realized that one of the tools meant to support innovation – the patent – could in fact hinder it. LEDs are complex technological objects, and designing them requires integrating several fields of knowledge, leading to inventions that are patented by competing companies.” It is therefore impossible to design a more innovative LED without getting involved in intellectual property disputes. To overcome this barrier, Cédric Gossart favors “a more open knowledge protection system.”

 

A workshop on understanding how to eco-innovate

Today, the cycLED project is entering a new phase. On February 11, 2015, a workshop will be held in Brussels, open to all those involved in the lighting industry as well as any companies interested in eco-innovation. The SMEs will present the results of the project – the LEDs they eco-designed – and will explain how they got started with this approach. “If it weren’t for this European project, these four SMEs would probably not have adopted this eco-innovation approach. Now they all intend to do more.” The idea is for these four experiments, and the tools developed by the researchers, to help other companies, including those from other sectors. This is the case for the obstacle analysis carried out by Cédric Gossart: “Because the project was aimed at helping the entire European lighting industry to adopt an eco-innovation approach, we are now expanding this eco-innovation obstacle analysis to include other companies in the sector via an online questionnaire.” With the secret hope of one day witnessing the creation of the ultimate eco-designed LED: a zero-impact LED that is completely recyclable, and designed according to the cradle-to-cradle method…

* Or 30,000 hours: at 10 hours per day, every day, this equals just over 8 years, and more if the LED is used less and its heat is properly dissipated.