
Debate: carbon tax, an optical illusion

Fabrice Flipo, Télécom École de Management – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he climate is warming and changing: we must act. On the French version of the website The Conversation, climate change specialist Christian de Perthuis recently applauded the introduction of the carbon tax in France in 2014 and its gradual increase.

Referring to the “Quinet value” (around 90 euros per tonne of CO₂), the author suggests that reaching this price would “guarantee” that greenhouse gas emissions would fall by a factor of four. When they accumulate in the atmosphere, these emissions disrupt the climate’s balance.

France would therefore appear to be on the right path, seemingly part of the “narrow circle” of model students. Sweden, with its carbon tax of 110 euros, is cited as an example. Inequalities in access to energy are said to be offset by the energy voucher. There is only one hitch: the carbon tax is very unpopular.

Dependence on fossil fuels

This is an appealing example, but in fact it is deceptive.

First, we must remember that Sweden depends less on fossil fuels than France: they account for approximately 30% of its primary energy balance, against 50% in France. And France is much more dependent than these figures suggest.

We must keep in mind that “primary energy” refers to the energy produced within a country, to which imported energy must be added. It differs from “final energy”, the energy billed to the consumer. The losses hide between these two categories. For nuclear power, which is central to the French energy mix, the gap between primary and final energy is around 70%.

In light of these figures, we understand why it is easier for Sweden to combat greenhouse gas emissions.

Moreover, the transition Sweden has achieved over the past decades did not rely solely on the carbon tax. While it is true that the country went from 70% fossil fuels in the 1970s to approximately 30% of its primary energy balance, this was due to a set of combined measures, including grants and the development of local supply chains, but also to a serious economic crisis in the 1990s.

This crisis led to a devaluation, which considerably increased the price of fossil fuels and brought about significant and lasting changes in local market conditions. Finally, Sweden developed electricity production based primarily on hydropower and nuclear energy. Its “decarbonization” was also driven by the strong development of district heating networks and significant incentive policies for renewable energies.

The tax must therefore be understood as part of a clear and comprehensive strategy, based on strong political support, which is not the case in France. On June 10, 2016, the five main political parties in Sweden concluded an agreement on the country’s energy policy.

One of its specific objectives is that Sweden should no longer emit greenhouse gases by 2045, and that national electricity production should rely entirely on renewable energies as early as 2040. Measured in primary energy, renewables already account for 36% of Sweden’s balance: much more than in France, which has stagnated in this area for the past 30 years (13 Mtoe out of a balance of 170 Mtoe in 1970, 24 out of 260 Mtoe in 2016, i.e. from roughly 8% to 9%).

There is nothing to show that the tax played the role attributed to it by Christian de Perthuis, quite the contrary. We can also note that energy policy is a topic that is regularly absent from French public debate.

Outsourced emissions

Yet in Sweden, things are not as rosy (or green!) as they seem. A significant part of the country’s economy is now devoted to the tertiary (services) sector, and the share of the industrial sector has declined, as in France. Services account for 72% of GDP and 80% of the labor force. These figures are comparable to those in France (80% and 76% respectively).

This means that Stockholm increasingly buys the goods consumed on its territory, such as electronic products, from abroad. Sweden therefore outsources the greenhouse gas emissions linked to manufacturing these products. In 2008, WWF estimated that 17% should be added to Sweden’s balance to obtain the net total of its emissions. This is still better than France, which must add 40%.

For all its talk, France is not a leader in climate issues.

Paris and much of the French media prefer to criticize German coal for its harmful emissions, even though Germany has reduced these emissions by 25% since 1990. While many commentators focus on Germany’s failures (perhaps to better excuse our own?), we must remember that German emissions dropped from 1,041 MtCO₂ in 2000 to 901 in 2015.

Germany, with its less service-based economy, draws its fuels from its own soil; it therefore does not outsource its emissions. Although German emissions stopped declining in 2009, the country’s decarbonization policy remains unchanged; it has simply added the constraint of phasing out nuclear power. And it is doubtful that Germany will succeed in doing what also appears impossible in France (and elsewhere): reconciling infinite growth with respect for the planet.

French companies have bad report cards

Not all the countries in the world can keep passing their emissions on to their neighbors. The decoupling of economic growth from greenhouse gas emissions mentioned by Christian de Perthuis in his article therefore looks very much like an illusion.

In reality, France is the world’s 7th largest contributor to climate change. This is explained by its cumulative emissions over a long period, keeping in mind that what causes global warming is not annual greenhouse gas emissions, but their gradual accumulation over the past 150 years.

The warming is a function of the concentration of greenhouse gases in the atmosphere, and thus of the overall carbon stock there. Although France now emits relatively little in comparison, it has emitted large amounts in the past. We would expect France to set an example, as it committed to do under the Kyoto Protocol.

We must emphasize that the policies of French multinationals are very far from respecting the recent commitments of the Paris Agreement to combat climate change: these policies lead to a rise of 5.5 °C. Finally, Sweden and France are taking risks in committing to nuclear energy, despite the statistical certainty of an accident revealed by the revised calculation methods following the Fukushima disaster.

A political issue

Distributing a voucher for 150 or 200 euros to help the poorest individuals cope with their dependency on fossil fuels is indecent in a context of increased wealth among the wealthiest individuals and the stagnation of low wages in developed countries. The most affluent will continue to consume more and more, thus emitting more and more greenhouse gases.

Finally, there is the question of how Christian de Perthuis’ article can define a “good policy” based solely on economic analysis, without including a multi-disciplinary perspective and a public consultation process.

For a few decades now, a number of economists have stubbornly continued to propose this unpopular carbon tax, which is profoundly unequal since, like VAT, it affects all budgets indiscriminately.

The wealthiest individuals and high value-added activities, which “drive” growth, will pay the tax without even noticing and will continue to feed the machine, consuming more and more; while those with the lowest incomes, or those for whom fuel represents a significant portion of their budget, will be hit hard. None of this makes any political sense.

In public budgets, priority continues to be given to roads and cars (+16% over 1990-2014, compared with -14% for rail). And yet we want to penalize those who use them? Do we really want to save the climate?

The right approach lies in popular mobilization and creative energies; this is a political problem, not only an economic one. By ignoring this, we avoid putting the real obstacles on the table, such as market balances between major operators (who want to keep their shares, arguing that they need time to change) and innovators that are struggling to make a name for themselves.

 

Fabrice Flipo, Professor of social and political philosophy, epistemology and the history of science and technology, Télécom École de Management – Institut Mines-Télécom

The original version of this article was published on The Conversation.


Can digital social innovations tackle big challenges?

Müge Ozman, Télécom École de Management – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]D[/dropcap]igital social innovations (DSI) are booming in Europe, empowering people to solve problems in areas as diverse as social inclusion, health, democracy, education, migration and sustainability. Examples include civic tech, neighbourhood-regeneration platforms, collaborative map-making, civic crowdfunding, peer-to-peer education and online time banks. A wide range of organisations support DSIs, through offering consultancy services, network access, funding, resources and skills.

The UK-based NESTA is one of the central think tanks in the field, as well as the coordinator of the EU-funded project DSI4EU. At the European level, different schemes exist to support social innovations and also DSIs, such as the Social Innovation Competition, whose 6th round took place in Paris on March 20 this year. Many events, festivals and conferences are also being organised, such as the Social Good Week in Paris or the Ouishare Fest, which was born in France and is now an international event.

While significant time, effort and resources are devoted to these activities, there are obstacles to their development and to their efficacy in tackling the big challenges of our times, and these need to be addressed.

1. Questioning openness

Many DSIs emphasise participation and transparency, but the use of open-source software remains limited, at least in France. The openness of a platform is an important indicator of its capacity to encourage participation, since it decentralises power and enables others to access, replicate and build upon the source code. Proprietary software, on the other hand, raises questions about the extent to which it can be manipulated by the innovator. Valentin Chaput, the editor of the site Open Source Politics, states: “When we do not master its code, it is the authors of this code who control us”.

2. What happens to user data?

Social entrepreneurs often struggle to build sustainable business models that will ensure their autonomy and independence. There are different business models through which DSIs generate income. One of these is the commercialisation of user data. Here, the main problem is not commercialisation per se (although preventing it would be preferable), but how the underlying business model is communicated to users.

To find this information, users need to read the platform’s “terms of use” in detail, and these are rarely presented in an attractive way. As a consequence, users easily skip them, out of ignorance or lack of interest. Platforms should be more transparent about their business models and communicate them to their audience in a user-friendly way. This would also reduce the hesitation of users who hold back for lack of trust.

3. Systemic change or short-term relief?

There is also a deeper concern about the sharing economy. Evgeny Morozov, author of The Net Delusion: The Dark Side of Internet Freedom, wrote: “it’s like handing everybody earplugs to deal with intolerable street noise instead of doing something about the noise itself”. Sometimes the same holds for DSI. How can innovations that can bring systemic change be distinguished from system-enhancing ones? It is not meaningful to sort platforms into systemic ones and the rest, as there are many shades of grey between pure black and pure white.

But there is some scope for thinking deeper, by observing the activities of platforms. For example, Humaid is a crowdfunding platform on which people with disabilities or their caregivers can raise money to purchase necessary assistive technologies. In so doing, Humaid in a sense reproduces exclusionary practices in society by treating people with disabilities as objects of charity, rather than as individuals with rights and freedoms, as outlined in the UN Convention on the Rights of Persons with Disabilities.

Another example is from the sharing economy. Karos, a car-sharing platform launched a year ago, offers a “ladies only” car-sharing option. In doing so, doesn’t Karos reproduce the very practices that give rise to inequalities in the first place? Rather than using information and communication technologies (ICTs) to alleviate inequalities embedded in societies, such initiatives reinforce existing norms and exclusionary barriers. Addressing big challenges requires awareness-raising and educational activities around rights and freedoms.


4. The struggle of traditional civil-society organisations

Established civil-society organisations that have field-specific experience with targeted populations and that are involved in social movements and awareness-raising activities can play an important role in systemic change, but most of them find themselves in a vulnerable position when faced with digital platforms. For example, some face competition from start-ups that build resources and finances from the digital sector. The digital competences of the new economy and the field-specific experience of traditional associations should find spaces in which to build synergies. But there are barriers to successfully building such spaces, sometimes due to the polarised ideological worlds of non-profits and organisations of the digital economy.

5. Under-engagement of users

There is also the important issue of attracting users to these platforms. Most DSI platforms rely on civic engagement, whether volunteering or providing skills, information, services, goods or opinions. At the same time, the online world is likely to reflect the economic, social and cultural relationships of the offline world; a research paper by Alexander Van Deursen and Jan Van Dijk of the University of Twente sheds light on this question.

This suggests that DSI users may be those who are already active in civic life offline, as indicated by the research of Marta Cantijoch, Silvia Galandini and Rachel Gibson. If this is the case, DSIs can strengthen existing divides instead of alleviating them. To develop effective and informed policies, more research is needed on the nature of users and their engagement patterns across platforms, but there are obstacles in the way; the most important is the lack of data.

6. Lack of data in a world of ‘big data’

The lack of data on users and on the ecosystem is a serious barrier to carrying out research on DSIs and their potential to address big challenges. Platforms do not share data for privacy and confidentiality reasons. Or, as in the case of France, regulations about data collection can prevent research on the users of DSI. At the national and EU levels, initiatives to collect and standardise data are much needed, so that researchers can access essential data about the use of and participation in DSI. This is also important for research on the specific capabilities of different EU countries in DSI, and for developing means to transfer good practices and exploit potential synergies.

7. Fascination with (rapid) impact measurement

For investors, funders and social entrepreneurs, social impact measurement is essential. But it can be a problematic, complex and difficult issue. What’s more, it is important to remember a quote from William Bruce Cameron: “Not everything that can be counted counts. Not everything that counts can be counted”. In addition, time pressures sometimes lead to vague and ineffective ways of measuring impact that lack a deep understanding of the returns. The amount of funds raised, growth in the number of participants, the number of supported projects and so on are often used as indicators of success, but such statistics are problematic.

For instance, participants in a platform are often “dormant”, meaning they register but never use the platform again. The way “social impact” is understood by policy makers and investors needs to change, to distinguish what needs to be measured from what does not; and if measurement is a must, the focus should be on the tangible changes that the platform brings. For example, which regulations have changed as a result of platform activities? Which medical research results have been obtained by patient-doctor platforms? Which civic projects have been realised, and what are their potential benefits? Social indicators should focus on a deeper understanding of how the actual social practices that give rise to social problems are tackled, and what the role of platforms is in this process.

8. Innovation (un)readiness of population

While most of the policy focus is on supporting the generation of innovations, the innovation readiness of the user population is not given enough attention. Investments in developing Internet skills, including operational, formal and strategic skills, are of crucial importance. The research of Alexander Van Deursen and Jan Van Dijk provides insight on this question.

In addition, potential users can be unaware, uninterested or unconnected even when they have something to gain. Paradoxically, those who are most likely to benefit from DSI are also more likely to be unaware, uninterested or unconnected. Instead of being confined to the online sphere, social entrepreneurs should work actively with target populations in the field, developing solutions and encouraging participation. As Tom Saunders of NESTA states, it is important to “remember that there’s a world beyond the Internet”. For example, the city of Amsterdam is remarkable for its efforts to integrate people into the collaborative economy.

9. Duplication, duplication, duplication

Most digital platforms operate according to the logic of network externalities; they are also known as multi-sided platforms. This means that the presence of one group of users on a platform makes it more attractive for other groups to join. In this way, certain digital platforms build up their user base rapidly and become dominant players. While this can be problematic in terms of the build-up of monopolistic power, having too many start-ups in the same field is also problematic, as is the case today in some areas of DSI.

For example, there are more than 20 civic-tech platforms with similar functions in France. The potential gains and losses in terms of social welfare and efficiency should be better understood and evaluated in the case of DSI. Many of these platforms struggle to grow, their user base is fragmented, and they end up closing down within a few years of launching. One solution could be to allow reputation, or other information about users, to be shared between platforms, which would help sustain diversity while avoiding a centralisation of power.

10. Lack of cross-fertilisation

The importance of the above problems also depends on the field of activity and the type of DSI considered, as there are many different types of DSIs. Aggregating all DSIs into a single group may be misleading. At the same time, it is precisely this diversity that gives this emerging ecosystem its dynamism and resilience. Unfortunately, this diversity is not used effectively. Instead, field-specific bubbles have formed, with weak interactions between them. Cross-fertilisation and synergies between these bubbles could increase resilience, but the networks remain weak. A recent initiative in France is Plateformes en Communs, which aims to form a common platform of cooperatives and associations in diverse domains of activity, so as to leverage synergies between them.

Given the high level of penetration of digital technologies in our lives, digital social innovations promise to address big challenges, yet for there to be better outcomes, more needs to be done. Participation in civic life, online or offline, is always valuable in an increasingly problematic world. Digital platforms make this participation much easier. As the saying goes, little drops of water make a mighty ocean.

[divider style=”dotted” top=”20″ bottom=”20″]

DSI4EU, Müge Ozman and Cedric Gossart are organising a special stream on digital social innovations at the 10th International Social Innovation Conference, which will take place in Heidelberg in September 2018.

Müge Ozman, Professor of Management, Télécom École de Management – Institut Mines-Télécom

The original version of this article was published on The Conversation.


Coming soon: “smart” cameras and citizens improving urban safety

Flavien Bazenet, Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Institut Mines-Telecom Business School (IMT)

This article was written based on the research Augustin de la Ferrière carried out during his “Grande École” training at Institut Mines-Telecom Business School (IMT).

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]“S[/dropcap]afe cities”: some see them as increasing the security and resilience of cities, while others see them as an instance of ICTs (Information and Communication Technologies) being used in the move towards a society of control. The term has sparked much debate. Still, through balanced policies, the “Safe City” could become part of a comprehensive “smart city” approach. Citizen crowdsourcing (security by citizens) and video analytics, meaning “situational analysis that involves identifying events, attributes or behavior patterns to improve the coordination of resources and reduce investigation time” (source: IBM), can protect privacy while keeping costs and performance under control.

 

Safe cities and video protection

A “safe city” refers to the use of NICTs (New Information and Communication Technologies) for urban security purposes. In reality, however, the term is primarily a marketing concept that the major systems integrators of the security sector have used to promote their video protection systems.

Urban cameras first appeared in the United Kingdom in the mid-1980s and gradually became widespread. While their use is sometimes debated, they are generally well accepted by citizens, although this acceptance varies with each country’s risk culture and approach to security matters. Today, nearly 250 million video protection systems are in use throughout the world, or roughly one camera for every 30 inhabitants. But the effectiveness of these cameras is often called into question. It is therefore necessary to take a closer look at their role and actual effectiveness.

According to several French reports—in particular the “Report on the effectiveness of video protection by the French Ministry of the Interior, Overseas France and Territorial Communities” (2010) and ”Public policies on video protection: a look at the results” by INHESJ (2015)—the systems appear to be effective primarily in deterring minor criminal offences, reducing urban decay and improving interdepartmental cooperation in investigations.

 

The effectiveness of video protection limited by technical constraints

On the other hand, video protection has proven completely ineffective in preventing serious offences. The cameras appear only to be effective in confined spaces, and could even have a “publicity effect” for terrorist attacks. These characteristics have been confirmed by analysts in the sector, and are regularly emphasized by Tanguy Le Goff and Eric Heilmann, researchers and experts on this topic.

They also point out that our expectations for these systems are too high, and stress that the technical constraints are too significant, in addition to the excessive installation and maintenance costs.

To better explain the deficiencies of this kind of system, we must understand that in a remotely monitored city, cameras constantly film the streets. Each camera is connected to the urban monitoring center, where its signal is displayed across several screens. The images are then interpreted by one or more operators. But no human can legitimately be expected to stay focused on a multitude of screens for hours at a time, especially when the operator-to-screen ratio is often wildly disproportionate: in France, it sometimes reaches one operator for one hundred screens! This is why the typical video protection system’s capacity for prevention is virtually nonexistent.

Technical experts suggest that the real promise of video protection, its forensic value (the ability to provide evidence), is likewise nullified by obvious technical constraints.

In a “typical” video protection system, the volume of data recorded by each camera is substantial. According to an estimate by one manufacturer (Axis Communications), a camera recording 24 frames per second generates between 0.74 GB and 5 GB of data per hour, depending on the encoding and the chosen resolution. The servers therefore saturate quickly, since current storage capacities are limited.

With an average cost of approximately 50 euros per terabyte, local authorities and town halls find it difficult to afford data centers capable of keeping video recordings for a sufficient length of time. In France, the CNIL authorizes video recordings to be kept for 30 days, but in reality they are rarely kept for more than 7 consecutive days; according to some experts, often no more than 48 hours. This undermines the main argument used in favor of video protection: the ability to provide evidence.
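To give an idea of the orders of magnitude involved, here is a minimal back-of-envelope sketch based on the figures quoted above (0.74 to 5 GB per hour per camera, roughly 50 euros per terabyte, and the 30-day CNIL ceiling); only the arithmetic is added here.

```python
# Back-of-envelope estimate of per-camera storage needs and cost,
# using the data rates and the ~50 EUR/TB figure quoted in the text.
RETENTION_DAYS = 30          # CNIL ceiling mentioned above
COST_PER_TB_EUR = 50         # average storage cost quoted in the text

for gb_per_hour in (0.74, 5.0):                      # low and high encoding settings
    gb_retained = gb_per_hour * 24 * RETENTION_DAYS  # continuous recording
    tb_retained = gb_retained / 1000
    cost = tb_retained * COST_PER_TB_EUR
    print(f"{gb_per_hour} GB/h -> {tb_retained:.2f} TB over {RETENTION_DAYS} days, "
          f"~{cost:.0f} EUR per camera")
```

Multiplied by the hundreds or thousands of cameras a city may operate, even these modest per-camera figures add up quickly, which is consistent with the short retention periods observed in practice.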

 

A move towards new smart video protection systems?

The only viable alternative to the “traditional” video protection system is that of “smart” video protection using video analytics or “VSI”: technology that uses algorithms and pixel analysis.

Since these cameras are generally supported by citizens, they must become more efficient, and not lead to a waste of financial and human resources. “Smart” cameras therefore offer two possibilities: biometric identification and situational analysis. These two components should enable the activation of automatic alarms for operators so that they can take action, which would mean the cameras would truly be used for prevention.

A massive roll-out of biometric identification is currently all but impossible in France, since the CNIL is committed to the principles of purpose and proportionality: it is illegal to associate recorded data with citizens’ faces without first establishing a precise purpose for the use of this data. The Senate is currently studying this issue.

 

Smart video protection, safeguarding identity and personal data?

On the other hand, situational analysis offers an alternative that can tap the full potential of video protection cameras. Through the analysis of situations, objects and behavior, real-time alerts are sent to video protection operators, a feature that restores hope in the system’s capacity for prevention. This is in fact the logic behind the very controversial European surveillance project INDECT: limit video recording in order to focus only on pertinent information and automated alerts. This technology therefore makes it possible to opt for selective video recording, or even to do away with it altogether.

“Always being watched”… Here, in Bucharest (Romania), end of 2016. J. Stimp/Flickr, CC BY

VSI with situational analysis could offer real benefits for society, in terms of effective security measures and of deployment costs for taxpayers. It requires fewer operators than conventional video protection, fewer cameras and less costly storage space. Going back to the common definition of a “smart city” (realistic interpretation of events, optimization of technical resources, more adaptive and resilient cities), this approach to video protection would put “safe cities” at the heart of the smart city approach.

Nevertheless, several risks of abuse and potential errors exist, such as unwarranted alerts being generated, and they raise questions about the implementation of such measures.

 

Citizen crowdsourcing and bottom-up security approaches

The second characteristic of a “smart and safe city” is that it must take people into account: citizen users, the city’s driving force. Security crowdsourcing is a phenomenon that finds its applications in our hyperconnected world through “ubiquitous” technology (smartphones, connected objects). The Boston Marathon bombing (2013), the London riots (2011), the Paris attacks (2015) and various natural catastrophes showed that citizens are not necessarily dependent on central governments and can help ensure their own security, or at least work together with the police and rescue services.

Social networks, Twitter, and Facebook with its “Safety Check” feature, are the main examples of this change. Similar applications quickly proliferated, such as Qwidam, SpotCrime, HeroPolis and MyKeeper, and are breaking into the protection sector. These mobile solutions, however, are struggling to gain ground in France due to fears that false information could spread. Yet these initiatives offer true alternatives and should be studied and even encouraged. Without responsible citizens, there can be no resilient cities.

A study from 2016 shows that citizens are likely to use these emergency measures on their smartphones, and that they would make them feel safer.

Since the “smart city” relies on citizen-based, adaptive and ubiquitous intelligence, it is in our mutual interest to learn from bottom-up governance methods, in which information comes directly from the ground, so that the safe city can finally become a real component of the smart city approach.

 

Conclusion

Implementing major urban security projects without considering the issues involved in video protection and citizen intelligence leads to a waste of the public sector’s human and financial resources. The use of intelligent measures and the implementation of a citizen security policy would therefore help to create a balanced urbanization policy, a policy for safe and smart cities.

[divider style=”normal” top=”20″ bottom=”20″]

Flavien Bazenet, Associate Professor of Entrepreneurship and Innovation at Institut Mines-Telecom Business School (IMT), and Gabriel Périès, Professor, Department of Foreign Languages and Humanities at Institut Mines-Telecom Business School (IMT)

The original version of this article (in French) was published in The Conversation.


Bitcoin: the economic issues at stake

Patrick Waelbroeck, Institut Mines-Télécom (IMT)

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]C[/dropcap]ryptocurrencies like Bitcoin only have value if all the participants in the monetary system view them as currency. A cryptocurrency must therefore be scarce, in the sense that it must not be easily copied (the equivalent, for traditional currencies, of the problem of counterfeit banknotes).

This requirement is met by the Bitcoin network, which ensures that no double-spending occurs. Beyond the value linked to the acceptance of the currency, Bitcoin owes its value to a variety of economic mechanisms tied to the analysis of its supply and demand.

Bitcoin supply

The issuance of currency in the primary market

The creation of Bitcoins is determined by the mining process: each block that is mined generates new Bitcoins. By design, the amount issued per mined block is divided by 2 every 210,000 blocks, so that the total amount of Bitcoins in circulation (excluding those that are lost) converges to 21 million. This monetary rule is monitored by the Bitcoin Foundation consortium, as we will discuss later in this article. It can therefore be modified to respond to fluctuating market conditions, which can result in a hard fork.
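A minimal sketch shows how this geometric issuance schedule converges to 21 million; it assumes the widely known initial reward of 50 Bitcoins per block, a figure not stated in the article itself.

```python
# Sketch of Bitcoin's issuance schedule: the per-block reward is halved
# every 210,000 blocks, so total issuance is a geometric series.
reward = 50.0                 # assumed initial reward, in BTC per block
BLOCKS_PER_ERA = 210_000
total = 0.0
while reward >= 1e-8:         # stop once the reward drops below one satoshi
    total += reward * BLOCKS_PER_ERA
    reward /= 2
print(f"Total issuance converges to about {total:,.0f} BTC")   # ~21,000,000
```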

Electricity is the main component (over 90% according to current estimates) of a mining farm’s total costs. Böhme et al. (2015) estimated the Bitcoin network’s consumption at over 173 megawatts of electricity on a continuous basis. This represented approximately 20% of a nuclear power plant’s output and amounted to 178 million dollars per year (based on residential electricity prices in the United States). This amount may seem high, but Pierre Noizat points out that it is no more than the annual electricity cost of the global network of ATMs (automatic teller machines), estimated at 400 megawatts. Once we factor in the costs of manufacturing and circulating currency and bank cards, the Bitcoin network’s electricity cost is not as high as it seems.
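As a rough consistency check, the arithmetic behind those figures can be reproduced in a few lines; the $0.12/kWh residential price and the roughly 900 MW reference reactor used below are our own assumptions, not figures from the article.

```python
# Rough check of the order of magnitude quoted for the 2015 Bitcoin network.
POWER_MW = 173                         # continuous draw estimated by Böhme et al.
PRICE_USD_PER_KWH = 0.12               # assumed US residential price (approximate)
REACTOR_MW = 900                       # assumed output of a typical reactor

energy_kwh = POWER_MW * 1000 * 24 * 365        # kWh consumed over one year
print(f"~{energy_kwh / 1e9:.2f} billion kWh per year")
print(f"~{energy_kwh * PRICE_USD_PER_KWH / 1e6:.0f} million USD per year")
print(f"~{POWER_MW / REACTOR_MW:.0%} of one reactor's output")
```

The result, roughly 180 million dollars per year and about a fifth of one reactor’s output, is consistent with the figures quoted above.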

However, this cost may increase significantly as the network develops, due to a negative externality inherent in mining: each miner who invests in new hardware increases his or her marginal revenue, but at the same time increases the overall cost of mining, since the difficulty rises with the number of miners and their computation capacity (hash power).
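The externality can be illustrated with a toy example (our own, not a model from the article): under proof-of-work, each miner’s expected share of a fixed block reward is proportional to its share of total hash power, so any individual capacity increase dilutes everyone else while raising aggregate costs.

```python
# Toy illustration of the mining externality: expected reward shares are
# proportional to hash power, while the total reward per block is fixed.
def reward_shares(hash_powers):
    total = sum(hash_powers)
    return [h / total for h in hash_powers]

miners = [10.0, 10.0, 10.0]            # three equal miners (arbitrary units)
print(reward_shares(miners))           # each gets one third of the fixed reward

miners[0] += 10.0                      # miner 0 doubles its capacity
print(reward_shares(miners))           # [0.5, 0.25, 0.25]: the others are diluted,
                                       # while total electricity spending has risen
```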

The quest for bitcoin. xlowmiller/VisualHunt

For the Bitcoin network, the difficulty of the cryptographic problem that must be solved and approved by proof-of-work consensus therefore increases with the network’s overall hash power. There is thus a risk of over-investment in mining capacity, since individual miners do not take into account the negative effect they have on the entire network.

It is important to note that increasing the mining difficulty reduces mining incentives and increases verification time, and thus reduces the efficiency of the blockchain itself. This mechanism brings to mind the tragedy of the commons, in which shared resources (here, hash power) are depleted and end up maintained by only a handful of farms and pools, thereby negating the very principle of the public blockchain, which is decentralization.

There is therefore a risk that mining capacities will become greatly concentrated in the hands of a small group of players, thus invalidating the very principle of the blockchain. This trend is already visible today.

In the end, the supply of Bitcoins, and therefore monetary creation on the primary market, depends on the cost of electricity and the difficulty of the mining process, as well as on the governance rules setting the Bitcoin reward generated by a mined block.

Bitcoin’s value on the secondary market

Bitcoins can also be bought and sold on exchange platforms. In this case, Bitcoin’s value behaves like that of a financial investment, with market players anticipating prospects of gain and factors that could cause the currency to appreciate.

Bitcoin demand

The demand for cryptocurrency depends on several user concerns that are addressed below, starting with the positive factors and ending with the risks.

Financial privacy

Bitcoin accepted here. jurvetson on Visual Hunt

Governments are increasingly limiting the use of cash to demonstrate their efforts against money laundering and the development of black markets. Cash is the only means of payment that is 100% anonymous. Bitcoin and other cryptocurrencies come in second, since Bitcoin’s pseudonymous system effectively conceals the identity of the individuals making transactions. Other cryptocurrencies, such as Zcash, go a step further and mask all the metadata linked to a transaction.

Why do people want to use an anonymous payment method? For several reasons.

First of all, this type of payment method prevents users from leaving traces that could be used for monitoring purposes by governments, employers and certain companies (especially banks and insurance companies). Companies and banks use price discrimination practices that can sometimes work against consumers. Leaving payment traces also exposes customers to ever more commercial offers and targeted advertising, which some see as a nuisance.

Secondly, paying with an anonymous payment method limits “sousveillance” (or inverse surveillance) by close friends and family, for instance when a payment is made from a joint account.

Thirdly, making payment under a pseudonym makes it possible to maintain business confidentiality.

Fourthly, much like privacy protection in general, anonymity in certain transactions (for example for healthcare products or hospital visits) helps build trust in society, and is therefore of economic value. By enabling pseudonymity, Bitcoin thus brings added value in these various situations.

Bitcoin works in times of crisis, circumventing capital controls

Bitcoin emerged right after the financial crisis of 2008. That period demonstrated the power of governments and central banks to control cash withdrawals and outstanding capital. There are very few ways of avoiding these two institutional constraints; Bitcoin is one of them. Even if cash withdrawals are prohibited, Bitcoin owners can still pay using their private key.

Bitcoin imposes discipline on governments

Bitcoin (and the same is true for other cryptocurrencies) can be considered a monetary alternative that is not controlled by a central bank. Some economists, like F. Hayek, see such alternative currencies, which compete with the official currency, as a means of imposing discipline on governments that might be tempted to use inflation to finance their debt. Were that to happen, consumers and investors would stop using the official currency and would instead buy the alternative currency, creating deflationary pressure on the official one.

Security-related network externalities

The level of security increases with the number of network nodes, since each node adds to the computation power required to breach the blockchain’s security (through a 51% attack, double-spending, or denial of service, DoS). Furthermore, a DoS attack is especially hard to stage, since it is so difficult to determine whom to target. Positive network externalities therefore exist: Bitcoin’s value increases with the number of nodes participating in the network.

Indirect network externalities related to payment method

Bitcoin is a payment method, just like cash, debit cards and Visa/Mastercard/American Express cards. It can therefore be analysed using multi-sided market theory, which models situations where two groups of economic players benefit from positive cross-externalities. A consumer who chooses a payment method is happy when the merchant accepts it; in the same way, merchants are eager to accept a payment method their customers possess. Consequently, the dynamics of multi-sided markets produce virtuous cycles that can go through a slow inception phase followed by a very fast deployment phase. If Bitcoin entered this type of phase, its value would accelerate.
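A toy simulation (a sketch under arbitrary assumptions, not a calibrated model) illustrates the kind of trajectory described here: when each side’s adoption feeds the other’s, growth crawls for a long time and then accelerates sharply once a critical mass is reached.

```python
# Toy two-sided adoption dynamics: each side's adoption grows with the other
# side's current adoption (coefficients and starting points are arbitrary).
consumers = merchants = 0.05           # initial adoption shares (assumed)
for period in range(1, 17):
    consumers = min(1.0, consumers + 2.0 * consumers * merchants * (1 - consumers))
    merchants = min(1.0, merchants + 2.0 * merchants * consumers * (1 - merchants))
    print(f"period {period:2d}: consumers {consumers:.2f}, merchants {merchants:.2f}")
# Adoption grows slowly for roughly a dozen periods, then jumps towards
# saturation: the slow inception phase followed by fast deployment described above.
```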

A Bitcoin bubble? duncan on Visual Hunt, CC BY-NC

The risks

Among the factors that reduce the demand for Bitcoins, the most prominent are the risks related to rules and regulations. On the one hand, a State could order that capital gains from buying and selling Bitcoins be declared. On the other hand, Bitcoins can be used in regulated sectors (such as insurance and banking), and their use could therefore be regulated as well. Finally, there is always the risk of losing the data on the hard drive where the private key is stored, resulting in the loss of the associated Bitcoins, or of a State forcing access to private keys for security reasons.

However, the greatest risk involves the governance of the Bitcoin network.

In the event of a disagreement on how the communication protocol should develop, there is a risk that the network could split into several networks (hard fork) with currencies that would be incompatible with each other. The most important issue involves the choice of the consensus rule for validating new blocks. A consensus must be reached on this consensus, which the technology itself appears unable to provide.

Conclusion

Bitcoin’s economic value depends on many positive economic factors that could propel the cryptocurrency into a period of sustained growth, which would justify the current surge in its price on exchange markets. However, the risks related to the network’s governance must not be overlooked, since trust in this new currency depends on it.

Patrick Waelbroeck, Professor of Economics at Télécom ParisTech, Institut Mines-Télécom (IMT)

The original version of this article (in French) was published on The Conversation.


 


The Auragen project: turning the hopes of genomic medicine into reality

There is only one way to unlock the mysteries of certain genetic diseases — analyze each patient gene by gene. Genome analysis offers great promise for understanding rare diseases and providing personalized treatment for each patient. The French government hopes to make this new form of medicine available through its healthcare system by 2025. To achieve this aim, institutional healthcare stakeholders have joined forces to develop gene sequencing platforms. One such platform, named Auragen, will be established in the Auvergne Rhône-Alpes region. Mines Saint-Étienne is one of the partners in the project. Vincent Augusto, an industrial engineering researcher at the Saint-Etienne school, explains Auragen’s objectives and how he is involved in creating the platform.  

 

What is the purpose of genomic medicine?

Vincent Augusto: Certain diseases are caused by modifications of the genetic code, which are still not understood. This is true for many forms of cancer or other rare pathologies. In order to treat patients with these diseases, we must understand genetic alterations and be able to determine how these genes are different from those of a healthy individual. The goal of genomics is therefore to sequence a patient’s genome, the entire set of genes, in order to understand his disease and provide personalized diagnosis and treatment.

 

Is genomic medicine a recent idea?

VA: Gene sequencing has existed for 40 years, but it used to be very costly and could take up to several months to determine the entire genome of a living being. Thanks to advances in technology, a human genome can now be sequenced in just a few hours. The main limitation to developing genomic medicine is an economic one: some startups offer sequencing for several thousand euros. But in order to make this service available to patients through the healthcare system, the processes must be industrialized to bring down the cost. And this is precisely what the Auragen project aims to do.

 

What is the Auragen project?

VA: It is part of the France Genomic Medicine 2025 Plan launched in 2016 with the aim of developing genomics in the French healthcare system. The Auragen project strives to create one of the two sequencing platforms planned in France, in the Auvergne-Rhône-Alpes region (the other platform, SeqOIA, is located in the Île-de-France region). To do so, it has brought together the university hospital centers of Lyon, Grenoble, Saint-Étienne and Clermont-Ferrand, two cancer centers and research centers including Mines Saint-Étienne. The goal is to create a platform that centralizes and sequences samples and sends the results to doctors as quickly and inexpensively as possible.

 

How are you contributing to the project?

VA: At Mines Saint-Étienne, we are involved in the organizational assessment of the platform. Our role is to model the platform’s components and the players involved, in order to optimize the analysis of sequences and the speed with which samples are transmitted. To do so, we use mathematical healthcare models to find the best possible way to organize the process, from a patient’s consultation with an oncologist to the result. This assessment is not only economic in nature: we also aim to quantitatively assess the platform’s benefits for patients. The assessment tools will be designed to be reproduced and used in other gene sequencing platform initiatives.

 

What research are you drawing on to assess the organization of the Auragen platform?

VA: We are drawing on the e-SIS project we took part in, in which we evaluated the impact of information and communication technologies in oncology. This project was part of a research program to study the performance of the Ministry of Health’s healthcare system. We proposed methods for modeling different processes, computerized and non-computerized, in order to compare the effectiveness of both systems. This allowed us to quantitatively evaluate the benefits of computer systems in oncologists’ offices.

 

What challenges do you face in assessing a sequencing platform?

VA: The first challenge is to try to size and model a form of care which doesn’t exist yet. We’ll need to have discussions with oncologists and genomics researchers to determine at what point in the treatment pathway sequencing technologies should be integrated. Then comes the question of the assessment itself. We have a general idea about the cost of sequencing devices and operations, but these methods will also lead to new treatment approaches whose costs will have to be calculated. And finally, we’ll need to think about how to optimize everything surrounding the sequencing itself. The different computational biology activities for analyzing data and the transmission channels for the samples must not be slowed down.

 

What is the timeline for the Auragen project?

VA: Our team will be involved in the first three years of the project in order to carry out our assessment. The project will last a total of 60 months. At the end of this period, we should have a working platform which is open to everyone and whose value has been determined and quantified. But before that, the first deadline is in 2019, at which time we must already be able to maintain a pace of 18,000 samples sequenced per year.

 

 


Will 5G turn the telecommunications market upside-down?

The European Commission is anticipating the arrival of the fifth generation in mobile phones (5G) in 2020. It is expected to significantly increase data speeds and offer additional uses. However, the extent of the repercussions on the telecommunications market and on services is still difficult to evaluate, even for the experts. Some believe that 5G will be no more than a technological step up from 4G, in the same way that 4G progressed from 3G. In which case, it should not create radical change in the positions of the current economic stakeholders. Others believe that 5G has the potential to cause a complete reshuffle, stimulating the creation of new industries which will disrupt the organization among longstanding operators. Marc Bourreau sheds light on these two possibilities. He is an economist at Télécom ParisTech, and in March, co-authored a report for the Centre on Regulation in Europe (Cerre) entitled “Towards the successful deployment of 5G in Europe: What are the necessary policy and regulatory conditions?”.

 

Can we predict what 5G will really be like?

Marc Bourreau: 5G is a shifting future. It is a broad term which encompasses the current technical developments in mobile technologies. The first of these will not reach commercial maturity until 2020, and will continue to develop afterwards, similar to the way in which 4G is still developing today. At present, 5G could go in a number of directions. But we can already imagine the likely scenarios from the positioning of economic actors and regulators.

Is seeing 5G as a simple progression from 4G one of those scenarios?  

MB: 5G partly involves improving 4G, using new frequency bands, increasing antenna density, and improving the efficiency of wireless technology to allow greater data speeds. One way of seeing 5G is indeed as an improved 4G. However, this is probably the smallest progression that can be envisaged. Under this hypothesis, the structure of the market would be fairly similar, with mobile operators keeping an economic model based on sales of 5G subscriptions.

Doesn’t this scenario worry the economic stakeholders?

MB: Not really. In this case, the regulations would not change a great deal, which would mean there would be no need for a major adaptation by the longstanding stakeholders. There may be questions over investment for the operators, for example in antennae for which the density is set to rise. They would have to find a way of financing the new infrastructure. There would perhaps also be questions surrounding installation. A large density of 5G antennae would mean that development would primarily take place in urban areas, where installing antennae poses fewer problems.

Which scenario could change the way the current mobile telecommunications market is structured?

MB: In contrast to the scenario of a simple progression, there is that of a total revolution. In this case, 5G would provide network solutions for particular industries. Economically speaking, we talk about industry “verticals”. Connected cars are a vertical, as are health and connected objects. These sectors could develop new services with 5G. It would be a true revolution, as these verticals require access to the network and infrastructure. If a carmaker creates an autonomous vehicle, it must be able to receive and send data on a dedicated network. This means that antennae and bandwidth will need to be shared with mobile phone operators.

To what extent would this “revolution” scenario affect the market?

MB: Newcomers will not have their own infrastructure. They will therefore be virtual operators, as opposed to classical operators. They will probably have to rent access. This means the longstanding operators will have to change their economic model to incorporate this new role. In this scenario, the current operators would become network actors rather than service actors. Sharing the network like this could imply regulations to help the different actors to negotiate with each other. As each virtual operator will have different needs, the quality of service will not be identical for each vertical. The question of preserving or adapting the neutrality of the net for 5G will inevitably arise.

Isn’t the scenario of a revolution, along with new services, more advantageous?

MB: It certainly promises to use the full potential of technology to achieve many things. But it also involves risk. It could disrupt operators’ economic models, and who knows if they will be able to adapt? Will the longstanding operators be capable of investing in infrastructure which will then be open to all? An optimistic view would be to say that by opening the networks, the many services created will generate value which will, in part, come back to the operators, allowing them to finance the development of the network. But we should not forget the slightly more pessimistic view that the value might come back to the newcomers only. If this were to happen, the longstanding operators would no longer be able to invest, infrastructure would not be rolled out on a large scale, and the scenario of a revolution would not be possible.

Of the two scenarios, a “progression” or a “revolution”, is one more likely than the other?

MB: In reality, we have to see the two scenarios as an evolution over time rather than a choice. Once 5G is launched in 2020, there will be room for development. The technology will evolve under the umbrella term “5G”, which brings together the basic technological building blocks. After all, each mobile generation brings about changes which consumers do not necessarily notice. When the technology is launched commercially, it will probably be more of a progression from 4G. The question is, will it then develop into the more ambitious scenario of a revolution?

What will influence the deepening role of 5G?

MB: The choice of scenario now depends on standardization choices. Setting the direction of the technology can either facilitate or limit transformations in the market. Standardization is carried out within large economic areas. There are hopes of partnerships, for example between Europe and Korea, to unify standards and produce a homogeneous 5G. But we must not forget that the different economic areas can also have their own interest in sticking with a progression or opting for a revolution.

How do the interests of each economic area come into play?

MB: This technology is interesting from both an industrial and a social point of view. Choices may be made on each of these aspects, depending on the policy preferred by an economic area. From an industrial point of view, a conservative approach to protect current actors will favor standardization. Conversely, other choices may be made, allowing new actors to emerge, which would correspond more to a “revolution” scenario. From a social point of view, we need to look at what the consumer wants, whether the new services created risk disrupting those currently on offer, and so on.

What roles do the various stakeholders play in the decision-making process? 

MB: The choice may be decentralized to the stakeholders. Operators are in discussion and negotiation with the vertical stakeholders. I think it is worth letting this process play out, allowing it to generate experimentation. The situation is similar to the early days of the mobile web, when no one knew what the right application or the right economic model was. For 5G, no one knows what the relationship between mobile operators and carmakers, for example, might be. They must be left to find their own common ground. Behind this, the role of public policy is to support experimentation and respond to market failures, but only if these occur. The European Commission is there to coordinate the stakeholders and support them in their transformation and experimentation. The H2020 programme is a typical example: research projects bringing together scientists and industrial actors to come up with solutions.

This article is part of our dossier 5G: the new generation of mobile is already a reality


Ethics, an overlooked aspect of algorithms?

We now encounter algorithms at every moment of the day. But this exposure can be dangerous. It has been proven to influence our political opinions, moods and choices. Far from being neutral, algorithms carry their developers’ value judgments, which are imposed on us without our noticing most of the time. It is now necessary to raise questions about the ethical aspects of algorithms and find solutions for the biases they impose on their users.

 

[dropcap]W[/dropcap]hat exactly does Facebook do? Or Twitter? More generally, what do social media sites do? The overly-simplified but accurate answer is: they select the information which will be displayed on your wall in order to make you spend as much time as possible on the site. Behind this time-consuming “news feed” hides a selection of content, advertising or otherwise, optimized for each user through a great reliance on algorithms. Social networks use these algorithms to determine what will interest you the most. Without questioning the usefulness of these sites — this is most likely how you were directed to this article — the way in which they function does raise some serious ethical questions. To start with, are all users aware of algorithms’ influence on their perception of current events and on their opinions? And to take a step further, what impacts do algorithms have on our lives and decisions?

For Christine Balagué, a researcher at Télécom École de Management and member of CERNA (see text box at the end of the article), “personal data capturing is a well-known topic, but there is less awareness about the processing of this data by algorithms.” Although users are now more careful about what they share on social media, they have not necessarily considered how the service they use actually works. And this lack of awareness is not limited to Facebook or Twitter. Algorithms now permeate our lives and are used in all of the mobile applications and web services we use. All day long, from morning to night, we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, AirBnb, etc.

Are your trips determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for IMTech

 

“They control our lives,” says Christine Balagué. “A growing number of articles published by researchers in various fields have underscored the power algorithms have over individuals.” In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research, demonstrated how a search engine could influence election results. His study, carried out with over 4,000 individuals, showed that candidates’ rankings in search results influenced at least 20% of undecided voters. In another striking example, a study carried out by Facebook in 2012 on 700,000 of its users showed that people who had previously been exposed to negative posts posted predominantly negative content. Meanwhile, those who had previously been exposed to positive posts posted essentially positive content. This shows that algorithms can manipulate individuals’ emotions without their realizing it or being informed of it. What role do our personal preferences play in a system of algorithms of which we are not even aware?

 

The opaque side of algorithms

One of the main ethical problems with algorithms stems from this lack of transparency. Two users who carry out the same query on a search engine such as Google will not get the same results. The explanation provided by the service is that responses are personalized to best meet the needs of each individual. But the mechanisms for selecting results are opaque. Among the parameters taken into account to determine which sites will be displayed on the page, over a hundred have to do with the user performing the query. In the name of trade secrecy, the exact nature of these personal parameters and the way Google’s algorithms take them into account remain unknown. It is therefore difficult to know how the company categorizes us, determines our areas of interest and predicts our behavior. And once this categorization has been carried out, is it even possible to escape it? How can we maintain control over the perception that the algorithm has created about us?

This lack of transparency prevents us from understanding possible biases which could result from data processing. Nevertheless, these biases do exist and protecting ourselves from them is a major issue for society. A study by Grazia Cecere, an economist at Télécom École de Management, provides an example of how individuals are not treated equally by algorithms. Her work has highlighted discrimination between men and women in a major social network’s algorithms for associating interests. “In creating an ad for STEM subjects (science, technology, engineering, mathematics), we noticed that the software demonstrated a preference for distributing it to men, even though women show more interest in this subject,” explains Grazia Cecere. Far from the myth of malicious artificial intelligence, this sort of bias is rooted in human actions. We must not forget that behind each line of code, there is a developer.

Algorithms are used first and foremost to propose services, which are most often commercial in nature. They are thus part of a company’s strategy and reflect this strategy in order to respond to its economic demands. “Data scientists working on a project seek to optimize their algorithms without necessarily thinking about the ethical issues involved in the choices made by these programs,” points out Christine Balagué. In addition, humans have perceptions of the society to which they belong and integrate these perceptions, consciously or unconsciously, into the software they develop. Indeed, the value judgments present in algorithms quite often reflect the value judgments of their creators. In the example of Grazia Cecere’s work, this provides a simple explanation for the bias discovered: “An algorithm learns what it is asked to learn and replicates stereotypes if they are not removed.”
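
To make the mechanism concrete, here is a deliberately simple Python sketch. The data are invented and scikit-learn is assumed to be installed; it illustrates the general point, not Grazia Cecere’s actual study: a model trained on historically skewed ad-delivery decisions simply reproduces that skew.

```python
# Purely illustrative: invented data, not taken from any real campaign.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
# Feature: gender (0 = woman, 1 = man); label: "was shown the STEM ad" in past campaigns.
# The historical data is skewed: men were shown the ad 80% of the time, women only 20%.
gender = rng.integers(0, 2, size=2000).reshape(-1, 1)
shown = (rng.random(2000) < np.where(gender.ravel() == 1, 0.8, 0.2)).astype(int)

model = LogisticRegression().fit(gender, shown)

# The model learns the historical rule and keeps preferring men:
print(model.predict_proba([[1]])[0, 1])  # ~0.8 probability of targeting a man
print(model.predict_proba([[0]])[0, 1])  # ~0.2 probability of targeting a woman
```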


What biases are hiding in the digital tools we use every day? What value judgments passed down from algorithm developers do we encounter on a daily basis? Illustration: Diane Rottner for IMTech.

 

A perfect example of this phenomenon involves medical imaging. An algorithm used to classify a cell as sick or healthy must be configured to strike a balance between the number of false positives and false negatives. Developers must therefore decide to what extent it is tolerable for healthy individuals to receive positive tests in order to prevent sick individuals from receiving negative tests. Doctors prefer false positives to false negatives, while the scientists who develop the algorithms prefer false negatives to false positives, as scientific knowledge is cumulative. Depending on their own values, developers will favor one of these perspectives.
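
As a rough illustration of this trade-off (synthetic scores and invented parameters), the following Python sketch shows how moving the decision threshold converts false negatives into false positives, and vice versa:

```python
# Illustrative only: synthetic "probability of being sick" scores.
import numpy as np

rng = np.random.default_rng(1)
healthy_scores = rng.normal(0.3, 0.15, 1000)  # healthy cells score lower on average
sick_scores = rng.normal(0.7, 0.15, 200)      # sick cells score higher on average

def errors(threshold):
    false_positives = np.sum(healthy_scores >= threshold)  # healthy flagged as sick
    false_negatives = np.sum(sick_scores < threshold)      # sick cells missed
    return false_positives, false_negatives

for t in (0.3, 0.5, 0.7):
    fp, fn = errors(t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
# A "doctor-friendly" setting picks a low threshold (few false negatives, many false
# positives); the opposite choice embeds a different value judgment in the same code.
```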

 

Transparency? Of course, but that’s not all!

One proposal for combating these biases is to make algorithms more transparent. Since October 2016, the Law for a Digital Republic, introduced by Axelle Lemaire, the former Secretary of State for Digital Affairs, requires transparency for all public algorithms. This law was responsible for making the code of the higher education admission website (APB) available to the public. Companies are also stepping up their transparency efforts. Since May 17, 2017, Twitter has allowed its users to see the areas of interest the site associates with them. But despite these good intentions, the level of transparency is far from sufficient to ensure the ethical dimension. First of all, code understandability is often overlooked: algorithms are sometimes delivered in formats which make them difficult to read and understand, even for professionals. Furthermore, transparency can be artificial. In the case of Twitter, “no information is provided about how user interests are attributed,” observes Christine Balagué.

[Screenshot of Twitter’s “Interests from Twitter” page: “These are some of the interests matched to you based on your profile and activity. You can adjust them if something doesn’t look right.”]

Which of this user’s posts led to his being classified under “Action and Adventure,” a very broad category? How are “Scientific news” and “Business and finance” weighed in order to display content in the user’s Twitter feed?

 

To take a step further, the degree to which algorithms are transparent must be assessed. This is the aim of the TransAlgo project, another initiative launched by Axelle Lemaire and run by Inria. “It’s a platform for measuring transparency by looking at what data is used, what data is produced and how open the code is,” explains Christine Balagué, a member of TransAlgo’s scientific council. The platform is the first of its kind in Europe, making France a leading nation in transparency issues. Similarly, DataIA, a convergence institute for data science established on Plateau de Saclay for a period of ten years, is a one-of-a-kind interdisciplinary project involving research on algorithms in artificial intelligence, their transparency and ethical issues.

The project brings together multidisciplinary scientific teams in order to study the mechanisms used to develop algorithms. The humanities can contribute significantly to the analysis of the values and decisions hiding behind the development of codes. “It is now increasingly necessary to deconstruct the methods used to create algorithms, carry out reverse engineering, measure the potential biases and discriminations and make them more transparent,” explains Christine Balagué. “On a broader level, ethnographic research must be carried out on the developers by delving deeper into their intentions and studying the socio-technological aspects of developing algorithms.” As our lives increasingly revolve around digital services, it is crucial to identify the risks they pose for users.

Further reading: “Artificial Intelligence: the complex question of ethics”

[box type=”info” align=”” class=”” width=””]

A public commission dedicated to digital ethics

Since 2009, the Allistene association (Alliance of digital sciences and technologies) has brought together France’s leading players in digital technology research and innovation. In 2012, this alliance decided to create a commission to study ethics in digital sciences and technologies: CERNA. On the basis of multidisciplinary studies combining expertise and contributions from all digital players, both nationally and worldwide, CERNA raises questions about the ethical aspects of digital technology. In studying such wide-ranging topics as the environment, healthcare, robotics and nanotechnologies, it strives to increase technology developers’ awareness and understanding of ethical issues.[/box]

 

 

 

ethics, social networks, éthique, Antonio Casilli, Télécom ParisTech

Rethinking ethics in social networks research

Antonio A. Casilli, Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Centre national de la recherche scientifique (CNRS)

[dropcap]R[/dropcap]esearch into social media is booming, fueled by increasingly powerful computational and visualization tools. However, it also raises ethical and deontological issues that tend to escape the existing regulatory framework. The economic implications of large-scale data platforms, the active participation of network members, the specter of mass surveillance, the effects on health, the role of artificial intelligence: a wealth of questions all needing answers. A workshop held on December 5-6, 2017 at Paris-Saclay, organized in collaboration with three international research groups, hopes to make progress in this area.

 

Social Networks, what are we talking about?

The expression “social network” has become commonly used, but those who use it to refer to social media such as Facebook or Instagram are often unaware of its origin and true meaning. Studies of social networks began long before the dawn of the digital age. Since the 1930s, sociologists have been conducting studies that attempt to explain the structure of the relationships that connect individuals and groups: their “networks”. These could be, for example, advice relationships between employees of a business, or friendships between pupils in a school. Such networks can be represented as points (the pupils) connected by lines (the relationships).

A graphic representation of a social network (friendships between pupils at a school), created by J.L. Moreno in 1934. Circles = girls, triangles = boys, arrows = friendships. J.L. Moreno, 1934, CC BY
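
For readers curious about what this “points and lines” representation looks like in practice, here is a minimal sketch using the Python library networkx; the names and friendships are invented, but the measures computed are classics of social network analysis:

```python
# A toy friendship network: nodes are pupils, edges are friendships (all invented).
import networkx as nx

G = nx.Graph()
friendships = [("Anna", "Bea"), ("Anna", "Carl"), ("Bea", "Carl"),
               ("Carl", "Dan"), ("Dan", "Eve")]
G.add_edges_from(friendships)

# Classic social-network measures long predate social media:
print(nx.degree_centrality(G))        # who has the most friends, proportionally
print(nx.betweenness_centrality(G))   # who bridges otherwise separate groups
```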

 

Well before any studies into the social aspects of Facebook and Twitter, this research shed significant light on many topics: for example, the role of spouses in a marriage; the importance of “weak ties” in job hunting; the “informal” organization of a business; the diffusion of innovation; the formation of political and social elites; and mutual assistance and social support when faced with ageing or illness. The designers of digital platforms such as Facebook now adopt some of the analytical principles on which this research was based, founded on mathematical graph theory (although they often pay less attention to the associated social issues).

Researchers in this field understood very quickly that the classic principles of research ethics (especially the informed consent of participants in a study and the anonymization of any data relating to them) were not easy to guarantee. In social network research, the focus is never on a single individual, but rather on the links between the participant and other people. If those other people are not involved in the study, it is hard to see how their consent can be obtained. The results may also be hard to anonymize, as visualizations can often be revealing, even when no personal identification is attached.

 

Digital ethics: a minefield

Academics have been pondering these ethical problems for quite some time: in 2005, the journal Social Networks dedicated an issue to these questions. The dilemmas faced by researchers are exacerbated today by the increased availability of relational data collected and used by digital giants such as Facebook and Google. New problems arise as soon as the lines between the “public” and “private” spheres become blurred. To what extent do we need consent to access the messages that a person sends to their contacts, their “retweets” or their “likes” on friends’ walls?

Information sources are often the property of commercial companies, and the algorithms these companies use tend to offer a biased perspective on the observations. For example, can a contact made by a user through their own initiative be interpreted in the same way as a contact made on the advice of an automated recommendation system? In short, data doesn’t speak for itself, and we must question the conditions of its use and the ways it is created before thinking about processing it. These dimensions are heavily influenced by economic and technical choices as well as by the software architecture imposed by platforms.

But is negotiation between researchers (especially in the public sector) and platforms (which sometimes stem from major multinational companies) really possible? Does access to proprietary data risk being restricted or unequally distributed, potentially to the disadvantage of public research, especially when it does not correspond to the objectives and priorities of investors?

Other problems emerge when we consider that researchers may even resort to paid crowdsourcing for data production, using platforms such as Amazon Mechanical Turk to ask the crowd to respond to a questionnaire, or even to upload their online contact lists. However, these services raise long-standing questions about working conditions and the appropriation of the product of work. The ensuing uncertainty hinders research which could potentially have positive impacts on knowledge and society in a general sense.

The potential for misappropriation of research results for political or economic ends is multiplied by the availability of online communication and publication tools, which are now used by many researchers. Although the interest among the military and police in social network analysis is already well known (Osama Bin Laden was located and neutralized following the application of social network analysis principles), these appropriations are becoming even more common today, and are less easy for researchers to control. There is an undeniable risk that lies in the use of these principles to restrict civil and democratic movements.

A simulation of the structure of an Al-Qaeda network, “Social Network Analysis for Startups” (fig. 1.7), 2011. Reproduced here with permission from the authors. Kouznetsov A., Tsvetovat M., CC BY

 

Celebrating researchers

To break this stalemate, the solution is not to multiply restrictions, which would only aggravate the constraints that are already inhibiting research. On the contrary, we must create an environment of trust, so that researchers can explore the scope and importance of social networks online and offline, which are essential for understanding major economic and social phenomena, whilst still respecting people’s rights.

The active role of researchers must be highlighted. Rather than remaining subject to predefined rules, they need to participate in the co-creation of an adequate ethical and deontological framework, drawing on their experience and reflections. This bottom-up approach integrates the contributions not just of academics but also of the public, civil society associations and representatives from public and private research bodies. These ideas and reflections could then be brought forward to those responsible for establishing regulations (such as ethics committees).

 

An international workshop in Paris

Poster for the RECSNA17 Conference

Such was the focus of the workshop Recent ethical challenges in social-network analysis. The event was organized in collaboration with international teams (the Social Network Analysis Group of the British Sociological Association, BSA-SNAG; Réseau thématique n. 26 “Social Networks” of the French Sociological Association; and the European Network for Digital Labor Studies, ENDLS), with support from the Maison des Sciences de l’Homme de Paris-Saclay and the Institut d’études avancées de Paris. The conference will be held on December 5-6. For more information and to sign up, please consult the event website.
Antonio A. Casilli, Associate Professor at Télécom ParisTech and research fellow at the Centre Edgar Morin (EHESS), Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Head of Research at LRI, a computing research laboratory of the CNRS, and teacher at ENS, Centre national de la recherche scientifique (CNRS).

 

The original version of this article was published on The Conversation France.

 

Brennus Analytics

Brennus Analytics: finding the right price

Brennus Analytics offers software solutions that make artificial intelligence available to businesses. Its algorithms determine the optimal sales price, helping businesses move closer to their objective of gaining market share and margin whilst also satisfying their customers. Currently incubated at ParisTech Entrepreneurs, Brennus Analytics allows businesses to make well-informed decisions about their number one profitability lever: pricing.

 

Setting the price of a product can be a real headache for businesses. It is, however, a crucial stage which can determine the success or failure of an entire commercial strategy. If the price is too high, customers won’t buy the product. Too low, and the margin obtained is too weak to guarantee sufficient revenues. In order to help businesses find the right price, the start-up Brennus Analytics, incubated at ParisTech Entrepreneurs, offers software that makes artificial intelligence technology accessible to businesses. Founded in October 2015, the start-up builds on its founders’ own experience in the field as former researchers at the Institut de Recherche en Informatique de Toulouse (IRIT).

The start-up simplifies a task which can prove arduous and time-consuming for businesses. Hundreds or even thousands of factors have to be considered when setting prices. What is the customer willing to pay for the product? At what point in the year is demand greatest? Would a price drop have to be compensated for by an increase in volume? These are just a few simple examples showing the complexity of the problem to be solved, not forgetting that each business also has its own set of rules and restrictions concerning prices. “A price should be set depending on the product or service, the customer, and the context in which the transaction or contractual negotiation will take place”, emphasizes Emilie Gariel, the Marketing Director at Brennus Analytics.

 


 

In order to achieve this, the team at Brennus Analytics relies on its solid knowledge of pricing, combining it with data science and artificial intelligence technology. The technology chosen depends on the problem to be solved. For standard cases, statistics, machine learning, deep learning and similar technologies are used. For more complex cases, Brennus employs an exclusive technology called an “Adaptive Multi-Agent System” (AMAS). Each factor which needs to be considered is represented by an agent, and the optimal price is then obtained through an exchange of information between these agents, taking into consideration the objectives set by the business. “Our solution doesn’t try to replace human input, it simply provides valuable assistance in decision-making. This is also why we favor transparent artificial intelligence systems; it is crucial that the client understands the suggested price”, affirms Emilie Gariel.
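
To give a rough idea of the principle (and only the principle: this toy Python sketch is not Brennus’ AMAS technology, and all names and figures are invented), one can picture each pricing factor as an agent that pulls the price toward its own preference until the pressures balance out:

```python
# A deliberately simplified toy illustration of agent-based price setting.
class FactorAgent:
    def __init__(self, name, target_price, weight):
        self.name = name
        self.target_price = target_price  # the price this factor alone would prefer
        self.weight = weight              # how strongly it defends that preference

    def pressure(self, current_price):
        # Positive pressure means "raise the price", negative means "lower it".
        return self.weight * (self.target_price - current_price)

agents = [
    FactorAgent("margin objective", target_price=120.0, weight=1.0),
    FactorAgent("customer willingness to pay", target_price=95.0, weight=1.5),
    FactorAgent("competitor price", target_price=100.0, weight=0.8),
]

price = 110.0
for _ in range(200):                   # iterate until the pressures roughly cancel out
    total = sum(a.pressure(price) for a in agents)
    price += 0.01 * total              # small step in the direction of the net pressure
print(round(price, 2))                 # settles near the weighted compromise (~103.8)
```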

The data used to run these algorithms comes from the businesses themselves. Most have a transaction history and a large quantity of sales data available. These databases can potentially be supplemented by open-source data. However, the marketing director at Brennus Analytics warns: “We are not a data provider. However, there are several start-ups developing in the field of data mixing who can assist our clients if they are looking, for example, to track the prices of competing products.” She is careful to add: “Always wanting more data doesn’t really make much sense. It’s better to find a middle ground between gathering internal data, which is sometimes limited, and joining the race to accumulate information.”

To illustrate Brennus’ solution, Emilie Gariel gives the example of a key player in the distribution of office supplies. “This client was facing intense pressure from its competition, and felt it had not always positioned itself well in terms of pricing”, she explains. Its prices were set on the basis of a margin objective per product category. This approach was too generic and disconnected from the customer, which led to prices that were too high for popular products in this competitive market, and too low for products where customers’ price sensitivity was weaker. “The software allowed an optimization of prices which had a strong impact on the margin, by integrating a dynamic segmentation of products and flexibility in pricing”, she concludes.

The capacity to clarify and subsequently resolve complex problems is likely Brennus’ greatest strength. “Without an intelligent tool like ours, businesses are forced to simplify the problem excessively. They consider fewer factors, simply basing prices on segments and other limited contexts. Their pricing is therefore often sub-optimal. Artificial intelligence, on the other hand, is able to work with thousands of parameters at the same time”, explains Emilie Gariel. The solution offers businesses several ways to increase their profitability by working on the different components of pricing (costs, reductions, promotions, etc.). In this way, it perfectly illustrates the potential of artificial intelligence to improve decision-making processes and profitability in businesses.

 

supply chain management, Matthieu Lauras

What is supply chain management?

Behind each part of your car, your phone or even the tomato on your plate, there’s an extensive network of contributors. Every day, billions of products circulate. The management of a logistics chain – or “supply chain management” – organizes these movements on a smaller or larger scale. Matthieu Lauras, a researcher in industrial engineering at IMT Mines Albi, explains what it’s all about, the problems associated with supply chain management, and their solutions.

 

What is a supply chain?

Matthieu Lauras: A supply chain consists of a network of facilities (factories, shops, warehouses, etc.) and partners, ranging from the suppliers’ suppliers to the customers’ customers. It is the succession of all these participants that provides added value and allows a finished consumer product or service to be created and delivered to the end customer.

In supply chain management, we focus on the flows of materials and information. The idea is to optimize the overall performance of the network: to be capable of delivering the right product to the right place at the right time, with the right level of quality and cost. I often say to my students that supply chain management is the science of compromise. You have to find a good balance between several constraints and issues. This is what allows you to remain competitive in a sustainable way.

 

What difficulties are produced by the supply chain?

ML: The biggest difficulty with supply chains occurs when they are not managed in a centralized way. In a single business, for example, the CEO can act as a mediator between two departments if there is a problem. However, at the scale of a supply chain, there are several businesses, each a separate legal entity, and no one person is able to act as the mediator. This means that participants have to get along, collaborate and coordinate.

This isn’t easy to do, since one of the characteristics of a supply chain is the lack of alignment between local optima and the global optimum. For example, I optimize my production by selling my product in packs of 6, because it is quicker, even though this isn’t necessarily what my customers need in order to sell the product on. They may prefer the product to be sold in packs of 10 rather than 6. What I gain by producing packs of 6 is then lost by the next participant, who has to transform my product. This is just one example of the type of problem we try to tackle through research into supply chain management.
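
A toy calculation in Python, with invented figures, makes this tension between local and global optima visible:

```python
# Illustrative numbers only: packing in 6s is cheapest for the producer, but the
# repacking cost it pushes downstream makes the chain as a whole worse off.
pack_options = {
    6:  {"producer_cost": 1.00, "repacking_cost_downstream": 0.40},
    10: {"producer_cost": 1.15, "repacking_cost_downstream": 0.00},
}

for size, costs in pack_options.items():
    chain_cost = costs["producer_cost"] + costs["repacking_cost_downstream"]
    print(f"pack of {size}: producer pays {costs['producer_cost']:.2f}, "
          f"whole chain pays {chain_cost:.2f}")
# The producer's local optimum (pack of 6) is not the chain's global optimum (pack of 10).
```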

 

What does supply chain management research consist in?

ML: Research in this field spans several levels. There is a lot of information available; the question is how to exploit it. We offer tools which can process this data so that it can be passed on to the people (production and logistics managers, operations directors, demand managers, distribution and transport directors, etc.) who are in a position to make decisions and take action.

An important element is that of uncertainty and variability. The majority of tools used in the supply chain were designed in the 1960s or 1970s. The problem is that they were invented at a time when the economy was relatively stable. A business knew that it would sell a certain volume of a certain product over the following five years. Today, we don’t really know what we’re going to sell in a year. Furthermore, we have no idea about the variations in demand that we will have to deal with, nor the new technological opportunities that may arise in the next six months. We are therefore obliged to ask what developments we can bring to the decision-making tools currently in use, in order to make them better adapted to this new environment.

In practice, research is based on three main stages: first, we design the mathematical models and the algorithms allowing us to find an optimal solution to a problem or to compare several potential solutions. Then we develop computing systems which are able to implement these. Finally, we conduct experiments with real data sets to assess the impact of innovations and suggested tools (the advantages and disadvantages).

Some tools in supply chain management are methodological, but the majority are computer-based. They generally consist of software such as enterprise resource planning (ERP) packages, which contain several general-purpose tools and can be used at the scale of a network, or alternatively APS (Advanced Planning and Scheduling) systems. Four elements are then developed through these tools: planning, collaboration, risk management and delay reduction. Amongst other things, these allow various scenarios to be simulated in order to optimize the performance of the supply chain.

 

What problems are these tools responding to?

ML: Let’s consider planning tools. In the supply chain for paracetamol, we’re talking about a product which needs to be immediately available. However, around 9 months pass between the moment the first component is supplied and the moment the product is actually manufactured. This means we have to anticipate potential demand several months in advance. Based on this forecast, it is possible to plan the supplies of materials needed for manufacturing, as well as the positioning of stock closer to or further from the customer.
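
By way of illustration, here is a standard textbook planning calculation in Python (the demand figures are invented, and this is not necessarily the model used by the researchers), showing how a long lead time and uncertain demand translate into stock that must be ordered months ahead:

```python
# Classic reorder-point / safety-stock calculation with invented figures.
import math

monthly_demand = 10000        # forecast units per month (invented)
demand_std = 2000             # standard deviation of monthly demand (invented)
lead_time_months = 9          # as in the paracetamol example
z = 1.65                      # safety factor for roughly a 95% service level

safety_stock = z * demand_std * math.sqrt(lead_time_months)
reorder_point = monthly_demand * lead_time_months + safety_stock
print(f"safety stock: {safety_stock:.0f} units")
print(f"reorder point: {reorder_point:.0f} units on hand or on order")
```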

In terms of collaboration, the objective is to avoid conflicts that could paralyze the chain. The tools therefore facilitate the exchange of information and joint decision-making. Take the example of Carrefour and Danone. The latter sets up a TV advertising campaign for its new yogurt range. If this campaign isn’t coordinated with the supermarket, to make sure the products are in the shops and there is sufficient space to feature them, Danone risks spending lots of money on advertising without being able to meet the demand it creates.

Another range of tools deals with delay reduction. A supply chain has a great deal of inertia. A change at the end of the chain (higher demand than expected, for example) will have an impact on all participants for anything from a few weeks to several months. This is the “bullwhip effect”. In order to limit it, it is in everyone’s best interest to have shorter chains that react more quickly to changes. Research is therefore looking to reduce waiting times, information transmission times and even transport times between two points.
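
A toy simulation in Python, with invented parameters, shows how this amplification builds up from tier to tier when each participant slightly over-reacts to changes in the orders it receives:

```python
# Minimal bullwhip-effect sketch: each tier orders what was just demanded, plus an
# extra push in the direction of the change (all parameters are invented).
def upstream_orders(incoming, overreaction=0.5):
    orders, previous = [], incoming[0]
    for demand in incoming:
        orders.append(max(0.0, demand + overreaction * (demand - previous)))
        previous = demand
    return orders

consumer_demand = [100.0] * 5 + [120.0] * 5 + [100.0] * 10   # a temporary 20% bump
tiers = ["retailer", "wholesaler", "manufacturer", "supplier"]

flow = consumer_demand
for tier in tiers:
    flow = upstream_orders(flow)
    print(f"{tier}: peak order = {max(flow):.0f}")
# The peak order grows at every tier even though end demand only rose by 20 units.
```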

Finally, today we cannot know exactly what demand will be in 6 months’ time. This is why we are working on the issue of risk sharing, or “contingency plans”, which allow us to limit the negative impact of risks. This can be implemented by calling upon several suppliers for any given component. If I then have a problem with one of them (a factory fire, liquidation, etc.), I retain my ability to function.
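
A quick back-of-the-envelope calculation in Python (invented reliability figures, and assuming suppliers fail independently, which is a strong assumption) shows why this kind of redundancy is a common contingency plan:

```python
# Probability of a total supply stoppage as a function of the number of suppliers.
single_supplier_uptime = 0.95   # invented figure

for n_suppliers in (1, 2, 3):
    p_total_disruption = (1 - single_supplier_uptime) ** n_suppliers  # independence assumed
    print(f"{n_suppliers} supplier(s): {p_total_disruption:.4%} chance of total stoppage")
# 1 supplier: 5%; 2 suppliers: 0.25%; 3 suppliers: 0.0125%.
```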

 

Are supply chain management techniques applied to any fields other than that of commercial chains?

ML: Supply chain management is now open to other applications, particularly in the service world, in hospitals and even in banks. The principal aim is still to provide a product or service to a client. In the case of a patient waiting for an operation, resources are needed as soon as they enter the operating theater. All the necessary staff need to be available, from the stretcher bearer who carries the patient to the surgeon who operates on them. It is therefore a question of synchronizing resources and logistics.

Of course, there are also constraints specific to this kind of environment. In humanitarian logistics, for example, the question of the customer does not arise in the same way as in commercial logistics. Indeed, the person benefiting from a service in a humanitarian supply chain is not the person who pays, as they would be in a commercial setting. However, there is still the need to manage the flow of resources in order to maximize the added value produced.