
Digital transition: the music industry reviews its progress

The music industry – the sector hit hardest by digitization – now seems to have completed the transformation that was initiated by digital technology. With the rise of streaming music, there has been a shift in the balance of power. Producers now look for revenue from sources other than record sales, and algorithms constitute the differentiating factor in the wide range of offers available to consumers.


Spotify, Deezer, Tidal, Google Music and Apple Music… Streaming is now the norm in music consumption. Music has never been so digitized, to the point of raising the question: has the music industry reached the end of its digital transformation? After innumerable attempts to launch new economic models in the 2000s, from per-track purchases on iTunes to the pay-what-you-want release of Radiohead’s album In Rainbows, streaming music directly online seems to have emerged as a lasting solution.

Marc Bourreau is an economist at Télécom ParisTech and runs the Innovation and Regulation Chair,[1] which is organizing a conference on September 28 on the topic: “Music – the end of the digital revolution?”. In his opinion, despite many artists’ complaints about the low level of royalties they receive from streaming plays, this is a durable model. Based on the principle of proportional payment for plays — Spotify pays 70% of its revenue to rights holders — streaming is now widely accepted by producers and consumers.

Nevertheless, the researcher sees room for the model to develop further. “The main problem is that the monthly subscriptions to these services represent an average annual investment that exceeds consumer spending before the streaming era,” explains Marc Bourreau. With entry-level subscriptions of around €10 per month, streaming platforms represent annual spending of €120 for subscribers – twice the average amount consumers used to spend each year on record purchases.

Sites like Deezer use a freemium model, in which non-subscribers can access the service in exchange for being exposed to ads. This allowed the researcher to confirm that the average music consumer will not subscribe to the premium offers proposed by these platforms: the investment is simply too high. To win over this target group, which constitutes a major economic opportunity, “one of the future challenges for streaming websites will be to offer lower rates,” Marc Bourreau explains.


Streaming platforms: where is the value?

However, rights holders may not agree with this strategy. “I think the platforms would like to lower their prices, but record companies also impose a certain price threshold, since they are dependent on the sales revenue generated by subscriptions,” explains Marc Bourreau. This constraint prevents any real price competition from taking place. The platforms must therefore find new differentiating features. But where?

In their offerings, perhaps? The researcher disagrees: “The catalog proposed by streaming sites is almost identical, except for a few details. This is not really a differentiating feature.” What about sound quality, then? The analogy with streaming video makes this seem plausible: Netflix charges a higher fee for a higher-quality image. But in reality, apart from a few purists, users pay little attention to sound quality. “In economics, we call this a revealed preference: we discover what people prefer by watching what they do,” explains Marc Bourreau.

In fact, the value lies in the algorithms. The primary differentiation is found in the services offered by the platforms: recommendations, user-friendly design, loading times… “To a large extent, the challenge is to help customers who are lost in the abundance of songs,” the economist explains. These algorithms allow listeners to find their way around the vast range of options.

And there is strong competition in this area, with acquisitions of start-ups and recruitment of talent in the field… Nothing is left to chance. In 2016 alone, Spotify acquired Cord Project, which develops a messaging application for audio sharing; Soundwave, which creates social interactions based on musical discoveries; and Crowdalbum, which allows photos taken during concerts to be collected and shared.

Research in signal processing is also of great interest to streaming platforms, for analyzing audio files, understanding what makes a song unique, and finding songs with the same profile to recommend to users.


New relationships between stakeholders in the music industry

One thing is clear – sales of physical albums cannot compete with the constantly expanding digital offering, and album sales are in constant decline. Performers and producers have had to adapt and find new agreements. Although artists’ income has historically been based less on album sales and more on proceeds from concerts, this was not the case for labels, which have now also come to rely on live events and even merchandising. “The change in consumption has led to the appearance of what are called ‘360° deals’, in which a record company collects the revenue from all of its clients’ activities and pays them a percentage,” explains Marc Bourreau.


Robbie Williams was one of the first music stars to sign a 360° deal in 2001, handing the revenue from his numerous business segments over to his record company. Credits: Matthias Muehlbradt.


Will these changes result in even less job security for artists? “Aside from superstars, you must realize that performers have a hard time making a living from their work,” the economist observes. He bases this view on a study carried out in 2014 with Adami,[2] which shows that with an equivalent level of qualification, an artist earns less than an executive counterpart — showing that music is not a particularly lucrative industry for performers.

Nonetheless, digital technology has not necessarily made things worse for artists. According to Marc Bourreau, “certain online tools now enable amateurs and professionals alike to produce their own work, using digital studios and turning to social networks to find mixers…” YouTube, Facebook and Twitter also offer good opportunities for self-promotion. “Fan collectives that operate through social media groups also play a major role in music distribution, especially for lesser-known artists,” he adds.

In 2014, 55% of artists owned or used a home studio for self-production. This number is growing: it was only 45% in 2008. The industry’s digital transition has therefore changed not only the means of music consumption, but also production methods. Although things seem to be stabilizing in this area as well, it is hard to say whether these major digital transformations in the industry are now behind us. “It is always hard to predict the future in economics,” Marc Bourreau admits with a smile. “I could tell you something, and then realize ten years from now that I was completely wrong.” No problem, let’s meet again in ten years for a new review!


[1] The Innovation and Regulation Chair includes Télécom ParisTech, Polytechnique and Orange. Its work is focused on studying intangible services and on the dynamics of innovation in the area of information and communication sciences.

[2] Adami: the French organization for the collective management of performers’ rights. It collects and distributes remuneration relating to intellectual property rights for artistic performances.


[divider style="solid" top="5" bottom="5"]

Crowdfunding – more than just a financial tool

As a symbol of the new digital economy, crowdfunding enables artists to fund their musical projects with the help of the public. Yet crowdfunding platforms have another advantage: they double as market research. Jordana Viotto da Cruz, a PhD student at Télécom ParisTech and Paris 13 University under the joint supervision of Marc Bourreau and François Moreau, has observed in her ongoing thesis work that project creators use these tools to obtain information about potential album sales. Based on an econometric study, she showed that for projects that failed to meet the funding threshold, higher pledges had a positive effect on the likelihood of the albums being marketed on platforms such as Amazon in the following months.

[divider style="solid" top="5" bottom="5"]


Artificial Intelligence: learning driven by children’s curiosity

At Télécom Bretagne, Mai Nguyen is drawing on the way children learn in order to develop a new form of artificial intelligence. She is hoping to develop robots capable of adapting to their environment by imitating the curiosity humans have at the start of their lives.


During the first years of life, humans develop at an extremely fast pace. “Between zero and six years of age, a child learns to talk, walk, draw and communicate…” explains Mai Nguyen, a researcher at Télécom Bretagne. This scientist is trying to better understand this rapid development, and reproduce it using algorithms. From this position at the interface between cognitive science and programming, Mai Nguyen is seeking to give robots a new type of artificial intelligence.

During the earliest stages of development, children do not go to school, and their parents are not constantly sharing their knowledge with them. According to the researcher, while learning does occur sporadically in an instructive manner — through the vertical flow of information from educator to learner — it primarily takes place as children explore their environment, driven by their own curiosity. “The work of psychologists has shown that children themselves choose the activities through which they increase their capacities the most, often through games, by trying a wide variety of things.”


A mix of all learning techniques

This method of acquiring skills could be seen as progressing through trial and error. “Trial and error situations usually involve a technique that is adopted in order to acquire a specific skill requested by a third party,” she explains. However, Mai Nguyen points out that learning through curiosity goes beyond this situation, enabling the acquisition of several skills, with the child learning without specific instructions.

In cognitive sciences and robotics, the technical name for this curiosity is intrinsic motivation. Mai Nguyen draws on this motivation to program robots capable of independently deciding how they should acquire a set of skills. Thanks to the researcher’s algorithms, “the robot chooses what it must learn, and decides how to do it,” she explains. It will therefore be able to identify, on its own, an appropriate human or machine contact from whom it can seek advice.

Likewise, it will decide on its own if it should acquire a skill through trial and error, or if it would be better to learn from a knowledgeable human. “Learning by intrinsic motivation is in fact the catalyst for existing methods,” explains Mai Nguyen.
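The researcher’s actual algorithms are more elaborate, but the core idea of intrinsic motivation can be sketched in a few lines of Python: the learner monitors its own recent learning progress for each task-and-strategy pair and prefers the option where its competence is improving fastest. Everything below (the class and method names, the error-based progress measure, the epsilon parameter) is illustrative, not taken from any published implementation.

```python
import random

class CuriosityLearner:
    """Toy sketch of intrinsic motivation: the learner tracks its recent
    learning progress per (task, strategy) pair and prefers the option
    where its competence is improving fastest."""
    def __init__(self, options, window=5, epsilon=0.1):
        self.errors = {opt: [] for opt in options}  # error history per option
        self.window = window
        self.epsilon = epsilon  # keep a little random exploration

    def progress(self, opt):
        h = self.errors[opt][-self.window:]
        if len(h) < 2:
            return float("inf")   # untried options look maximally interesting
        return h[0] - h[-1]       # drop in error = learning progress

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.errors))
        return max(self.errors, key=self.progress)

    def record(self, opt, error):
        self.errors[opt].append(error)

learner = CuriosityLearner([("grasp", "trial_and_error"),
                            ("grasp", "imitate_human")])
learner.record(("grasp", "imitate_human"), 0.9)
learner.record(("grasp", "imitate_human"), 0.4)    # fast progress
learner.record(("grasp", "trial_and_error"), 0.8)
learner.record(("grasp", "trial_and_error"), 0.79)  # slow progress
learner.epsilon = 0.0
print(learner.choose())   # ('grasp', 'imitate_human')
```

Here the robot picks imitation over trial and error simply because that is where its error is dropping fastest; as imitation stops paying off, its measured progress falls and another strategy becomes the most "interesting" one.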


Robots better adapted to their environment

There are several advantages to copying a child’s progress in early development and applying the same knowledge and skill acquisition mechanisms to robots. “The idea is to program an object that is constantly learning by adapting to its environment,” explains the researcher. This approach is a departure from the conventional approach in which the robot leaves the factory in a completed state, with defined skills that will remain unchanged for its entire existence.

In Mai Nguyen’s view, this second approach has many limitations, particularly the variability of the environment: “The robot can learn, with supervision, to recognize a table and chairs, but in a real home, these objects are constantly being moved and deteriorating… How can we ensure it will be able to identify them without making mistakes?” Learning through intrinsic motivation, however, enables the robot to adapt to an unknown situation, and to customize its knowledge based on the environment.

The variability is not only spatial; it is also temporal. A human user’s demands on a robot are not the same as they were ten years ago, and there is no reason to believe they will be the same ten years from now. An adaptive robot therefore has a longer useful life, in the face of changes in human society, than a pre-defined object.



Learning through curiosity allows Mai Nguyen and her colleagues to develop robots capable of learning tasks in a hierarchical fashion.

Data sana in robot sano

It seems difficult for supervised machine learning to compete with artificial intelligence driven by intrinsic motivation. Mai Nguyen reports on recent experiments involving the replacement of faulty robots developed with machine learning: “When an initial robot ceased to be operational, we took all of its data and transferred it to an exact copy.” This resulted in a second robot that did not work well either.

This phenomenon can be explained by the embodiment of robots trained through machine learning: each body is tied to the data acquired during its conditioning procedure. This is a problem that “curious” robots do not face, since they acquire data in an intelligent manner, collecting data that is customized to their body and environment while limiting the data’s volume and acquisition time.


When will curious robots become a part of our daily lives?

Artificial intelligence that adapts to environments to this extent is especially promising in its potential for home care services and improving the quality of life. Mai Nguyen and her colleagues at Télécom Bretagne are working on many of these applications. Such robots could become precious helpers for the elderly and for disabled individuals. Their development is still relatively recent from a scientific and technological point of view. Although the first psychological theories on intrinsic motivation date back to the 1960s, their transposition to robots has only begun to take place over the past fifteen years.

The scientific community working on the issue has already obtained conclusive results. By having robots with artificial intelligence interact, scientists observed that they were able to develop common languages. While the vocabulary of these languages differed in each trial, the robots always converged on situations in which the artificial intelligences could communicate with one another after starting from scratch. It is a little like imagining humans from different cultures and languages ending up together on a desert island.


[box type="shadow" align="" class="" width=""]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published in June 2016, is dedicated to artificial intelligence (AI). It presents an overview of the research underway in this area throughout the world, as well as the vast body of work in progress at Institut Mines-Télécom schools. This 27-page brochure defines intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions its emerging potential, and examines how it can be used by humans.




When information science assists artificial intelligence

The brain, information science, and artificial intelligence: Vincent Gripon is focusing his research at Télécom Bretagne on these three areas. By developing models that explain how our cortex stores information, he intends to inspire new methods of unsupervised learning. On October 4, he will be presenting his research on the renewal of artificial intelligence at a conference organized by the French Academy of Sciences. Here, he gives us a brief overview of his theory and its potential applications.


Since many of the developments in artificial intelligence are based on progress made in statistical learning, what can be gained from studying memory and how information is stored in the brain?

Vincent Gripon: There are two approaches to artificial intelligence. The first approach is based on mathematics. It involves formalizing a problem from the perspective of probabilities and statistics, and describing the objects or behaviors to be understood as random variables. The second approach is bio-inspired. It involves copying the brain, based on the principle that it represents the only example of intelligence that can be used for inspiration. The brain is not only capable of making calculations, it is above all able to index, research, and compare information. There is no doubt that these abilities play a fundamental role in every cognitive task, including the most basic.


In light of your research, do you believe that the second approach has greater chances of producing results?

VG: After the Dartmouth conference of 1956 – the moment at which artificial intelligence emerged as a research field – the emphasis was placed on the mathematical approach. Sixty years later, the results are mixed, and many problems that a 10-year-old child could solve cannot be solved by machines. This partly explains the period of stagnation the discipline experienced in the 1990s. The revival of the past few years can be explained by renewed interest in artificial neural networks, driven in particular by the rapid increase in available computing power. The pragmatic response is to favor these neuro-inspired methods, in light of their outstanding performance, which comes close to, and sometimes even surpasses, that of the human cortex.


How can information theory – usually more focused on error correcting codes in telecommunications than neurosciences – help you imitate an intelligent machine?

VG: When you think about it, the brain is capable of storing information, sometimes for several years, despite the constant difficulties it must face: the loss of neurons and connections, extremely noisy communication, etc. Digital systems face very similar problems, due to the miniaturization and multiplication of components. Information theory proposes a paradigm for addressing the problems of information storage and transfer that applies to any system, biological or otherwise. The concept of information is also indispensable to any form of intelligence, alongside calculation, which very often receives greater attention.


What model do you base your work on?

VG: As an information theorist, I start from the premise that robust information is redundant information. By applying this principle to the brain, we see that information is stored by several neurons, or several micro-columns, which are clusters of neurons, following the model of distributed error-correcting codes. One model that offers outstanding robustness is the clique model. A clique is made up of several — at least four — micro-columns, which are all interconnected. The advantage of this model is that even when one connection is lost, they can all still communicate. This is a distinctive redundancy property. Also, two micro-columns can be part of several cliques. Therefore, every connection supports several items of information, and every item of information is supported by several connections. This dual property ensures the mechanism’s robustness and its great diversity of storage.
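Purely as an illustration (the names and structure below are invented for this sketch, not taken from the actual neural clique models), the storage principle can be mimicked in a few lines of Python: each item is stored as a fully interconnected set of units, edges can be shared between items, and a missing unit can be recovered from the surviving connections.

```python
import itertools

class CliqueMemory:
    """Minimal sketch of clique-based storage: each item is stored as a
    fully interconnected set of units, and edges are shared across items."""
    def __init__(self):
        self.edges = set()

    def store(self, units):
        # store the item as a clique: every pair of its units is connected
        for a, b in itertools.combinations(sorted(units), 2):
            self.edges.add((a, b))

    def recall(self, partial, candidates):
        # pick the candidate unit most connected to the known units
        def score(c):
            return sum(1 for u in partial
                       if tuple(sorted((u, c))) in self.edges)
        return max(candidates, key=score)

mem = CliqueMemory()
mem.store({"A1", "B2", "C3", "D4"})   # one item = one 4-unit clique
mem.store({"A1", "B5", "C6", "D7"})   # unit A1 is shared by both cliques
# recover the missing unit of the first item from three known units
print(mem.recall({"A1", "B2", "C3"}, ["D4", "D7"]))  # D4
```

Even if one of the three surviving edges to "D4" were deleted, the majority of connections would still point to the right unit, which is the redundancy property described above.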

“Robust information is redundant”

How has this theory been received by the community of brain specialists?

VG: It is very difficult to build bridges between the medical field and the mathematical or computer science field. We do not have the same vocabulary and intentions. For example, it is not easy to get our neuroscientist colleagues to admit that information can be stored in the very structure of the connections of the neural network, and not in its mass. The best way of communicating remains biomedical imaging, in which the models are confronted with reality, which can facilitate their interpretation.


Do the observations made via imaging offer hope regarding the validity of your theories?

VG: We work specifically with the laboratory for signal and image processing (LTSI) of Université de Rennes I to validate our models. To put them to the test, the cerebral activation of subjects performing cognitive tasks is observed using electroencephalography. The goal is not to directly validate our theories, since the required spatial resolution is not currently attainable, but rather to verify the macroscopic properties they predict. For example, one consequence of the neural clique model is that a positive correlation exists between the topological distance between the representations of objects in the neocortex and the semantic distance between the same objects. Specifically, the representations of two similar objects will share many of the same micro-columns. This has been confirmed through imaging. Of course, this does not validate the theory, but it can lead us to modify it or carry out new experiments to test it.


To what extent is your model used in artificial intelligence?

VG: We focus on problems related to machine learning — typically on object recognition. Our model offers good results, especially in the area of unsupervised learning, in other words, without a human expert helping the algorithm to learn by indicating what it must find. Since we focus on memory, we target different applications than those usually targeted in this field. For example, we look closely at an artificial system’s capacity to learn with few examples, to learn many different objects or learn in real-time, discovering new objects at different times. Therefore, our approach is complementary, rather than being in competition with other learning methods.


The latest developments in artificial intelligence are now focused on artificial neural networks — referred to as “deep learning”. How does your position complement this learning method?

VG: Today, deep neural networks achieve outstanding levels of performance, as highlighted in a growing number of scientific publications. However, their performance remains limited in certain contexts. For example, these networks require an enormous amount of data in order to be effective. Also, once a neural network is trained, it is very difficult to get the network to take new parameters into account. Thanks to our model, we can enable incremental learning to take place: if a new type of object appears, we can teach it to a network that has already been trained. In summary, our model is not as good for calculations and classification, but better for memorization. A clique network is therefore perfectly compatible with a deep neural network.


[box type="shadow" align="" class="" width=""]

Artificial intelligence at the French Academy of Sciences

Because artificial intelligence now combines approaches from several scientific fields, the French Academy of Sciences is organizing a conference on October 4 entitled “The Revival of Artificial Intelligence”. Through presentations by four researchers, this event will present the different facets of the discipline. In addition to the presentation by Vincent Gripon, which combines information theory and neuroscience, the conference will feature presentations by Andrew Blake (Alan Turing Institute) on learning machines, Yann LeCun (Facebook, New York University) on deep learning, and Karlheinz Meier (Heidelberg University) on brain-derived computer architecture.

Practical information:
Conference on “The Revival of Artificial Intelligence”
October 4, 2016 at 2:00pm
Large meeting room at Institut de France
23, quai de Conti, 75006 Paris



From the vestiges of the past to the narrative: reclaiming time to remember together

The 21st century is marked by a profound change in our relationship with time, now perceived solely in terms of acceleration, speed, change and urgency. At Mines Nantes, the sociologist Sophie Bretesché has positioned herself at the interface between the past and the future, where memory and oblivion color our view of the present. In contexts undergoing change, such as regions and organizations, she examines vestiges, the remnants of the past, to offer a different perspective on the dialectic of oblivion and memory. She analyzes the essential role of collective memory and shared narrative in preserving identity in situations of organizational change and technological transformation.


A modern society marked by fleeting time

Procrastination Day, Slowness Day, getting reacquainted with boredom… many attempts have been made to try to slow down time and stop it slipping through our fingers. They convey a relationship with time that has been shattered, from the simple rhythm of Nature to that marked by the clock of the industrial era, now a combination of acceleration, motion and real time. This transformation is indicative of how modern society functions, where “that which is moving has substituted past experience, the flexible is replacing the established, the transgressive is ousting the transmitted”, observes Sophie Bretesché.

The sociologist starts from a simple observation: the loss of time. What dynamics are at work in this phenomenon, which corresponds to an acceleration and compression of work, family and leisure time, those units of time that reflect our social practices?

One reason frequently cited for this racing of time is the pervasive presence of new technologies in our lives, combined with the frenetic demand for ever-higher productivity. However, this explanation is lacking, as it confuses correlation and causality. The reality is that of a sum of constant technological and managerial changes which “prevents the consolidation of knowledge and experience-building as a sum of knowledge”, explains the researcher, who continues: “Surrounded, in the same unit of time, by components with distinct temporal rhythms and frameworks, the subject is both cut off from his/her past and dispossessed of any ability to conceive the future.”

To understand what is emerging and observed implicitly in reality, an unprecedented relationship with time-history and with our memory, the sociologist has adopted the theory that it is not so much acceleration that is posing a problem, but the ability of a society to remember and to forget. “Placing the focus on memory and oblivion”, accepting that “the present remains inhabited by vestiges of our past”, and grasping that “the processes of change produce material vestiges of the past which challenge the constants of the past”, are thus part of a process to regain control of time.


Study in search of vestiges of the past

“This fleeting time is observed clearly in organizations and regions undergoing changes”, notes Sophie Bretesché, who took three such fields of study as a starting point in her search for evidence. Starting from “that which grates, resists, arouses controversy, fields in which nobody can forget, or remember together”, she searches for vestiges, the material and tangible signs of an intersection between the past and the future. First, she meets executives faced with information flows that are impossible to regulate, then an organization whose management structure has changed three times in ten years. The sociologist conducts interviews with the protagonists and provides a clearer understanding of the executives’ activities, demonstrating that professions have nonetheless continued to exist, following alternative paths. A third study, conducted among residents in the vicinity of a former uranium mine, leads her to meet witnesses of a bygone era. Mining waste formerly reused by local residents is now condemned for its inherent risks. These vestiges of the past are also those of a modern era in which risk is the subject of harsh criticism.

In all three cases, the sociological study draws on the stories of those involved, collected over long periods. While a sociologist usually presents study results in the form of a cohesive narrative based on theories and interpretations, a specialist in social change does not piece together a story retrospectively, but analyzes movement and how humans in society construct their temporalities; the sociological narrative becomes a structure for temporal mediation.

These different field studies demonstrate that it is necessary to “regain time for the sake of time”. This is a social issue: “to gain meaning, reclaim knowledge and give it meaning based on past experience.” Another result is emerging: behind the outwardly visible movement of repeated changes lie constants that tend to be forgotten, namely forms of organization. In addition, resistance to change, now stigmatized, could after all have positive virtues, as it expresses a deeply rooted culture, based on a collective identity that it would be a shame to deny ourselves.


A narrative built upon a rightful collective memory

This research led to Sophie Bretesché taking the helm at Mines Nantes of the “Emerging risks and technologies: from technological management to social regulation” Chair, set up in early 2016. Drawing on ten years of research between the social science and management department and the physics and chemistry laboratories at Mines Nantes, this chair focuses on forms of risk regulation in the energy, environmental and digital sectors. The approach is original in that these questions are no longer considered from a scientific perspective alone, since these problems manifestly affect society.

The social acceptability of the atom in various regions demonstrates, for example, that the cultural relationship with risk cannot be standardized universally. While, in Western France, former uranium mines have been rehabilitated under lower-intensity industrial or agricultural management, they have been subject to moratoriums in the Limousin region, where the sites are now closed off. These lessons on the relationship with risk are compiled with a long-term view. In this instance, the initial land-ownership structures offer explanations, bringing to light different stories that need to be pieced together in the form of narratives.

Indeed, in isolation, the vestiges of the past recorded during the studies do not yet form shared memories. They are merely individual perceptions, fragile for lack of transfer to the collective. “We remember because those around us help us”, the researcher reminds us, continuing: “the narrative is heralded as the search for the rightful memory”. In a future full of uncertainty, in “a liquid society diluted in permanent motion”, the necessary construction of collective narratives – and not storytelling – allows us to look to the future.

The researcher, who enjoys being at the interfaces of different worlds, takes delight in the moment when the vestiges of the past gradually make way for the narrative, where the threads of sometimes discordant stories start to become meaningful. The resulting embodied narrative is the counterpoint created from the tunes collected from the material vestiges of the past: “Accord is finally reached on a shared story”, in a way offering a new shared commodity.

With a longstanding interest in central narratives of the past, Sophie Bretesché expresses one wish: to convey and share these multiple experiences, times and tools for understanding, these histories of changes in progress, in a variety of forms such as the web documentary or the novel.


Sophie Bretesché is a Research Professor of Sociology at Mines Nantes. Head of the regional chair in “Risks, emerging technologies and regulation”, she is co-director of the NEEDS (Nuclear, Energy, Environment, Waste, Society) program and coordinates the social science section of the CNRS “Uranium Mining Regions” workshop. Her research encompasses time and technologies, memory and change, professional identities and career paths. The author of some 50 publications in her field, co-editor of two volumes, “Fragiles compétences” and “Le nucléaire au prisme du temps”, and author of “Le changement au défi de la mémoire”, published by Presses des Mines, she also teaches at the Paris Institute of Political Studies in two Executive Master’s programs (Leading Change and Leadership Pathways).



Turbo codes, Claude Berrou, Quèsaco, IMT Atlantique

What are turbo codes?

Turbo codes form the basis of mobile communications in 3G and 4G networks. Invented in 1991 by Claude Berrou, and published in 1993 with Alain Glavieux and Punya Thitimajshima, they have now become a reference point in the field of information and communication technologies. As Télécom Bretagne, birthplace of these “error-correcting codes”, prepares to host the 9th international symposium on turbo codes, let’s take a closer look at how these codes work and the important role they play in our daily lives.


What do error-correcting codes do?

In order for communication to take place, three things are needed: a sender, a receiver, and a channel. The most common example is that of a person who speaks, sending a signal to someone who is listening, with the air acting as the channel that conveys the vibrations forming the sound wave. Yet problems can quickly arise in this communication if other people are talking nearby – in other words, making noise.

To compensate for this difficulty, the speaker may decide to yell the message. But the speaker could also avoid shouting, by adding a number after each letter in the message, corresponding to the letter’s place in the alphabet. The listener receiving the information will then have redundant information for each part of the message — in this case, double the information. If noise alters the way a letter is transmitted, the number can help to identify it.
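The alphabet-position scheme described above can be sketched in a few lines of code. The pairing format is purely illustrative, not an actual transmission protocol:

```python
def encode(message):
    """Pair each letter with its position in the alphabet (a redundant copy)."""
    return [(ch, ord(ch) - ord('a') + 1) for ch in message]

def decode(pairs):
    """Recover each letter; if a letter was garbled, the number restores it."""
    out = []
    for letter, pos in pairs:
        expected = chr(ord('a') + pos - 1)
        out.append(letter if letter == expected else expected)
    return ''.join(out)

sent = encode("hello")   # [('h', 8), ('e', 5), ('l', 12), ('l', 12), ('o', 15)]
sent[1] = ('x', 5)       # noise corrupts the second letter in transit
print(decode(sent))      # the redundant number restores the original word
```

Note that this doubles the amount of information sent for each letter; that cost, in bandwidth and energy, is exactly what more sophisticated codes are designed to reduce.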

And what role do turbo codes play in this?

In the digital communications sector, there are several error-correcting codes, with varying levels of complexity. Typically, repeating the same message several times in binary code is a relatively safe bet, yet it is extremely costly in terms of bandwidth and energy consumption.

Turbo codes are a much more developed way of integrating information redundancy. They are based on the transmission of the initial message in three copies. The first copy is the raw, non-encoded information. The second is modified by encoding each bit of information using an algorithm shared by the coder and decoder. Finally, another version of the message is also encoded, but after modification (specifically, a permutation). In this third case, it is no longer the original message that is encoded and then sent, but rather a transformed version. These three versions are then decoded and compared in order to find the original message.
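The three-stream structure can be sketched as follows. This is an illustration only: the running-XOR "encoder" and the fixed permutation are deliberately simplistic stand-ins for the recursive convolutional encoders and interleaver used in real turbo codes:

```python
def parity_stream(bits):
    """Toy stand-in for a convolutional encoder: running XOR of the input."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def turbo_encode(bits, perm):
    """Emit three streams: the raw bits, encoded bits, and encoded permuted bits."""
    systematic = list(bits)                 # copy 1: raw, non-encoded message
    parity1 = parity_stream(bits)           # copy 2: encoded message
    interleaved = [bits[i] for i in perm]   # permute before encoding...
    parity2 = parity_stream(interleaved)    # copy 3: encoded permuted message
    return systematic, parity1, parity2

msg = [1, 0, 1, 1, 0]
perm = [3, 0, 4, 1, 2]   # fixed permutation shared by coder and decoder
print(turbo_encode(msg, perm))
```

The decoder side, omitted here, is where the "turbo" effect comes from: the two parity streams are decoded iteratively, each pass feeding its conclusions to the other until the three versions agree on the original message.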

Where are turbo codes used?

In addition to being used to encode all our data on 3G and 4G networks, turbo codes are found in many other fields. NASA uses them to communicate with the space probes it has built since 2003. The space community, which must contend with many constraints on its communication processes, is particularly fond of these codes; ESA also uses them for many of its probes. More generally, turbo codes represent a safe and efficient encoding technique for most communication technologies.

Claude Berrou, inventor of turbo codes

How have turbo codes become so successful?

In 1948, American engineer and mathematician Claude Shannon proposed a theorem stating that codes always exist that are capable of minimizing channel-related transmission errors, up to a certain level of disturbance. In other words, Shannon asserted that, despite the noise in a channel, the transmitter will always be able to transmit an item of information to the receiver, almost error-free, when using efficient codes.
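For reference, Shannon's result for a noisy channel of bandwidth B and signal-to-noise ratio S/N gives the channel capacity

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

and states that for any transmission rate R below C, there exist codes making the error probability arbitrarily small. The theorem, however, does not say how to construct such codes – which is what made finding practical codes approaching this limit a decades-long challenge.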

The turbo codes developed by Claude Berrou in 1991 meet these requirements: they come close to the theoretical limit, transmitting information with an error rate close to zero. They therefore represent highly efficient error-correcting codes. His experimental results, which validated Shannon's theory, earned Claude Berrou the Marconi Prize in 2005 – the highest scientific distinction in the field of communication sciences. His research also earned him permanent membership in the French Academy of Sciences.


[box type=”info” align=”” class=”” width=””]

Did you know?

The international alphabet (or NATO phonetic alphabet) is an error-correcting code. Every letter is in fact encoded as a word beginning with that letter: 'N' and 'M' thus become 'November' and 'Mike'. This technique prevents much confusion, particularly in radio communications, which often involve noise.[/box]
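The decoding rule is trivial, which is what makes the phonetic alphabet practical. A minimal sketch (only a few letters of the full 26-entry table are shown):

```python
# Excerpt of the NATO phonetic alphabet (the real table covers all 26 letters)
NATO = {'E': 'Echo', 'M': 'Mike', 'N': 'November', 'O': 'Oscar'}

def spell(word):
    """Encode each letter as its NATO code word (the redundant form)."""
    return [NATO[ch] for ch in word.upper()]

def read_back(words):
    """Decode: the first letter of each code word spells out the message."""
    return ''.join(w[0] for w in words)

print(spell("neo"))             # ['November', 'Echo', 'Oscar']
print(read_back(spell("neo")))  # NEO
```

Even if noise garbles part of a code word, the remaining syllables usually suffice to identify it, which is precisely the redundancy at work.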


Artificial Intelligence: the complex question of ethics

The development of artificial intelligence raises many societal issues. How do we define ethics in this area? Armen Khatchatourov, a philosopher at Télécom École de Management and member of the IMT chair “Values and Policies of Personal Information”, observes and carefully analyzes the proposed answers to this question. One of his main concerns is the attempt to standardize ethics using legislative frameworks.

In the frantic race for artificial intelligence, driven by GAFA[1], with its increasingly efficient algorithms and ever-faster automated decisions, engineering is king, supporting this highly-prized form of innovation. So, does philosophy still have a role to play in this technological world that places progress at the heart of every objective? Perhaps that of a critical observer. Armen Khatchatourov, a researcher in philosophy at Télécom École de Management, describes his own approach as acting as “an observer who needs to keep his distance from the general hype over anything new”. Over the past several years he has worked on human-computer interactions and the issues of artificial intelligence (AI), examining the potentially negative effects of automation.

In particular, he analyses the problematic issues arising from the legal frameworks established to govern AI. His focus is on "ethics by design", a movement that involves taking ethical aspects into account from the design stage of algorithms, or of smart machines in general. Although this approach initially seems to reflect the importance that manufacturers and developers attach to ethics, according to the researcher, "this approach can paradoxically be detrimental."


Ethics by design: the same limitations as privacy by design?

To illustrate his thinking, Armen Khatchatourov uses the example of a similar concept – the protection and privacy of personal information: just like ethics, this subject raises the issue of how we treat other people. "Privacy by design" appeared near the end of the 1990s, in reaction to the legal challenges of regulating digital technology. It was presented as a comprehensive approach to integrating the issues of personal information protection and privacy into product development and operational processes. "The main problem is that today, privacy by design has been reduced to a legal text," regrets the philosopher, referring to the General Data Protection Regulation (GDPR) adopted by the European Parliament. "And the reflections on ethics are heading in the same direction," he adds.


There is the risk of losing our ability to think critically.


In his view, the main negative aspect of this type of standardized regulation, implemented via a legal text, is that it removes the stakeholders' sense of responsibility. "On the one hand, there is the risk that engineers and designers will be content simply to comply with the text," he explains. "On the other hand, consumers will no longer think about what they are doing, and will simply trust the labels attributed by regulators." Behind this standardization, "there is the risk of losing our ability to think critically." And he concludes by asking: "Do we really think about what we're doing every day on the Web, or are we simply guided by the growing tendency toward normativeness?"

The same threat exists for ethics. The mere fact of formalizing it in a legal text would work against the reflexivity it promotes. "This would bring ethical reflection to a halt," warns Armen Khatchatourov. He expands on this point by referring to the work of artificial intelligence developers. There always comes a moment when the engineer must translate ethics into a mathematical formula to be used in an algorithm. In practical terms, this can take the form of an ethical decision based on a structured representation of knowledge (an ontology, in computer-science terms). "But things truly become problematic if we reduce ethics to a logical problem!" the philosopher emphasizes. "For a military drone, for example, this would mean defining a threshold for civilian deaths below which the decision to fire is acceptable. Is this what we want? There is no ontology for ethics, and we should not take that direction."

And military drones are not the only area involved. The development of autonomous, or driverless, cars involves many questions regarding how a decision should be made. Often, ethical reflections pose dilemmas. The archetypal example is that of a car heading for a wall that it can only avoid by running over a group of pedestrians. Should it sacrifice its passenger’s life, or save the passenger at the expense of the pedestrians’ lives? There are many different arguments. A pragmatic thinker would focus on the number of lives. Others would want the car to save the driver no matter what. The Massachusetts Institute of Technology (MIT) has therefore developed a digital tool – the Moral Machine – which presents many different practical cases and gives choices to Internet users. The results vary significantly according to the individual. This shows that, in the case of autonomous cars, it is impossible to establish universal ethical rules.


Ethics are not a product

Continuing the analogy between ethics and data protection, Armen Khatchatourov raises another point, based on the reflections of Bruce Schneier, a specialist in computer security, who describes computer security as a process, not a product. Consequently, it cannot be completely guaranteed by a one-off technical approach or by a legislative text, since both are only valid at a given point in time. Although updates are possible, they often take time and are therefore out of step with current problems. "The lesson we can learn from computer security is that we cannot trust a ready-made solution, and that we need to think in terms of processes and attitudes to be adopted. If we make the comparison, the same can be said of the ethical issues raised by AI," the philosopher points out.

This is why it is advantageous to think about the framework for processes like privacy and ethics in a different context than the legal one. Yet Armen Khatchatourov recognizes the need for these legal aspects: “A regulatory text is definitely not the solution for everything, but it is even more problematic if no legislative debate exists, since the debate reveals a collective awareness of the issue.” This clearly shows the complexity of a problem to which no one has yet found a solution.


[1] GAFA: an acronym for Google, Apple, Facebook, and Amazon.

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published in June 2016 (in French), is dedicated to artificial intelligence (AI). It provides an overview of the research taking place in this area throughout the world, and presents the vast body of research underway at Institut Mines-Télécom schools. In 27 pages, this brochure defines intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions its emerging potential, and looks at how it can be used by humans.[/box]


Société Générale, Cybersecurity chair

The Cybersecurity of Critical Infrastructures Chair welcomes new partner Société Générale

The Institut Mines-Télécom Cybersecurity Chair, launched last year on January 25 as part of the momentum created by the Center of Cyber Excellence, aims to contribute to the international development of research activities and training opportunities in an area that has become a national priority: the cybersecurity of critical infrastructures (energy networks, industrial processes, water production plants, financial systems, etc.). Work is in full swing, with the addition of a new partner company, Société Générale, and the launch of 3 new research topics.


Société Générale Group: an 8th partner joins the Chair

Nicolas Bourot, Chief Information Security Officer (CISO) and Operational Risk Manager (ORM) for the Group's infrastructures, explains: "We are all affected; a lot is at stake. In joining this Chair, Société Générale Group seeks to ensure it will have the necessary means to support the digital transformation." In the words of Paul-André Pincemin, General Coordinator of the Center of Cyber Excellence, "This Chair is truly a task force, bringing together partners from both the academic and industrial sectors."

The Chair is led by Télécom Bretagne, in collaboration with Télécom ParisTech and Télécom SudParis, the Region of Brittany, as part of the Center of Cyber Excellence, and 8 companies: Airbus Defence and Space, Amossys, BNP Paribas, EDF, La Poste, Nokia, Orange and now Société Générale Group.


Launch of 3 research topics

Simultaneously with the arrival of the new partner, three research topics have been launched. The objective of the first is to develop the capacity to analyze system malfunctions in order to determine whether they are accidental or the result of malicious intent. This analytic capacity is crucial for helping operators provide an adapted response, particularly when the malfunction is the result of a simultaneous or coordinated attack. The digitization of industrial control systems, and their ever-increasing connection with cyberspace, does indeed create new vulnerabilities, as evidenced in recent years by several incidents, some of which were even capable of destroying production equipment.

The second topic concerns the development of methods and decision-making tools for securing vitally important systems. The great heterogeneity of components, together with constraints that are technical (time constraints, communication rates, etc.), topological (physical access to plants, geographic distribution across networks, etc.) and organizational (multiple players, regulations, etc.) in nature, prevents traditional security approaches from being applied directly. The objective of this research is to provide vitally important operators with a methodology and associated tools that simplify decision-making, both when defining security policies and when responding to incidents.

Finally, the Chair will also begin work on the co-simulation of cyber-attacks on networked control systems. This third topic involves improving the resilience of critical infrastructures, i.e. their capacity to continue operating, possibly in downgraded mode, when affected by cyber-attacks. The research focuses specifically on networked control systems, which connect the different components of a traditional control system, such as controllers, sensors and actuators. The objective is to advance developments in this area by offering innovative solutions that reconcile requirements for safety, operational security and continuity of service.

[divider style=”solid” top=”5″ bottom=”5″]

In the coming months…

Chair partners will contribute to major upcoming gatherings:

– CRiSIS, the 11th edition of the Conference on risks and the security of information systems, taking place next September 5-9 in Roscoff

– RAID, the 19th edition of the Conference on research on attacks, intrusions and defense, taking place September 19-21 in Evry (Télécom SudParis):

– Cybersecurity event, Les Assises de la cybersécurité, taking place October 5-8 in Monaco

– European Cyber Week, November 21-25 in Rennes

[divider style=”solid” top=”5″ bottom=”5″]

Ontologies: powerful decision-making support tools

Searching for, extracting, analyzing, and sharing information in order to make the right decision requires great skill. For machines to provide human operators with valuable assistance in these highly-cognitive tasks, they must be equipped with “knowledge” about the world. At Mines Alès, Sylvie Ranwez has been developing innovative processing solutions based on ontologies for many years now.


How can we find our way through the labyrinth of the internet with its overwhelming and sometimes contradictory information? And how can we trust extracted information that can then be used as the basis for reasoning integrated in decision-making processes? For a long time, the keyword search method was thought to be the best solution, but in order to tackle the abundance of information and its heterogeneity, current search methods favor taking domain ontology-based models into consideration. Since 2012, Sylvie Ranwez has been building on this idea through research carried out at Mines Alès, in the KID team (Knowledge representation and Image analysis for Decision). This team strives to develop models, methods, and techniques to assist human operators confronted with mastering a complex system, whether technical, social, or economic, particularly within a decision-making context. Sylvie Ranwez’s research is devoted to using ontologies to support interaction and personalization in such settings.

In philosophy, ontology is the study of being as such, along with its general characteristics. In computer science, an ontology describes the set of concepts in a particular field of knowledge, along with their properties and interrelationships, in such a way that they can be analyzed by humans as well as by computers. "Though the problem goes back much further, the name ontology started being used in the 90s," notes Sylvie Ranwez. "Today many fields have their own ontology." Building an ontology starts with help from experts in the field, who know all the entities that characterize it as well as the links between them; this requires meetings, interviews and some back-and-forth in order to best understand the field concerned. The concepts are then integrated into a coherent whole, and coded.
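To give a toy illustration of what "concepts with interrelationships" means in practice, here is a handful of invented concepts linked by is-a relations and queried by following those links. Real ontologies are expressed in dedicated languages such as OWL, not Python dictionaries:

```python
# Minimal illustrative ontology: invented concepts linked by "is-a" relations.
IS_A = {
    "aspirin": "analgesic",
    "ibuprofen": "analgesic",
    "analgesic": "drug",
    "drug": "substance",
}

def ancestors(concept):
    """All concepts subsuming the given one, following is-a links upward."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def is_a(concept, candidate):
    """Does the ontology entail that `concept` is a kind of `candidate`?"""
    return candidate in ancestors(concept)

print(ancestors("aspirin"))       # ['analgesic', 'drug', 'substance']
print(is_a("ibuprofen", "drug"))  # True
```

It is this kind of entailment – a query for "analgesics" also finding documents indexed under "aspirin" – that makes ontology-based search richer than keyword matching.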


More efficient queries

This knowledge can then be integrated into different processes, such as resource indexing and searching for information. This leads to queries with richer results than when using the keyword method. For example, the PubMed database, which lists all international biomedical publications, relies on MeSH (Medical Subject Headings), making it possible to index all biomedical publications and facilitate queries.

In general, the building of an ontology begins with an initial version containing between 500 and 3,000 concepts and it expands through user feedback. The Gene Ontology, which is used by biologists from around the world to identify and annotate genes, currently contains over 30,000 concepts and is still growing. “It isn’t enough to simply add concepts,” warns Sylvie Ranwez, adding: “You have to make sure an addition does not modify the whole.”


[box type=”shadow” align=”” class=”” width=””]

Harmonizing disciplines

Among the studies carried out by Sylvie Ranwez, ToxNuc-E (nuclear and environmental toxicology) brought together biologists, chemists and physicists from the CEA, INSERM, INRA and CNRS. But the definition of certain terms differs from one discipline to another and, conversely, the same term may have two different definitions. The ToxNuc-E group called upon Sylvie Ranwez and Mines Alès to describe the field of study, but also to help these researchers from different disciplines share common values. The ontology of this field is now online and is used to index the project's scientific documents. Specialists from fields that have ontologies often point to their great contribution to harmonizing their discipline. Once an ontology exists, different processing methods become possible, often based on measures of semantic similarity (the topic of Sébastien Harispe's PhD, which led to the publication of a work in English), ranging from resource indexing to information retrieval and classification (work by Nicolas Fiorini during his PhD, supervised by Sylvie Ranwez). [/box]
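To give an idea of what a semantic similarity measure can look like, here is a naive edge-counting sketch over an invented is-a hierarchy: the fewer is-a steps separating two concepts via their closest common ancestor, the more similar they are. Actual measures used in research are considerably more refined:

```python
# Toy is-a hierarchy (invented concepts), child -> parent.
PARENT = {
    "aspirin": "analgesic", "ibuprofen": "analgesic",
    "analgesic": "drug", "antibiotic": "drug", "drug": "substance",
}

def path_to_root(concept):
    """The chain of concepts from `concept` up to the hierarchy's root."""
    path = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        path.append(concept)
    return path

def similarity(a, b):
    """1 / (1 + edges between a and b through their closest common ancestor)."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = next(c for c in pa if c in pb)       # lowest shared ancestor
    distance = pa.index(common) + pb.index(common)
    return 1 / (1 + distance)

# Two analgesics are closer to each other than to an antibiotic:
print(similarity("aspirin", "ibuprofen"))
print(similarity("aspirin", "antibiotic"))
```

Measures of this family underpin tasks such as indexing, retrieval and classification: documents can be ranked not by shared keywords but by how close their concepts sit in the hierarchy.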


Specific or generic ontologies

The first ontology Sylvie Ranwez tackled, while working on her PhD at the Laboratory of Computer and Production Engineering (LGI2P) at Mines Alès, concerned music, a field she knows well as an amateur singer. Well before the arrival of MOOCs, the goal was to model both music and teaching methods in order to offer personalized distance-learning courses about music. She then took up work on football, at the urging of her PhD supervisor Michel Crampes. "Right in the middle of the World Cup, the goal was to be able to automatically generate personalized summaries of games," she remembers. She went on to work on other subjects with private companies or research institutes such as the CEA (French Atomic Energy Commission). Another focus of Sylvie Ranwez's research is ontology learning, which would make it possible to build ontologies automatically by analyzing texts. However, turning words into concepts is very difficult because of the inherent ambiguity of wording, and human beings remain essential.

Developing an ontology for every field and for different types of applications is a costly and time-consuming process, since it requires many experts and assumes they can reach a consensus. Research has thus begun on what are referred to as "generic" ontologies. Today DBpedia, which was created in Germany using knowledge from Wikipedia, covers many fields and is based on such an ontology. During a web search, this is what produces the generic information on the requested subject in the upper right corner of the results page. For example: "Pablo Ruiz Picasso, born in Malaga, Spain on 25 October 1881 and deceased on 8 April 1973 in Mougins, France. A Spanish painter, graphic artist and sculptor who spent most of his life in France."


Measuring accuracy

This multifaceted information, spread across the internet, is not, however, without its problems: the reliability of the information can be questioned. Sylvie Ranwez is currently working on this problem. In a semantic web context, data is open and different sources may at times assert contradictory information. How then is it possible to detect true facts among this data? The usual statistical approach (where the majority is right) is biased: simply spamming false information can give it the majority. With ontologies, information is confirmed by the entire set of interlinked concepts, making false information easier to detect. Similarly, an issue addressed by Sylvie Ranwez's team concerns the detection and management of uncertainty. For example, one site claims that a certain medicine cures a certain disease, whereas another site states instead that it "might" cure this disease. And yet, in a decision-making setting it is essential to be able to detect the uncertainty of information and to measure it. We are only beginning to tap into the full potential of ontologies for extracting, searching for, and analyzing information.


Sylvie Ranwez, Mines Alès

An original background

Sylvie Ranwez came to research by a roundabout route. After completing her Baccalauréat (French high-school diploma) in science, she earned two university degrees in technology (DUT). The first, in physical measurements, allowed her to discover a range of disciplines including chemistry, optics and computer science. She then earned a second degree in computer science before enrolling at the EERIÉ engineering graduate school (School of Computer Science and Electronics Research and Studies), specializing in artificial intelligence. Alongside her third year of engineering studies, she also earned a post-graduate advanced diploma in computer science. She followed up with a PhD at the LGI2P of Mines Alès, spending the first year in Germany at the Digital Equipment laboratory in Karlsruhe. In 2001, just after earning her PhD, and without going through the traditional post-doctoral stint abroad, she joined LGI2P's KID team, where she has been accredited to direct research since 2013. Given the highly technological world she works in, she has all the makings of a geek. But don't be fooled – she doesn't have a cell phone. And she doesn't want one.