
From the vestiges of the past to the narrative: reclaiming time to remember together

The 21st century is marked by a profound change in our relationship with time, now perceived solely in terms of acceleration, speed, change and urgency. At Mines Nantes, the sociologist Sophie Bretesché has positioned herself at the interface between past and future, where memory and oblivion color our view of the present. In contexts undergoing change, such as regions and organizations, she examines vestiges, the remnants of the past, which give a different perspective on the dialectic of oblivion and memory. She analyzes the essential role of collective memory and shared narrative in preserving identity through organizational change and technological transformation.

 

A modern society marked by fleeting time

Procrastination Day, Slowness Day, getting reacquainted with boredom… many attempts have been made to slow down time and stop it slipping through our fingers. They convey a relationship with time that has been shattered: from the simple rhythm of nature, to the rhythm set by the clock of the industrial era, to today’s combination of acceleration, motion and real time. This transformation is indicative of how modern society functions, where “that which is moving has substituted past experience, the flexible is replacing the established, the transgressive is ousting the transmitted”, observes Sophie Bretesché.

The sociologist starts from a simple observation: the loss of time. What dynamics are involved in this phenomenon of acceleration and compression of work, family and leisure time, these temporal objects that reflect our social practices?

One reason frequently cited for this racing of time is the pervasive presence of new technologies arriving simultaneously in our lives, together with the frenetic demand for ever-higher productivity. This explanation falls short, however, confusing correlation with causality. The reality is a succession of constant technological and managerial changes which “prevents consolidation of knowledge and experience-building as a sum of knowledge”, explains the researcher, who continues: “Surrounded, in the same unit of time, by components with distinct temporal rhythms and frameworks, the subject is both cut off from his/her past and dispossessed of any ability to conceive the future.”

To understand what is emerging, implicitly observable in everyday reality, an unprecedented relationship with time, with history and with our memory, the sociologist has adopted the theory that it is not so much acceleration that poses a problem as a society’s ability to remember and to forget. “Placing the focus on memory and oblivion”, accepting that “the present remains inhabited by vestiges of our past”, and grasping that “the processes of change produce material vestiges of the past which challenge the constants of the past”, are thus part of a process to regain control of time.

 

Field studies in search of vestiges of the past

“This fleeting time is observed most clearly in organizations and regions undergoing change”, notes Sophie Bretesché, who took three such fields of study as the starting point in her search for evidence. Starting from “that which grates, resists, arouses controversy, fields in which nobody can forget, or remember together”, she searches for vestiges: material, tangible signs of an intersection between past and future. She first meets executives faced with information flows that are impossible to regulate, then an organization whose management structure has changed three times in ten years. The sociologist conducts interviews with the protagonists and provides a clearer understanding of the executives’ activities, demonstrating that professions have nonetheless continued to exist, following alternative paths. A third study, conducted among residents living near a former uranium mine, leads her to meet witnesses of a bygone era. Mining waste once put to use by local residents is now condemned for its inherent risks. These vestiges of the past are also those of a modern era in which risk is the subject of harsh criticism.

In all three cases, the sociological study engages with the stories of those involved, conducted over long periods. While a sociologist usually presents study results as a cohesive narrative based on theories and interpretations, a specialist in social change does not piece together a story retrospectively; she analyzes movement and how humans in society construct their temporalities, the sociological narrative becoming a structure for temporal mediation.

These different field studies demonstrate that it is necessary to “regain time for the sake of time”. This is a social issue: “to gain meaning, reclaim knowledge and give it meaning based on past experience.” Another result emerges: behind the outwardly visible movement and repeated changes lie constants that tend to be forgotten, enduring forms of organization. Moreover, resistance to change, so often stigmatized today, may after all have positive virtues, as the expression of a deeply rooted culture, based on a collective identity that it would be a shame to deny ourselves.

 

A narrative built upon a rightful collective memory

This research led Sophie Bretesché to take the helm at Mines Nantes of the “Emerging risks and technologies: from technological management to social regulation” chair, set up in early 2016. Drawing on ten years of research between the social science and management department and the physics and chemistry laboratories at Mines Nantes, this chair focuses on forms of risk regulation in the energy, environmental and digital sectors. The approach is an original one in that these questions are no longer considered from a scientific perspective alone, since these problems affect society as a whole.

The social acceptability of the atom in various regions demonstrated, for example, that the cultural relationship with risk cannot be standardized universally. While in western France former uranium mines have been rehabilitated under lower-intensity industrial or agricultural management, they have been subject to moratoriums in the Limousin region, where their sites are now closed off. These lessons on the relationship with risk are compiled with a long-term view. Here, the original property structures offer explanations that bring different stories to light, stories which need to be pieced together in the form of narratives.

Indeed, in isolation, the vestiges of the past recorded during the studies do not yet form shared memories. They are merely individual perceptions, fragile for lack of transfer to the collective. “We remember because those around us help us”, the researcher reminds us, continuing: “the narrative is heralded as the search for the rightful memory”. In a future full of uncertainty, in “a liquid society diluted in permanent motion”, the necessary construction of collective narratives – and not storytelling – allows us to look to the future.

The researcher, who enjoys being at the interface of different worlds, takes delight in the moment when the vestiges of the past gradually make way for the narrative, when the threads of sometimes discordant stories start to become meaningful. The resulting embodied narrative is the counterpoint composed from the tunes collected among the material vestiges of the past: “Accord is finally reached on a shared story”, offering, in a way, a new common good.

With a longstanding interest in central narratives of the past, Sophie Bretesché expresses one wish: to convey and share these multiple experiences, times and tools for understanding, these histories of changes in progress, in a variety of forms such as the web documentary or the novel.

 

Sophie Bretesché is a Research Professor of Sociology at Mines Nantes. Head of the regional chair in “Risks, emerging technologies and regulation”, she is co-director of the NEEDS (Nuclear, Energy, Environment, Waste, Society) program and coordinates the social science section of the CNRS “Uranium Mining Regions” workshop. Her research encompasses time and technologies, memory and change, professional identities and career paths. The author of some fifty publications in her field, co-editor of two volumes, Fragiles compétences and Le nucléaire au prisme du temps, and author of Le changement au défi de la mémoire, published by Presses des Mines, she is also involved in two Executive Master’s programs at the Paris Institute of Political Studies (Leading change and Leadership pathways).

 

 


What are turbo codes?

Turbo codes form the basis of mobile communications in 3G and 4G networks. Invented in 1991 by Claude Berrou, and published in 1993 with Alain Glavieux and Punya Thitimajshima, they have now become a reference point in the field of information and communication technologies. As Télécom Bretagne, birthplace of these “error-correcting codes”, prepares to host the 9th international symposium on turbo codes, let’s take a closer look at how these codes work and the important role they play in our daily lives.

 

What do error-correcting codes do?

In order for communication to take place, three things are needed: a sender, a receiver, and a channel. The most common example is that of a person who speaks, sending a signal to someone who is listening, by means of the air conveying the vibrations and forming the sound wave. Yet problems can quickly arise in this communication if other people are talking nearby – making noise.

To compensate for this difficulty, the speaker may decide to yell the message. But the speaker could also avoid shouting, by adding a number after each letter in the message, corresponding to the letter’s place in the alphabet. The listener receiving the information will then have redundant information for each part of the message — in this case, double the information. If noise alters the way a letter is transmitted, the number can help to identify it.
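To make the idea concrete, here is a minimal Python sketch of this toy redundancy scheme (purely illustrative, not a code used in practice): each letter travels with its alphabet position, and the receiver uses the number to repair a corrupted letter.

```python
# Toy redundancy code from the example above: each letter is sent
# together with its position in the alphabet (A=1, B=2, ...).
import string

ALPHABET = string.ascii_uppercase

def encode(message):
    """Pair each letter with its 1-based alphabet index."""
    return [(c, ALPHABET.index(c) + 1) for c in message.upper() if c in ALPHABET]

def decode(pairs):
    """Trust the letter when both parts agree; fall back on the index otherwise."""
    out = []
    for letter, index in pairs:
        if letter in ALPHABET and ALPHABET.index(letter) + 1 == index:
            out.append(letter)               # both copies agree
        else:
            out.append(ALPHABET[index - 1])  # letter corrupted: recover from index
    return "".join(out)

sent = encode("HELLO")   # [('H', 8), ('E', 5), ...]
sent[0] = ("X", 8)       # simulate noise corrupting the first letter
print(decode(sent))      # -> "HELLO"
```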

And what role do turbo codes play in this?

In the digital communications sector, there are several error-correcting codes, with varying levels of complexity. Typically, repeating the same message several times in binary code is a relatively safe bet, yet it is extremely costly in terms of bandwidth and energy consumption.

Turbo codes are a much more developed way of integrating information redundancy. They are based on the transmission of the initial message in three copies. The first copy is the raw, non-encoded information. The second is modified by encoding each bit of information using an algorithm shared by the coder and decoder. Finally, another version of the message is also encoded, but after modification (specifically, a permutation). In this third case, it is no longer the original message that is encoded and then sent, but rather a transformed version. These three versions are then decoded and compared in order to find the original message.
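The following Python sketch illustrates only the three-stream structure described above, under heavy simplification: real turbo codes use recursive systematic convolutional encoders, and decoding is an iterative exchange of probabilities between two decoders, neither of which is reproduced here. The toy “encoder” below is just a running XOR.

```python
# Deliberately simplified sketch of the three-copy structure of a turbo code.
# NOT a real turbo encoder: only the systematic / parity / permuted-parity
# layout is faithful to the description above.
import random

def accumulator_parity(bits):
    """Toy 'encoder': running XOR of all bits seen so far."""
    state, parity = 0, []
    for b in bits:
        state ^= b
        parity.append(state)
    return parity

def toy_turbo_encode(bits, permutation):
    systematic = list(bits)                    # copy 1: raw information
    parity1 = accumulator_parity(bits)         # copy 2: encoded as-is
    permuted = [bits[i] for i in permutation]  # copy 3: permuted, then encoded
    parity2 = accumulator_parity(permuted)
    return systematic, parity1, parity2

message = [1, 0, 1, 1, 0, 0, 1, 0]
perm = list(range(len(message)))
random.seed(42)
random.shuffle(perm)   # the 'interleaver', shared by coder and decoder
print(toy_turbo_encode(message, perm))
```

The permutation is what makes the two encoded streams complementary: an error pattern that confuses one decoder is usually scattered from the point of view of the other.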

Where are turbo codes used?

In addition to encoding all our data in 3G and 4G networks, turbo codes are used in many other fields. NASA uses them for communication with its space probes built since 2003. The space community, which has to contend with many constraints on communication, is particularly fond of these codes: ESA also uses them on many of its probes. More generally, turbo codes represent a safe and efficient encoding technique in most communication technologies.

Claude Berrou, inventor of turbo codes

How have turbo codes become so successful?

In 1948, American engineer and mathematician Claude Shannon proposed a theorem stating that codes always exist that are capable of minimizing channel-related transmission errors, up to a certain level of disturbance. In other words, Shannon asserted that, despite the noise in a channel, the transmitter will always be able to transmit an item of information to the receiver, almost error-free, when using efficient codes.
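In modern notation, the noisy-channel coding theorem says that reliable transmission is possible at any rate below the channel capacity; for the classic band-limited channel with Gaussian noise, that capacity is given by the Shannon-Hartley formula:

```latex
% Shannon (1948): below capacity, arbitrarily reliable codes exist
R < C \quad \Longrightarrow \quad \text{error probability can be made arbitrarily small}

% Capacity of a band-limited channel with Gaussian noise (Shannon-Hartley):
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Here B is the channel bandwidth and S/N the signal-to-noise ratio. The theorem guarantees that such codes exist but gives no recipe for building them, which is why it took more than four decades for practical codes to approach the limit.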

The turbo codes developed by Claude Berrou in 1991 meet these requirements: they come close to the theoretical limit for information transmitted with an error rate close to zero, making them highly efficient error-correcting codes. His experimental results, which validated Shannon’s theory, earned Claude Berrou the Marconi Prize in 2005, the highest scientific distinction in the field of communication sciences. His research also earned him permanent membership in the French Academy of Sciences.

 

[box type=”info” align=”” class=”” width=””]

Did you know?

The international alphabet (or NATO phonetic alphabet) is an error-correcting code. Every letter is in fact encoded as a word beginning with that letter. Thus ‘N’ and ‘M’ become ‘November’ and ‘Mike’. This technique prevents much confusion, particularly in radio communications, which often involve noise.[/box]
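As a playful aside, the whole scheme fits in a few lines of Python (the alphabet excerpt below is obviously incomplete):

```python
# The NATO alphabet as a tiny error-correcting code: each letter is
# expanded into a word, so a single garbled sound rarely causes confusion.
NATO = {"A": "Alfa", "B": "Bravo", "C": "Charlie", "D": "Delta",
        "E": "Echo", "M": "Mike", "N": "November", "S": "Sierra"}  # excerpt

def spell(word):
    """Encode: replace each letter by its code word."""
    return " ".join(NATO[c] for c in word.upper())

def recover(words):
    """Decode: even a partly garbled word still identifies its first letter."""
    return "".join(w[0].upper() for w in words.split())

print(spell("MN"))               # -> "Mike November"
print(recover("mike november")) # -> "MN"
```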

 

Artificial Intelligence: the complex question of ethics

The development of artificial intelligence raises many societal issues. How do we define ethics in this area? Armen Khatchatourov, a philosopher at Télécom École de Management and member of the IMT chair “Values and Policies of Personal Information”, observes and carefully analyzes the proposed answers to this question. One of his main concerns is the attempt to standardize ethics using legislative frameworks.

In the frantic race for artificial intelligence, driven by GAFA[1], with its increasingly efficient algorithms and ever-faster automated decisions, engineering is king, supporting this highly-prized form of innovation. So, does philosophy still have a role to play in this technological world that places progress at the heart of every objective? Perhaps that of a critical observer. Armen Khatchatourov, a researcher in philosophy at Télécom École de Management, describes his own approach as acting as “an observer who needs to keep his distance from the general hype over anything new”. Over the past several years he has worked on human-computer interactions and the issues of artificial intelligence (AI), examining the potentially negative effects of automation.

In particular, he analyzes the problematic issues arising from legal frameworks established to govern AI. His particular focus is on “ethics by design”, a movement that involves taking ethical aspects into account at the design stage of algorithms, and of smart machines in general. Although this approach initially seems to reflect the importance that manufacturers and developers may attach to ethics, according to the researcher, “this approach can paradoxically be detrimental.”

 

Ethics by design: the same limitations as privacy by design?

To illustrate his thinking, Armen Khatchatourov uses the example of a similar concept, the protection and privacy of personal information: just like ethics, this subject raises the issue of how we treat other people. “Privacy by design” appeared in the late 1990s, in reaction to the legal challenges of regulating digital technology. It was presented as a comprehensive analysis of how to integrate the issues of personal information protection and privacy into product development and operational processes. “The main problem is that today, privacy by design has been reduced to a legal text,” regrets the philosopher, referring to the General Data Protection Regulation (GDPR) adopted by the European Parliament. “And the reflections on ethics are heading in the same direction,” he adds.

 

There is the risk of losing our ability to think critically.

 

In his view, the main negative aspect of this type of standardized regulation implemented via a legal text is that it eliminates the stakeholders’ sense of responsibility. “On the one hand, there is the risk that engineers and designers will be happy simply to comply with the text,” he explains. “On the other hand, consumers will no longer think about what they are doing, and will trust the labels attributed by regulators.” Behind this standardization, “there is the risk of losing our ability to think critically.” And he concludes by asking: “Do we really think about what we’re doing every day on the Web, or are we simply guided by the growing tendency toward normativeness?”

The same threat exists for ethics. The mere fact of formalizing it in a legal text would work against the reflexivity it promotes. “This would bring ethical reflection to a halt,” warns Armen Khatchatourov. He expands on this by referring to the work of artificial intelligence developers. There always comes a moment when the engineer must translate ethics into a mathematical formula that can be used in an algorithm. In practical terms, this can take the form of an ethical decision based on a structured representation of knowledge (an ontology, in computer-science terms). “But things truly become problematic if we reduce ethics to a logical problem!” the philosopher emphasizes. “For a military drone, for example, this would mean defining a threshold for civilian deaths below which the decision to fire is acceptable. Is this what we want? There is no ontology for ethics, and we should not take that direction.”

And military drones are not the only area involved. The development of autonomous, or driverless, cars involves many questions regarding how a decision should be made. Often, ethical reflections pose dilemmas. The archetypal example is that of a car heading for a wall that it can only avoid by running over a group of pedestrians. Should it sacrifice its passenger’s life, or save the passenger at the expense of the pedestrians’ lives? There are many different arguments. A pragmatic thinker would focus on the number of lives. Others would want the car to save the driver no matter what. The Massachusetts Institute of Technology (MIT) has therefore developed a digital tool – the Moral Machine – which presents many different practical cases and gives choices to Internet users. The results vary significantly according to the individual. This shows that, in the case of autonomous cars, it is impossible to establish universal ethical rules.

 

Ethics are not a product

Pursuing the analogy between ethics and data protection, Armen Khatchatourov raises another point, based on the reflections of Bruce Schneier, a specialist in computer security, who describes computer security as a process, not a product. Consequently, it cannot be completely guaranteed by a one-off technical approach, or by a legislative text, since both are only valid at a certain point in time. Although updates are possible, they often take time and are therefore out of step with current problems. “The lesson we can learn from computer security is that we cannot trust a ready-made solution, and that we need to think in terms of processes and attitudes to be adopted. If we make the comparison, the same can be said for the ethical issues raised by AI,” the philosopher points out.

This is why it is advantageous to think about the framework for processes like privacy and ethics in a different context than the legal one. Yet Armen Khatchatourov recognizes the need for these legal aspects: “A regulatory text is definitely not the solution for everything, but it is even more problematic if no legislative debate exists, since the debate reveals a collective awareness of the issue.” This clearly shows the complexity of a problem to which no one has yet found a solution.

 

[1] GAFA: an acronym for Google, Apple, Facebook, and Amazon.

[box type=”shadow” align=”” class=”” width=””]

Artificial intelligence at Institut Mines-Télécom

The 8th Fondation Télécom brochure, published (in French) in June 2016, is dedicated to artificial intelligence (AI). It presents an overview of the research taking place in this area throughout the world, and the vast body of research underway at Institut Mines-Télécom schools. In 27 pages, this brochure defines intelligence (rational, naturalistic, systematic, emotional, kinesthetic…), looks back at the history of AI, questions its emerging potential, and looks at how it can be used by humans.[/box]

 


The Cybersecurity of Critical Infrastructures Chair welcomes new partner Société Générale

The Institut Mines-Télécom Cybersecurity Chair, launched last year on January 25 as part of the dynamic created by the Center of Cyber Excellence, is aimed at contributing to the international development of research activities and training opportunities in an area that has become a national priority: the cybersecurity of critical infrastructures (energy networks, industrial processes, water production plants, financial systems, etc.). The projects are in full flow, with the addition of a new partner company, Société Générale, and the launch of 3 new research topics.

 

Société Générale Group, an 8th partner, joins the Chair

Nicolas Bourot, Chief Information Security Officer (CISO) and Operational Risk Manager (ORM) for the Group’s infrastructures, explains: “We are all affected; a lot is at stake. In joining this Chair, Société Générale Group seeks to ensure it will have the necessary means to support the digital transformation.” In the words of Paul-André Pincemin, General Coordinator of the Center of Cyber Excellence, “This Chair is truly a task force, bringing together partners from both the academic and industrial sectors.”

The Chair is led by Télécom Bretagne, in collaboration with Télécom ParisTech and Télécom SudParis, the Region of Brittany, as part of the Center of Cyber Excellence, and 8 companies: Airbus Defence and Space, Amossys, BNP Paribas, EDF, La Poste, Nokia, Orange and now Société Générale Group.

 

Launch of 3 research topics

Simultaneously with the arrival of the new partner, three research topics have been launched. The objective of the first is to develop the capacity to analyze system malfunctions so as to identify whether they are accidental or the result of malicious intent. This analytic capacity is crucial in helping operators provide an adapted response, particularly when the malfunction is the result of a simultaneous or coordinated attack. The digitization of industrial control systems, and their ever-increasing connection with cyberspace, does indeed create new vulnerabilities, as evidenced over the past few years by several incidents, some even capable of destroying production equipment.

The second axis involves developing methods and decision-making tools for securing vitally important systems. The great heterogeneity of components, together with constraints that are technical (time constraints, communication rates, etc.), topological (physical access to plants, geographic distribution across networks, etc.) and organizational (multiple players, regulations, etc.) in nature, prevents traditional security approaches from being applied directly. The objective of this research is to provide operators of vital systems with a methodology and associated tools that simplify decision-making, both when defining security policy and when responding to incidents.

Finally, the chair will also begin work on the co-simulation of cyber-attacks on network control systems. This third topic involves improving the resilience of critical infrastructures, i.e. their capacity to continue working, potentially in downgraded mode, when affected by cyber-attacks. The research focuses specifically on the network control systems that connect the different components of a traditional control system, such as controllers, sensors and actuators. The objective is to advance developments in this area by offering innovative solutions that reconcile requirements for safety, operational security and continuity of service.

[divider style=”solid” top=”5″ bottom=”5″]

In the coming months…

Chair partners will contribute to major upcoming gatherings:

– CRiSIS, the 11th edition of the Conference on risks and the security of information systems, taking place next September 5-9 in Roscoff

– RAID, the 19th edition of the Conference on research on attacks, intrusions and defense, taking place September 19-21 in Evry (Télécom SudParis)

– Cybersecurity event, Les Assises de la cybersécurité, taking place October 5-8 in Monaco

– European Cyber Week, November 21-25 in Rennes

[divider style=”solid” top=”5″ bottom=”5″]

Ontologies: powerful decision-making support tools

Searching for, extracting, analyzing, and sharing information in order to make the right decision requires great skill. For machines to provide human operators with valuable assistance in these highly-cognitive tasks, they must be equipped with “knowledge” about the world. At Mines Alès, Sylvie Ranwez has been developing innovative processing solutions based on ontologies for many years now.

 

How can we find our way through the labyrinth of the internet with its overwhelming and sometimes contradictory information? And how can we trust extracted information that can then be used as the basis for reasoning integrated in decision-making processes? For a long time, the keyword search method was thought to be the best solution, but in order to tackle the abundance of information and its heterogeneity, current search methods favor taking domain ontology-based models into consideration. Since 2012, Sylvie Ranwez has been building on this idea through research carried out at Mines Alès, in the KID team (Knowledge representation and Image analysis for Decision). This team strives to develop models, methods, and techniques to assist human operators confronted with mastering a complex system, whether technical, social, or economic, particularly within a decision-making context. Sylvie Ranwez’s research is devoted to using ontologies to support interaction and personalization in such settings.

In philosophy, ontology is the study of being as such, and of its general characteristics. In computer science, an ontology describes the set of concepts in a particular field of knowledge, along with their properties and interrelationships, in such a way that they can be processed by humans and computers alike. “Though the problem goes back much further, the name ontology started being used in the 90s,” notes Sylvie Ranwez. “Today many fields have their own ontology.” Building an ontology starts with help from experts in a field who know all the entities that characterize it, as well as their links; this requires meetings, interviews, and some back-and-forth in order to best understand the field concerned. The concepts are then integrated into a coherent whole, and coded.
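As a rough illustration (with invented concept names, not taken from any of the ontologies mentioned here), even a bare “is-a” hierarchy already lets a program infer facts that were never stated explicitly:

```python
# A minimal, hypothetical ontology fragment: concepts linked by "is-a"
# relations. Real ontologies (OWL/RDF) also carry properties and constraints.
IS_A = {
    "uranium_mine": "mine",
    "mine": "industrial_site",
    "industrial_site": "site",
    "tailings": "mining_waste",
    "mining_waste": "waste",
}

def ancestors(concept):
    """Walk up the is-a hierarchy to collect all broader concepts."""
    result = []
    while concept in IS_A:
        concept = IS_A[concept]
        result.append(concept)
    return result

def is_a(concept, candidate):
    return candidate in ancestors(concept)

print(ancestors("uranium_mine"))  # ['mine', 'industrial_site', 'site']
print(is_a("tailings", "waste"))  # True: subsumption inferred, not stored
```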

 

More efficient queries

This knowledge can then be integrated into different processes, such as resource indexing and searching for information. This leads to queries with richer results than when using the keyword method. For example, the PubMed database, which lists all international biomedical publications, relies on MeSH (Medical Subject Headings), making it possible to index all biomedical publications and facilitate queries.
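The gain over keyword search can be sketched as follows: a query on a broad concept is expanded with its narrower descendants, so documents indexed with specific terms are still retrieved. The hierarchy below is a made-up miniature in the spirit of MeSH, not an excerpt from it.

```python
# Sketch of ontology-backed query expansion (concept names are invented).
NARROWER = {
    "heart_diseases": ["myocardial_infarction", "arrhythmia"],
    "myocardial_infarction": ["anterior_myocardial_infarction"],
}

def expand(term):
    """Return the term plus all of its narrower descendants."""
    terms = [term]
    for child in NARROWER.get(term, []):
        terms.extend(expand(child))
    return terms

# A query on the broad concept also retrieves documents indexed with more
# specific ones - something a plain keyword search would miss.
print(expand("heart_diseases"))
```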

In general, the building of an ontology begins with an initial version containing between 500 and 3,000 concepts and it expands through user feedback. The Gene Ontology, which is used by biologists from around the world to identify and annotate genes, currently contains over 30,000 concepts and is still growing. “It isn’t enough to simply add concepts,” warns Sylvie Ranwez, adding: “You have to make sure an addition does not modify the whole.”

 

[box type=”shadow” align=”” class=”” width=””]

Harmonizing disciplines

Among the studies carried out by Sylvie Ranwez, ToxNuc-E (nuclear and environmental toxicology) brought together biologists, chemists and physicists from the CEA, INSERM, INRA and CNRS. But the definition of certain terms differs from one discipline to another, and conversely, the same term may have two different definitions. The ToxNuc-E group called upon Sylvie Ranwez and Mines Alès to describe the field of study, but also to help these researchers from different disciplines share common values. The ontology of this field is now online and is used to index the project’s scientific documents. Specialists from fields possessing ontologies often point to their great contribution to harmonizing their discipline. Once ontologies exist, different processing methods become possible, often based on measures of semantic similarity (the topic of Sébastien Harispe’s PhD, which led to the publication of a book in English), ranging from resource indexing to information retrieval and classification (work by Nicolas Fiorini during his PhD, supervised by Sylvie Ranwez). [/box]
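To give an idea of what such similarity measures compute, here is one classic measure among many, a Wu-Palmer-style score, applied to a small invented “is-a” tree: two concepts are similar when their lowest common ancestor sits deep in the hierarchy. Actual measures, such as those studied in the work cited above, are considerably richer.

```python
# Toy Wu-Palmer-style semantic similarity on an invented is-a tree.
PARENT = {"myocardial_infarction": "heart_disease",
          "arrhythmia": "heart_disease",
          "heart_disease": "disease",
          "asthma": "lung_disease",
          "lung_disease": "disease"}

def path_to_root(c):
    path = [c]
    while c in PARENT:
        c = PARENT[c]
        path.append(c)
    return path

def wu_palmer(a, b):
    """2 * depth(lowest common ancestor) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), path_to_root(b)
    lca = next(c for c in pa if c in pb)    # lowest common ancestor
    depth = lambda c: len(path_to_root(c))  # root has depth 1
    return 2 * depth(lca) / (depth(a) + depth(b))

print(wu_palmer("myocardial_infarction", "arrhythmia"))  # 0.67: close concepts
print(wu_palmer("myocardial_infarction", "asthma"))      # 0.33: more distant
```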

 

Specific or generic ontologies

The first ontology Sylvie Ranwez tackled, while working on her PhD at the Laboratory of Computer and Production Engineering (LGI2P) at Mines Alès, concerned music, a field with which she is very familiar as an amateur singer. Well before the arrival of MOOCs, the goal was to model both the field of music and teaching methods in order to offer personalized distance-learning courses about music. She then took up work on football, at the urging of her PhD director Michel Crampes. “Right in the middle of the World Cup, the goal was to be able to automatically generate personalized summaries of games,” she remembers. She went on to work on other subjects with private companies and research institutes such as the CEA (French Atomic Energy Commission). Another focus of Sylvie Ranwez’s research is ontology learning, which would make it possible to build ontologies automatically by analyzing texts. However, it is very difficult to turn words into concepts because of the inherent ambiguity of wording. Human beings remain essential.

Developing an ontology for every field and for different types of applications is a costly and time-consuming process, since it requires many experts and assumes they can reach a consensus. Research has thus begun on what are referred to as “generic” ontologies. Today DBpedia, which was created in Germany using knowledge from Wikipedia, covers many fields and is based on such an ontology. During a web search, this is what produces the generic information on the requested subject in the upper right corner of the results page. For example: “Pablo Ruiz Picasso, born in Malaga, Spain on 25 October 1881 and deceased on 8 April 1973 in Mougins, France. A Spanish painter, graphic artist and sculptor who spent most of his life in France.”

 

Measuring accuracy

This multifaceted information, spread out over the internet, is not without its problems: the reliability of the information can be questioned. Sylvie Ranwez is currently working on this problem. In a semantic web context, data is open and different sources may at times claim contradictory things. How then can true facts be detected among those data? The usual statistical approach (where the majority is right) is biased: simply spamming false information can give it the majority. With ontologies, information is confirmed by the entire set of interlinked concepts, making false information easier to detect. Similarly, an issue addressed by Sylvie Ranwez’s team concerns the detection and management of uncertainty. For example, one site claims that a certain medicine cures a certain disease, whereas a different site states that it “might” cure this disease. In a decision-making setting, it is essential to be able to detect the uncertainty of information and to measure it. We are only beginning to tap into the full potential of ontologies for extracting, searching for, and analyzing information.

 

An original background

Sylvie Ranwez came to research by a roundabout route. After completing her Baccalauréat (French high school diploma) in science, she earned two university degrees in technology (DUT). The first, in physical measurements, allowed her to discover a range of disciplines including chemistry, optics, and computer science. She then earned a second degree in computer science before enrolling at the EERIÉ engineering graduate school (School of Computer Science and Electronics Research and Studies), specializing in artificial intelligence. Alongside her third year in engineering, she also earned a post-graduate advanced diploma in computer science. She followed up with a PhD at the LGI2P of Mines Alès, spending the first year in Germany at the Digital Equipment laboratory in Karlsruhe. In 2001, just after earning her PhD, and without going through the traditional post-doctoral stint abroad, she joined LGI2P’s KID team, where she has been accredited to direct research since 2013. Given the highly technological world she works in, she has all the makings of a geek. But don’t be fooled: she doesn’t have a cell phone. And she doesn’t want one.


What is Augmented Reality?

Since its launch, the Pokémon Go game has broken all download records. More than just a fun gaming phenomenon, it is above all an indicator of the inevitable arrival of augmented reality technology in our daily lives. Marius Preda, a researcher at Télécom SudParis and an expert on the subject, explains exactly what the term “augmented reality” means.

 

Does just adding a virtual object to a real-time video make something augmented reality?

Marius Preda: If the virtual object is spatially and temporally synchronized with reality, yes. Based on the academic definition, augmented reality is the result of a mixed perception between the real and virtual worlds. The user observes both a real source, and a second source provided by a computer. In the case of Pokémon Go, there is a definite synchronism between the environment filmed with the camera — which changes according to the phone’s position — and the virtual Pokémon that appear and stay in their location.

 

How is this synchronization guaranteed?

MP: The Pokémon Go game works via geolocation: it uses GPS coordinates to make a virtual object appear at a location in the real environment. But during the Pokémon capture phase, the virtual object does not interact with the real image.

Very precise visual augmented realities exist, which attain synchronization in another way. They are based on the recognition of patterns that have been pre-recorded in a database. It is then possible to replace real objects with virtual objects, or to make 3D objects interact with forms in the real environment. These methods are expensive, however, since they require more in-depth learning phases and image processing.
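As a rough sketch of the geolocation approach Marius Preda describes, the core question a Pokémon Go-style application answers at every frame is: given the phone’s GPS position and compass heading, does the virtual object’s bearing fall inside the camera’s field of view? The code below is hypothetical and illustrative only; real AR engines also fuse gyroscope and accelerometer data.

```python
# Geolocation-based anchoring, reduced to its simplest form.
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from the phone to the object, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def in_view(phone_heading, object_bearing, fov=60):
    """Draw the object only if its bearing lies within the camera's FOV."""
    diff = (object_bearing - phone_heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

b = bearing(48.8566, 2.3522, 48.8570, 2.3530)  # phone in Paris, object nearby
print(in_view(phone_heading=50, object_bearing=b))
```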

 

Is it accurate to say that several augmented realities exist? 

MP: We can say that there are several ways to ensure the synchronization between the real and virtual worlds. Yet in a broader sense, mixed reality is a continuum between two extremes: pure reality on the one hand, and synthetically produced images on the other. Between these two extremes we find augmented reality, as well as other nuances. If we imagine a completely virtual video game, only with the real player’s face replacing that of the avatar, this is augmented virtuality. Therefore, augmented reality is a point on this continuum, in which synthetically generated objects appear in the real world.

 

Apart from video games, what other sectors are interested in augmented reality applications?

MP: There is a huge demand among professionals. Operators of industrial facilities can typically benefit from augmented reality for repairs. If they do not know how to install a part, they can receive help from virtual demonstrations carried out directly on the machine in front of them.

There is also high demand from architects. They already use 3D models to show their projects to decision-makers who decide whether or not to approve the construction of a building. Yet now they would like to show a future building at its future location using augmented reality, with the right colors, and lighting on the façades, etc.

Of course, such applications have enormous market potential. By monetizing a location in an augmented reality application like Pokémon Go, Google could very well offer game areas located directly in stores.

[box type=”shadow” align=”” class=”” width=””]

A MOOC for developing augmented reality applications

Interested in learning about and creating augmented reality experiences? In augmenting a book or a map, or designing a geolocation quiz? Institut Mines-Télécom is offering a new MOOC to make this possible. It will enable learners to experiment and create several augmented reality applications.

This introductory MOOC, entitled Getting started with augmented reality, is intended for web production professionals, as well as anyone interested in designing innovative experiences using interactions between the virtual and real worlds: web journalists, mobile application developers, students from technical schools, and art and design schools… as well as teachers. Without having any prior experience in computer programming, the learner will easily be able to use the augmented reality prototyping tools.[/box]

Read more on our blog


The bitcoin and blockchain: energy hogs

By Fabrice Flipo and Michel Berne, researchers at Télécom École de Management.
Editorial originally published in French in The Conversation France

_______________________________________________________________________________________

 

The digital world still lives under the illusion that it is intangible. While governments gathered in Paris at COP21 pledged to reduce their carbon emissions to keep global warming below 2°C, digital technology continues to spread without the slightest concern for the environment. The current popularity of the bitcoin and the blockchain provides a perfect example.

The principle of the blockchain can be summarized as follows: each transaction is recorded in thousands of accounting ledgers, and each one is scrutinized by a different observer. Yet no mention is made of the energy footprint of this unprecedented ledger of transactions, or of the energy footprint of the new “virtual currency” (the bitcoin) it manages.

Read the blog post What is a blockchain?

 

Electricity consumption equivalent to that of Ireland

In a study published in 2014, Karl J. O’Dwyer and David Malone showed that the consumption of the bitcoin network was likely to be approximately equivalent to the electricity consumption of a country like Ireland, i.e. an estimated 3 GW.

Imagine the consequences if this type of bitcoin currency becomes widespread. The global money supply in circulation is estimated at $11,000 billion. The corresponding energy consumption should therefore exceed 4,000 GW, which is 8 times the electricity consumption of France and twice that of the United States. It is not without reason that a recent headline on the Novethic website proclaimed “The bitcoin, a burden for the climate”.
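The order of magnitude can be reconstructed with a back-of-the-envelope calculation, assuming consumption scales linearly with the value handled by the network; the bitcoin market-capitalization figure below is our assumption for illustration, not a number from the study.

```python
# Back-of-the-envelope reconstruction of the scaling argument above.
bitcoin_power_gw = 3           # O'Dwyer & Malone's estimate for the network
bitcoin_market_cap = 8e9       # ASSUMED value secured by that 3 GW (USD, ~2016)
global_money_supply = 11e12    # $11,000 billion in circulation

# If consumption scaled linearly with the value handled by the network:
scaled_power_gw = bitcoin_power_gw * global_money_supply / bitcoin_market_cap
print(f"{scaled_power_gw:,.0f} GW")  # on the order of 4,000 GW
```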

 

What do the numbers say?

Since every blockchain is a ledger (and therefore a file) that exists in many copies, the computing resources required to calculate, transmit and store the information increase, and with them the energy footprint, even taking improvements in the underlying technologies into account.

The two important factors here are the length of the blockchain and the number of copies. For the bitcoin, the blockchain’s length has grown very quickly: according to Quandl, it was 27 GB in early 2015 and rose to 74 GB by mid-2016.

The bitcoin, whose system is modeled on that of the former gold-standard currencies, is generated through complex computations, which become ever more demanding over time, much like an increasingly depleted goldmine where production costs keep rising.

In 2015, Genesis Mining revealed in Business Insider that it was one of the most energy-consuming companies in Iceland, with electricity costs of 60 dollars per “extracted” bitcoin, despite benefiting from a low price per kWh and a favorable climate.

Finally, we can also imagine all the “smart contract” type applications supported by the Internet of Things. This will also have a considerable impact on energy and the environment, considering the manufacturing requirements, the electrical supply (often autonomous, and therefore complicated and not very efficient) and disposal.

However, although the majority of connected objects will probably not support smart contracts, a very large number of connected objects is anticipated in the near future, with a total likely to reach 30 billion by 2020, according to the American consulting firm McKinsey.

The bitcoin is just one of many systems being developed without concern for their energy impact. Faced with the climate issue, their promoters act as if it did not exist, or as if alternative energy solutions were already available.

 

An increasingly high price to pay

Yet decarbonizing the energy system is a vast issue, involving major risks. And the proposed technical solutions in this area offer no guarantees of being able to handle the massive and global increase in energy consumption, while still reducing greenhouse gas emissions.

Digital technology already accounts for approximately 15% of the national electricity consumption in France, and consumes as much energy, on the global scale, as aviation. Today, nothing suggests that there will be a decrease in the mass to be absorbed, nor is there any indication that digital technology will enable a reduction in consumption, as industrialists in this sector have confirmed (see the publication entitled La Face cachée du numérique – “The hidden face of digital technology”).

The massive decarbonization of energy faces many challenges: the reliability of the many different carbon sequestration techniques proposed, the “energy cannibalism” involved in the launch of renewable energies, which require energy to be manufactured and have technical, social, and political limitations (for example, the various sources of renewable energy require large surface areas, yet the space that could potentially be used is largely occupied)… The challenges are huge.


TeraLab and La Poste have teamed up to fight package fraud

As a testament to how valuable data is becoming for companies, La Poste-Colissimo has teamed up with researchers from IMT schools to fight fraud. Through the big data platform TeraLab, launched in 2014, this public-private partnership has made it possible to explore algorithmic solutions to optimize fraud detection. This research demonstrates how important modernization is for organizations.

 

Will data centers replace Sherlock Holmes as the archetypal detective? The question may sound absurd, but it is a legitimate one in light of La Poste-Colissimo’s choice to turn to a big data platform to fight fraud. Over the eighteen-month period between January 2014 and June 2015, tens of thousands of euros were paid out for claims identified as suspected cases of fraud by the company. Hence its desire to modernize its tools and technical expertise in fraud detection.

As such, in late 2015 the company decided to work with the TeraLab platform at Institut Mines-Télécom (IMT). La Poste-Colissimo saw this as an opportunity to kill two birds with one stone: “We were seeking both collaboration to help us overcome our difficulties handling very large volumes of data, and the possibility of a rapid return on investment,” explains Philippe Aligon, who is in charge of data analysis at La Poste-Colissimo. Working on the detection of fraudulent claims based on false declarations that packages were dropped off made it possible to combine these two objectives.

Teaching algorithms to recognize fraud

TeraLab first worked on “securing datasets, to assure La Poste-Colissimo that it was a safe work environment,” says Anne-Sophie Taillandier, director of the platform. After this technical and legal step, all files related to proven fraud were sent to TeraLab. This was followed by a phase of statistical learning (machine learning) based on this data. “We proposed a system that takes as input the characteristics of a claim: the amount claimed, the weight of the package, the cause of non-delivery, etc.,” explains Jérémie Jakubowicz, head of the data science center at TeraLab. Using this model, and the characteristics of any claim, it is possible to deduce the associated probability of fraud.
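For illustration, here is a minimal sketch of this kind of supervised scoring with scikit-learn; the feature set and data are invented, and the actual model used by TeraLab is not public.

```python
# Illustrative sketch of claim scoring by supervised learning.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: claim amount (EUR), package weight (kg), non-delivery cause code
X_train = np.array([[120.0, 1.2, 3], [15.0, 0.3, 1], [450.0, 2.5, 3],
                    [22.0, 0.8, 2], [300.0, 1.9, 3], [18.0, 0.5, 1]])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = proven fraud, 0 = legitimate

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_claim = np.array([[380.0, 2.1, 3]])
fraud_probability = model.predict_proba(new_claim)[0, 1]
print(f"Estimated probability of fraud: {fraud_probability:.2f}")
# Claims can then be ranked by this score and the top ones sent to experts.
```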

To support this learning phase, La Poste – Colissimo provided TeraLab with a sample consisting of archived data about suspicious packages between January 2014 and June 2015. The company’s anti-fraud officers had already ranked each of the cases on a scale from 0 to 4, from fraud that had been proven by internal services to a very low risk of fraud. TeraLab’s role was to reproduce the same ranking using the model developed.

After analyzing the sample, the 500 claims considered to be the most suspicious by algorithms were sent to the experts at La Poste – Colissimo. “We didn’t use the same approach as them at all,” says Jérémie Jakubowicz. “The experts work mainly on a geographical area, whereas we use parameters like the weight of the package or the postcode.” Despite this, there was a 99.8% correlation between the results of the experts and the algorithms. Based on the sample provided, 282 new cases which had been considered non-suspicious by the company were identified as fraudulent by the TeraLab team, and were confirmed as such in retrospect by La Poste – Colissimo.

Towards integration in the company

The company has thus confirmed a successful proof of concept. The algorithmic method works, and in addition to providing faster detection, its automation reduces the cost of detecting fraud case by case. “There are very high expectations from the customer, security and IT services,” says Philippe Aligon. The fraud detection tool will be deployed as part of the plan to modernize La Poste-Colissimo’s IT services and the acquisition of new big data technologies to process data in real time, making it possible to assess claims instantaneously.

Due to the complexity of integrating big data tools, it is not yet possible to implement the algorithm on a large scale. This is not unique to La Poste-Colissimo, however, but a pattern found in many organizations. As Jérémie Jakubowicz explains: “Even when we get the green light on our side, that doesn’t mean the work is finished. Using the results and putting them into production are also problems that have to be solved on the company’s side.” This limitation illustrates the fact that using big data technologies is not just a scientific issue but an organizational one as well.

At La Rotonde, scientific mediation is based on experiments

A venue for exhibitions, mediation, and more generally scientific, technical and industrial culture, La Rotonde is a Mines Saint-Étienne center with a difference. Its role is to share knowledge with different audiences, young and old, who are fans of science or simply curious. For its director, Guillaume Desbrosse, this involves first and foremost encouraging an interest in science, and allowing each individual to apply the investigatory process. For this purpose, La Rotonde bases all its mediations on experiments.

 

“I never should have come here.” These are words that no cultural center mediator wants to hear from the public. Guillaume Desbrosse, director of the La Rotonde Center for Scientific, Technical and Industrial Culture (CCSTI) in Saint-Étienne, aims to inspire the opposite reaction. “We want visitors to feel included, and to realize that they have an important role at La Rotonde, regardless of their level of scientific expertise,” he explains.

In order to be as inclusive as possible, the CCSTI therefore focuses on experiments. Out with traditional signs and their captions, in with a more hands-on approach. At La Rotonde, no exhibition is set up without experiments for the public to carry out, or without mediators to guide the public in understanding the results obtained from any interaction with scientific tools. Besides offering more direct contact with science, experiments also make it possible to instill a scientific approach and develop critical thinking. “We place the public in the same position as a researcher in a laboratory,” summarizes Guillaume Desbrosse.

The hands-on approach is recognized as an asset at La Rotonde. “It is part of our identity, and appeals to the public” he confirms. Perceiving science as something to be enjoyed is an essential component of the vision of the Saint-Etienne CCSTI. Therefore, discovery is a very strong theme in the activities on offer to the various audiences. Moreover, Guillaume Desbrosse insists that “curiosity never killed the cat, quite the opposite!”

 

La Rotonde, a laboratory of ideas and innovation

The team of nine at La Rotonde is not afraid of taking risks. In 2012, the center devised the “Mondo Minot” exhibition for very young children, which returned for a second run between February and November 2016. Open to children from the age of two, this exhibition is a real gamble. “The cultural activities on offer for preschoolers are scarce enough, but in terms of scientific culture, you could even say it’s a wasteland!” points out Guillaume Desbrosse. He goes on: “Nobody opens an exhibition from that age. The minimum age for admission is generally three, but we have worked on offering inclusion from the age of two.”

For this exhibition, particular thought went into the surroundings. The team called on scenography designers to devise an elaborate graphic and immersive environment. A yurt has been set up, and the children pass from one module to another through a somewhat unusual closet. The narrative and the experiments are built around the five senses, offering a fun and educational introduction to science suited to such a young audience.

La Rotonde is thus not hesitant about innovating and developing novel mediation methods. In this regard, it fully warrants its status as the center for scientific culture of Mines Saint-Étienne, the school which also hosts the La Rotonde exhibition area. This proximity to the world of research is “a real asset” according to the director of the CCSTI.

 

The team at La Rotonde bases its mediations on observation and a hands-on approach, which engages even the youngest audience.

 

Bringing the public and researchers closer together

Devising experiment-based scientific popularization programs with mediator guidance is no mean feat. Each practical experiment, each module, is developed in close cooperation with researchers. “We are experts in popularization, but not experts in science,” Guillaume Desbrosse admits humbly. Scientists are even invited to talk to the public about their specialty. “We want to create a link and interaction between science and society; our job consists of devising cultural mediation models and creating the conditions for this encounter,” he continues.

Therefore, the team at La Rotonde prioritizes direct contact between researchers and the public, with in-depth consideration on how they can interact. For, behind all this, the aim is also to break the many stereotypes still used to depict scientists. “It is a long-term undertaking, because there are a lot of preconceived notions out there. In the collective unconscious, a researcher is male, generally older, reserved and has little interaction with the outside world” says the director of La Rotonde regretfully.

 

Restoring the image of science

These misconceptions can be combated by bringing female researchers or young PhD students, for example, to the La Rotonde center, but also by involving them in the off-site programs conducted by the CCSTI for schools. The team thus conducted an experiment: before a researcher came to talk to students in schools, the children were asked to draw how they imagined a researcher to be. Many had the stereotypical view described above. The students then produced another drawing after the scientist’s visit, with a more realistic result. “Meeting a male or female researcher shatters the myth, and offers an opportunity to broaden the scope of possibilities, particularly for girls who find it difficult to see themselves in scientific professions,” observes Guillaume Desbrosse.

La Rotonde and its team have set their hearts on building or rebuilding an awareness of research and those involved. Guillaume Desbrosse hopes above all to bridge the gap between science and society: “There is a resistance to science, and innovation. My goal is to develop a cultural habit in all audiences, and encourage interest in science.” Behind this aim lies a wish to build a society based on rational thought. This objective can only be achieved through collective effort, in which La Rotonde very much hopes to play its part.

 

Guillaume Desbrosse, mediating between science and the public

With an interest in science from a very young age, Guillaume Desbrosse started his university studies in Poitiers to become a teacher. At that time, he discovered a passion for sharing knowledge and obtained a vocational degree in scientific mediation in Tours. This profession provided him with the contact with the public and science that he was seeking.

He joined La Rotonde in Saint-Étienne as a project manager in 2012. Guillaume Desbrosse subsequently developed his expertise in the field of popularization further with a Master’s degree in scientific communication completed in Grenoble. In 2015, he became director of La Rotonde, with the aim of continuing to innovate to promote the cultural mediation of science.

 

[divider style=”solid” top=”20″ bottom=”20″]

La Rotonde, a CCSTI with an active role in society and the region

The movement of Centers for Scientific, Technical and Industrial Culture (CCSTI) started in 1979 in Grenoble with La Casemate, followed by the Cité des Sciences in Paris in 1986. More CCSTIs subsequently emerged, including La Rotonde in 1999. This center is the only one to be incorporated into a school of engineering, Mines Saint-Étienne, giving engineering students a glimpse of how scientific knowledge is promoted and shared with society.

La Rotonde, like any CCSTI, seeks to play an active role in social and economic development by offering citizens the tools to understand major scientific issues of our times. Its local roots allow it to extend its influence particularly throughout the scientific culture network of its region. La Rotonde heads the network in the French department of the Loire for organizing the “Fête de la Science” science festival, coordinating all the activities in the department associated with this event. In addition to its exhibition area within Mines Saint-Étienne, La Rotonde organizes a large number of off-site activities, for schools, cultural centers, and associations, and receives 40,000 annual visitors.

[divider style=”solid” top=”20″ bottom=”20″]


M.I.N.E.S and Télécom & Société numérique Carnot institutes have again been awarded the Carnot label

On July 6th, the Secretary of State for Higher Education and Research, Thierry Mandon, announced the recipients of the Carnot 3 label. The M.I.N.E.S Carnot institute and the Télécom & Société numérique Carnot institute were among the winners: both have held the label since 2006, and have again earned concrete recognition of the quality of their partnership-based research.

 

 

I am strongly committed to the Carnot label,” Thierry Mandon reminded the audience at the opening of the annual 17/20 meeting of the Carnot network, before announcing the list of the 29 Carnot institutes that received the label and 9 Carnot Springboards (new in 2016) as part of the Carnot 3 call for proposals. Since 2006, this accreditation has sought to encourage partnerships between public research labs and companies, in order to develop technology transfer and innovation.

After receiving very good assessments from the French National Research Agency (ANR) for the Carnot 2 period (2012-2015), the two Carnot institutes of Institut Mines-Télécom received confirmation of their quality label for partnership-based research. In practical terms, Carnot 3 will result in a financial contribution awarded over several years, aimed at supporting the professionalization of corporate relations departments, the internationalization of partnerships, and upstream research.

 

Institut Mines-Télécom’s two Carnot institutes:

Find out more about the M.I.N.E.S. Carnot institute, composed of six Mines schools, as well as some teams from École Polytechnique and ENSTA ParisTech, in partnership with the contract research organization Armines.

Find out more about Télécom & Société Numérique Carnot institute, which encompasses Télécom ParisTech, Télécom Bretagne, Télécom SudParis, Télécom Ecole de Management, Eurecom, Télécom Saint-Etienne, Télécom Physique Strasbourg, two Ecole Polytechnique labs and Strate Design.

 

PLEASE NOTE!

The next Carnot event will be the corporate meetings on October 5 and 6, 2016 in Lyon, with the objective of presenting the R&D partnership offering between the institutes and companies, from SMEs to large corporate groups.

[box type=”shadow” align=”” class=”” width=””]

The Carnot label

Carnot 3, Carnot M.I.N.E.S., Carnot TSN

The Carnot label, created in 2006, is aimed at developing partnership-based research: studies led by public laboratories in partnership with socioeconomic stakeholders, primarily companies (from SMEs to large corporate groups), in order to address their needs.

The Carnot label is awarded to public research structures, the Carnot institutes, which simultaneously carry out upstream research activities, enabling the renewal of their scientific and technological skills, while also committing to a proactive policy of partnership-based research that benefits the socioeconomic world. The Ministry for Research awards the label following a very selective call for applications.[/box]