
Social media in crisis situations: learning how to manage citizen mobilization

Attacks, riots, natural disasters… all these crises spark widespread action among citizens on online social media. From surges of solidarity to improvised manhunts, the best and the worst can unfold. Hence the need to know how to manage and coordinate these movements so that they contribute positively to crisis management. Caroline Rizza, a researcher in information and communication sciences at Télécom ParisTech, studies these unique situations and the social, legal and ethical issues they present.

 

Vancouver, 2011. The defeat of the local hockey team in the Stanley Cup final provoked chaos in the city center. Hooligans and looters were filmed and photographed in the act. The day after the riots, the Vancouver police department asked citizens to send in their files in order to identify the hooligans. Things spiraled out of control. The police were inundated with an astronomical quantity of unclear images, in which it was hard to distinguish hooligans from those seeking to contain the rioters. Citizens, feeling justified by the authorities’ call for witnesses, were quick to create Facebook pages to share the images and let internet users identify the supposed criminals. Photographs, names and addresses spread across the internet. A veritable manhunt was organized. The families of accused minors were forced to flee their homes.


During the riots in Vancouver, citizens posted images of rioters on Facebook
and organized a veritable manhunt…

The case of the riots in Vancouver is an extreme example, but on the other side of the Atlantic, other cases of civilian mobilization have also raised new issues. In an act of solidarity during the attacks in France on 13 November 2015, Parisian residents invited citizens in danger to take refuge in their homes and shared their addresses on social media. They could have become targets for potential attacks themselves. “These practices show how civilian solidarity is manifested on social media during this kind of event, and how this solidarity can impact professional practices of crisis management and create additional difficulties” comments Caroline Rizza, a researcher in information and communication sciences at Télécom ParisTech. The MACIV project (Management of CItizens and Volunteers: social media in crisis situation), launched last February and financed by the ANR via the French General Secretariat for Defense and National Security, examines these issues. Its partners include Télécom ParisTech, IMT Mines Albi-Carmaux, LATTS, the French Directorate General for Civil Security and Crisis Management, VISOV, NDC, and the Defense and Security Zone of the Paris Prefecture of Police. At Télécom ParisTech, Caroline Rizza and her team are working on the social, ethical and legal issues posed by these unique situations.

Coordinating surges of civilian solidarity for better crisis management

“We are going to work on case studies of initiatives during crises which occur regularly or are likely to reoccur, such as train strikes or periods of snow, in order to analyze the chronological unfolding of events and the way in which online social media is mobilized,” explains the researcher. “We are also interested in the way in which information is published and spreads on online social media during serious incidents. We are going to conduct interviews with institutional operators to understand how they have been impacted internally, as well as with contributors to Wikipedia pages.” Given that the development of these pages is participatory and open, does the widely-trusted information they contain come from institutions or citizens?

For the researcher, the aim is to see the emergence of different types of organization that can later be used in crisis management. Based on the recommendations drawn from this study, the project’s objective is to integrate a management module for citizen initiatives on a platform for mediation and coordination in the event of a crisis, created by IMT Mines Albi-Carmaux, and whose development was launched in the framework of other projects such as the GéNéPI project by the ANR.

“Citizens are often mass-mobilized in a rush of solidarity during crises. Despite good intentions, the number of people taking action is often so great that they interfere with good crisis management and the phase in which the authorities take over,” comments Caroline Rizza. For example, after a flood, surges of solidarity to clean the streets can in fact hinder the authorities in charge. Similarly, after the attacks in Paris on 13 November 2015, blood donor organizations started to refuse donations because they had become too numerous in too short a space of time, whereas the need for blood donations is a long-term one. Hence the need to properly manage and coordinate civilian mobilization.

Data protection, fake news and unclear legal situations

Besides the simple question of coordination, the management of mobilization among civilians and volunteers raises complex ethical and legal issues. In extreme situations, citizens are more likely to expose themselves in order to help or find an immediate solution, as shown by the “parisportesouvertes” hashtag on Twitter during the attacks of 13 November. Moreover, when the Eyjafjallajökull volcano in the south of Iceland erupted in 2010, citizens and stranded tourists shared data to get organized, find alternative travel solutions or come to the aid of people stranded by the eruption. The result: this data was later retrieved by private companies. “In the context of the transition to the GDPR, it is essential to examine the question of personal data and be able to guarantee citizens that their data will not be employed by third parties or used out of context,” says Caroline Rizza.

 


During the attacks in Paris on 13 November 2015, some users shared their address on social media, notably through #parisportesouvertes.

But personal data is not the only thing leaked on online social media. During the attack on 14 July 2016 in Nice, false information maintained that hostages had been taken in a restaurant, when people were in fact simply taking shelter. “These rumors are not necessarily spread with bad intentions,” explains the researcher, “but it is absolutely necessary to integrate and analyze the information in order to quash rumors because they can cause even more panic.” Caroline Rizza’s hypothesis is therefore that the integration of social media in crisis management, in terms of institutional communication or transmission of information from the field, will allow rumors about an event to be quickly contained.

Lastly, what about prevention and the preparation of citizens for disasters? “Although these two phases are often left to one side by studies, there are very real challenges in ensuring that citizens themselves participate in prevention through online social media,” explains Caroline Rizza. Nevertheless, the researcher stresses that the authorities must maintain an ethic of shared responsibility. They must never offload their responsibilities onto citizens. But what can the authorities ask citizens to do? To publish information? Help others? The legal vacuum remains…

Whatever answers the MACIV project provides to these problems, the integration of online social media in crisis management must be carefully studied. Although we cannot do without traditional channels of information, since not all citizens necessarily have a Twitter or Facebook account, it is no longer possible to manage crises without including social media in the strategies. As Caroline Rizza says: “whatever happens, citizens will use them in a crisis, with or without us!”


Could additive manufacturing be the future of the plastics industry?

There has been a significant revolution in the world of materials as 3D printing has proved its potential for innovation. Now it must be put into practice. While 3D printers are increasingly present in prototyping workshops, they have been slow to replace the processes traditionally used in the plastics industry. This is because these technologies are too restrictive, in terms of both available materials and performance, which is still low for parts having specific uses.

Article written by Jérémie Soulestin, researcher at IMT Lille Douai.

[divider style=”normal” top=”20″ bottom=”20″]

 

[dropcap]P[/dropcap]olymers now represent the vast majority of materials used in 3D printing for all purposes. But the use of additive manufacturing for industrial needs has remained primarily confined to metals. For industry professionals, the use of polymers is limited to prototyping. This reluctance to combine 3D printing and plastic materials on an industrial scale can be explained by the low diversity of polymer materials compatible with this process. The most widely-used technologies, such as stereolithography, rely on thermosetting resins which polymerize during the process. Yet these resins represent only a small share of the polymer materials used by the plastics industry. Fused deposition modeling (FDM) is slightly better-suited to industrial needs. It is based on thermoplastic polymers, which soften when heated during the process, then harden when they return to room temperature. This technical difference makes it possible to make use of materials which are more commonly used in the plastic industry and are therefore better aligned with market demand.

FDM technology was invented in 1989 by Scott Crump, founder of Stratasys, which is now one of the leading manufacturers of 3D printers. It is a very straightforward process. A filament made from thermoplastic polymer is fed into the machine. It is pushed through a heated nozzle to produce a malleable string measuring a few hundred micrometers in diameter. The 3D part is obtained by continuously depositing this string, layer by layer, by moving the nozzle or the printer table, or both, in all directions. The simplicity of the FDM process, coupled with the expiry of Stratasys’s patent for the technology, has led to exponential growth in 3D desktop printers. These small-scale machines are mainly intended for the general public and the maker community but have also made their way into design offices and corporate fablabs. Prices range from €220 for 3D printers for beginners who want to discover additive manufacturing to more than €2,000 for 3D desktop printers designed to create prototypes.

FDM technology: turning hope into reality

Although the FDM process is probably the best-suited solution for producing short-run plastic parts, it is not able to miraculously meet the needs of the plastics industry. Even if professional machines that are much more expensive than 3D desktop printers were to be used, there are drawbacks with the thermoplastic polymers available which limit the potential for FDM applications. The philosophy of FDM machine manufacturers – especially manufacturers of professional machines – is still based on the use of proprietary materials. In other words: for a certain brand of printer, only the polymers sold by that brand or its partners are compatible. The process is configured to ensure the quality of parts made using proprietary materials. Users have little control in terms of changing the settings or using other materials.  Even though some brands offer a wide range of materials with different characteristics (hard, soft, translucent, chemically-resistant, biocompatible etc.), they are still limited to a few thermoplastic polymers. The proliferation of suppliers of filaments for 3D printers has not made up for gaps in the catalogue of usable materials. For specific industrial applications – especially high-performance parts like automobile or aircraft components – but even for less technical applications, manufacturers have not yet been won over by this technology. The reason that thousands of different kinds of plastics exist is undoubtedly linked to the fact that each application requires specific properties.

On top of that, there are inherent problems with the FDM process. In general, parts made using FDM are more porous and rougher on the surface than those made with conventional processes like extrusion or injection. This is due to the fact that strings are deposited layer by layer. Because they have a cylindrical shape, a space is created between two strings placed next to one another. As a result, the surface of the part is not smooth and some cavities remain inside. This porosity can be controlled by applying high pressure on the material as the string is being deposited so that the cylinders are compressed and less space is left. But that still may not be enough to meet specifications for high-performance parts. Moreover, it does not solve the problem of surface roughness, since there is no subsequent layer to compress the top surface. The only way to reduce this roughness would be to decrease the diameter of the cylinders and therefore the nozzle outlet, which would reduce the flow rate of material – and consequently increase production time.
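To make this trade-off concrete, here is a minimal sketch in Python, with purely hypothetical numbers, of how string diameter drives material flow rate and therefore deposition time, assuming the extruded string is approximated as a cylinder laid down at a constant nozzle travel speed.

```python
import math

def deposition_time(part_volume_mm3, string_diameter_mm, travel_speed_mm_s):
    """Rough estimate of FDM deposition time for a given string diameter.

    Assumes the extruded string is a cylinder laid down at a constant
    nozzle travel speed, so volumetric flow = cross-section x speed.
    (Hypothetical model for illustration only.)
    """
    cross_section = math.pi * (string_diameter_mm / 2) ** 2  # mm^2
    flow_rate = cross_section * travel_speed_mm_s            # mm^3/s
    return part_volume_mm3 / flow_rate                       # seconds

# Hypothetical example: a 50 cm^3 part deposited at 40 mm/s travel speed.
volume_mm3 = 50_000
for diameter in (0.4, 0.2):  # coarser vs. finer string, in mm
    hours = deposition_time(volume_mm3, diameter, 40) / 3600
    print(f"string diameter {diameter} mm -> about {hours:.1f} h of deposition")
```

Halving the diameter divides the cross-section, and therefore the flow rate, by four, so deposition time roughly quadruples: the price to pay for a smoother surface.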

Other promising technologies

These drawbacks of FDM could certainly be overcome by using machines based on polymer pellets instead of filaments. Manufacturers are taking steps in this direction, which would in theory make it possible to use pellets of any thermoplastic polymer. The Freeformer technology marketed by Arburg, a German injection-moulding specialist, does just that. It is based on two injection units that melt the polymer pellets, which makes it possible to create parts which require both hard and soft materials or use soluble materials. The molten polymer is deposited in the form of droplets to build the part, rather than in cylindrical threads. In this way, the process is somewhat different from FDM, but it is essentially based on the same principle, and is better-suited to meet the needs of the plastics industry. But research and development work must still be carried out to better understand the possibilities offered by this new process and tap into its full potential.

Future advances in 3D printing technology will depend on the use of additive manufacturing for mass production. For now, this technology is not suited to most applications that require high mechanical performance, but that could change. The aeronautics industry is a good example of the challenges that lie ahead for additive manufacturing. Although aircraft parts are still produced with conventional materials and processes, manufacturers are increasingly using composite materials. These composites are produced with techniques that resemble 3D printing. Combining them with technologies similar to FDM on robotic arms would be one way to greatly improve the performance of parts obtained through additive manufacturing. If such hybrid processes prove effective, it would then become possible to use additive manufacturing to produce structural aircraft components, instead of just their prototypes.


AI in healthcare for the benefit of individuals and society?

Article written by Christian Roux (Director of Research and Innovation at IMT), Patrick Duvaut (Director of Innovation at IMT), and Eric Vibert (professor at Université Paris-Sud/Université Paris Saclay, and surgeon at Hôpital Paul Brousse (AP-HP) in Villejuif).

[divider style=”normal” top=”20″ bottom=”20″]

How can artificial intelligence be built in such a way that it is humanistic, explainable and ethical? This question is central to discussions about the future of AI technology in healthcare. This technology must provide humans with additional skills without replacing healthcare professionals. Focusing on the concept of “sustainable digital technology,” this article presents the key ideas explained by the authors of a long format report published in French in the Télécom Alumni journal. 

 

Artificial intelligence (AI) is still in its early stages. Performing single tasks and requiring huge numbers of examples, it lacks consciousness, empathy and common sense. In the medical field, the relationship between patient and augmented caregiver, or virtual medical assistant, is a key issue. In AI, what is needed is personalized, social learning behavior between machines and patients. There is also a technological and scientific limitation: the lack of explainability in AI. Methods such as deep learning act like black boxes. A single verdict resulting from a process that can be described as “input big data, output results” is not sufficient for practitioners or patients. For the former, it is an issue of taking responsibility for supporting, understanding and controlling the patient treatment pathway, while for the latter, it is one of adhering to a treatment and therapy approach, crucial aspects of patients’ involvement in their own care.

However, the greatest challenges facing AI are not technological, but are connected to its use, governance, ethics, etc. Going from “cognitive” and “social” technologies to decision-making technologies is a major step. It means completely handing over responsibility to digital technology. The methodological bases required for such a transition exist, but putting them to use would have a significant impact on the nature of the coevolution of humans and AI. As far as doctors are concerned, the January 2018 report by the French Medical Council (CNOM) is quite clear: practitioners wish to keep control over decision-making. “Machines must serve man, not control him,” states the report. Machines must be restricted to augmenting decision-making or diagnoses, like IBM Watson. As philosopher and essayist Miguel Benasayag astutely points out, “AI does not ask questions, Man has to ask them.”

Leaving humans in charge of decision-making augmented by AI has become an even more crucial issue in recent years. Since 2016, society has been facing its most serious crisis of confidence in social media and digital service platforms since the dawn of the digital age, with the creation of the World Wide Web in 1990. After nearly 30 years of existence, digital society is going through a paradigm shift. Facing pressure from citizens driven by a crisis of confidence, it is arming itself with new tools, such as the General Data Protection Regulation (GDPR). The age of alienating digital technology, which was free to act as a cognitive, social and political colonizer for three decades, is being replaced by “sustainable digital technology” that puts citizens at the center of the cyber-sphere by giving them control over their data and their digital lives. In short, it is a matter of citizen empowerment.

In healthcare this trend translates into “patient empowerment.” The report by the French Medical Council and the health report for the General Council for the Economy both advocate a new healthcare model in the form of a “health democracy” based on the “6P health”: Preventative, Predictive, Personalized, Participatory, Precision-based, Patient-focused.

“Humanistic AI” for the new treatment pathway

The following extract from the January 2018 CNOM report asserts that healthcare improvements provided by AI must focus on the patient, ethics and building a trust-based relationship between patient and caregiver: “Keeping secrets is the very basis of people’s trust in doctors. This ethical requirement must be incorporated into big data processing when building algorithms.” This is also expressed in the recommendations set forth in Cédric Villani’s report on Artificial Intelligence. Specifically, humanistic artificial intelligence for healthcare and society would include three components to ensure “responsible improvements”: responsibility, measurability and native ethics.

The complexity of medical devices and processes, the large number of parties involved, and the need to instantly access huge quantities of highly secure data require the use of “AI as a Trusted Service” platforms (AIaaTS), which natively integrate all the virtues of “sustainable digital technology.” AIaaTS is based on a data vault that incorporates the three key aspects of digital cognitive technologies (perception, reasoning and action) and is not limited to deep learning or machine learning alone.

Guarantees of trust, ethics, measurability and responsibility would rely on characteristics that use tools of the new digital society. Native compliance with GDPR, coupled with strong user authentication systems would make it possible to ensure patient safety. The blockchain, with its smart contracts, can also play a role by allowing for enhanced notarization of administrative management and medical procedures. Indicators for ethics and explainability already exist to avoid the black-box effect as much as possible. Other indicators, similar to the Value Based Healthcare model developed by Harvard University Professor Michael Porter, measure the results of AI in healthcare and its added value for the patient.

All these open-ended questions allow us to reflect on the future of AI. The promised cognitive and functional improvements must be truly responsible improvements and must not come at the cost of alienating individuals and society. If AI can be described as a machine for improvements, humanistic AI is a machine for responsible improvements.

 


RAMSES: Timekeeper for embedded systems

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which Télécom ParisTech belongs.

[divider style=”normal” top=”20″ bottom=”20″]

Embedded computing systems are sometimes responsible for performing “critical” functions. In the transport industry, for example, they help prevent collisions between vehicles. To help design these important systems, Étienne Borde, a researcher at Télécom ParisTech specialized in embedded systems, developed RAMSES. This platform gives developers the tools they need to streamline the design process for these systems. Its potential for various applications in the industrial and transport sectors and robotics has been recognized by the Télécom & Société Numérique Carnot Institute, which has made it a part of its technological platform.

 

What is the purpose of the RAMSES platform?

Étienne Borde: RAMSES is a platform that helps design critical real-time embedded systems. This technical term refers to embedded systems that have a significant time component: if a computer operation takes longer than planned, a critical system failure could occur. In terms of the software, time is managed by a real-time operating system. RAMSES automates the configuration of this system while ensuring the system’s time requirements are met.

What sectors could this type of system configuration support be used for?

EB: The transport sector is a prime candidate. We also have a case study for the railway sector that shows what the platform could contribute in this field. RAMSES is used to estimate the worst-case data transmission time for a train’s control system. The most critical messages transmitted ensure that the train does not collide with another train. For safety reasons, the calculations are carried out using three computing units at the back of the train and three computing units at the front of the train. What RAMSES offers is better control of latency and better management of the flow of transmission operations.

How does RAMSES help improve the configuration of critical real-time embedded systems?

EB: RAMSES is a compiler of the AADL language. This language is used to describe computer architectures. The basic principle of AADL is to define categories of software or hardware components that correspond to physical objects used in the everyday life of computer scientists or electronic engineers. An example of one of these categories is that of processors: AADL can describe the computer’s calculation unit by its parameters and frequency.  RAMSES helps assemble these different categories to represent the system with different levels of abstraction. This explains how the platform got its name: Refinement of AADL Models for Synthesis of Embedded Systems.

How does a compiler like RAMSES benefit professionals?

EB: Professionals currently develop their systems manually using the programming language of their choice, or generate this code using a model. They can assess the data transmission time on the final product, but with poor traceability in relation to the initial model. If a command takes longer than expected, it is difficult for the developers to isolate the step causing the problem. RAMSES generates intermediate representations as it progresses, analyzing the time associated with each task to ensure no significant deviations occur. As soon as an accumulation of mechanisms presents a major divergence from the set time constraints, RAMSES alerts the professional. The platform can indicate which steps are causing the problem and help correct the AADL code.
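As a rough illustration of this kind of timing analysis (and not of RAMSES itself, which works on AADL models), the Python sketch below accumulates worst-case execution and transmission times along a hypothetical processing chain and flags the steps at which an end-to-end time constraint would be violated; the step names and durations are invented for the example.

```python
# Toy illustration of worst-case latency accumulation along a processing
# chain; this is not RAMSES itself (which analyzes AADL models), and the
# step names and durations are invented for the example.
from typing import List, Tuple

def check_latency(chain: List[Tuple[str, float]], deadline_ms: float) -> None:
    """Accumulate worst-case times and flag end-to-end deadline violations."""
    elapsed = 0.0
    for step, wcet_ms in chain:
        elapsed += wcet_ms
        status = "OK" if elapsed <= deadline_ms else "VIOLATION"
        print(f"{step:<22} +{wcet_ms:5.1f} ms  total {elapsed:6.1f} ms  {status}")

control_chain = [
    ("sensor acquisition",    2.0),
    ("front computing unit",  6.5),
    ("network transmission", 12.0),
    ("rear computing unit",   6.5),
    ("brake command",         3.0),
]
check_latency(control_chain, deadline_ms=25.0)
```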

Does this mean RAMSES is primarily a decision-support tool?

EB: Decision support is one part of what we do. Designing critical real-time embedded systems is a very complex task. Developers do not have all the information about the system’s behavior in advance. RAMSES does not eliminate all of the uncertainties, but it does reduce them. The tool makes it possible to reflect on the uncertainties to decide on possible solutions. The alternative to this type of tool is to make decisions without enough analysis. But RAMSES is not used for decision support only. The platform can also be used to improve systems’ resilience, for example.

How can optimizing the configuration impact the system’s resilience?

EB: Recent work by the community of researchers in systems security looks at mixed criticality. The goal is to use multi-core architectures to deploy critical and less critical functions on the same computing unit. If the time constraints for the critical functions are exceeded, the non-critical functions are degraded. The computing resources made available by this process are then used for critical functions, thereby ensuring their resilience.

Is this a subject you are working on?

EB: Our team has conducted work to ensure that critical tasks will always have enough resources, come what may. At the same time, we are working on the minimum availability of resources for less critical functions. This ensures that the non-critical functions are not degraded too often. For example, for a train, this ensures that the train does not stop unexpectedly due to a reduction in computational resources. In this type of context, RAMSES assesses the availability of the functions based on their degree of criticality, while ensuring enough resources are available for the most critical functions.

Which industrial sectors could benefit from the solutions RAMSES offers?

EB: The main area of application is the transport sector, which frequently uses critical real-time embedded systems. We have partnerships with Thales, Dassault and SAFRAN in avionics, Alstom for the railway sector and Renault for the automotive sector. The field of robotics could be another significant area of application. The systems in this sector have a critical aspect, especially in the context of large machines that could present a hazard to those nearby, should a failure occur. This sector could offer good use cases.

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by digital, energy and industrial transitions currently underway in in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]

The contest for the worst air pollutant

Laurent Alleman, IMT Lille Douai – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n its report published on June 28, 2018, the French Agency for Health Safety (ANSES) presented a list of 13 new priority air pollutants to monitor.

Several air pollutants that are harmful to human health are already regulated and closely monitored at the European level (in accordance with the directives from 2004 and 2008): NO2, NO, SO2, PM10, PM2.5, CO, benzene, ozone, benzo(a)pyrene, lead, arsenic, cadmium, nickel, gaseous mercury, benzo(a)anthracene, benzo(b)fluoranthene, benzo(j)fluoranthene, benzo(k)fluoranthene, indeno(1,2,3,c,d)pyrene and dibenzo(a,h)anthracene.

While some pollutants like ozone and PM10 and PM2.5 particles are well known and often cited in the media, others remain much less familiar. It should also be noted that this list is still limited, considering the significant number of substances emitted into the atmosphere.

So, how were these 13 new pollutants identified by ANSES? What were the criteria? Let’s take a closer look.

The selection of candidates

Identifying new priority substances to monitor in the ambient air is a long but exciting process. It’s a little like choosing the right candidate in a beauty contest! First, independent judges and experts in the field must be chosen. Next, the rules must be determined for selecting the best candidates from among the competition.

Over the past two years, the working group of experts developed a specific method for considering the physical and chemical diversity of the candidates present in ambient air.

To gather all the participants at this “beauty contest”, the experts first created a core list of chemical pollutants of interest that were not yet regulated. The experts did not include certain candidates, such as pesticides, pollen and mold, greenhouse gases and radioelements, because they were being assessed in other studies or were outside their scope of expertise.

This core list is based on information provided by Certified Associations of Air Quality Monitoring (AASQA) and French research laboratories like the Laboratoire des Sciences du Climat et de l’Environnement (LSCE) and the Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA). It is also informed by consultation with experts from national and international organizations like the European Environment Agency (EEA) and from Canada and the United States (US-EPA), as well as by inventories established by international organizations like WHO.

Finally, this list was supplemented by an in-depth study of recent international and national scientific publications on what are considered “emerging” pollutants.

This final list included 557 candidates! Just imagine the stampede!

Ranking the finalists

The candidates are then divided into four categories, based on the data available on atmospheric measurements and their intrinsic danger.

Category 1 includes substances that present potential health risks. Then there are categories 2a and 2b for candidates on which more data must be acquired from air measurements and studies on health impacts. Non-priority substances–with concentrations in the ambient air and health effects that do not reveal any health risks–are placed in category 3.

Certain exceptional candidates were reclassified, such as ultrafine particles (with diameters of less than 0.1 µm) and carbon soot, due to their potential health impacts on the population.

Finally, the experts prioritized the identified pollutants in category 1 to select the indisputable winner of this unusual beauty contest.
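The classification logic described above can be summarized in a few lines of code. The Python sketch below, with made-up substances and values, is only an illustration of the decision steps, not ANSES’s actual method or data; in particular, the exact split between categories 2a and 2b is assumed here.

```python
# Minimal sketch of the categorization logic described above, with made-up
# substances and values; this is not ANSES's actual method or data, and the
# exact split between categories 2a and 2b is assumed here.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    has_air_data: bool         # enough ambient-air measurements available?
    hazard_documented: bool    # intrinsic danger sufficiently documented?
    trv_exceedance: float      # measured level divided by the TRV, if known

def categorize(c: Candidate) -> str:
    if c.has_air_data and c.hazard_documented:
        return "1" if c.trv_exceedance > 1 else "3"   # health risk shown or not
    if c.hazard_documented:
        return "2a"   # hazard known, air measurements still needed (assumed split)
    return "2b"       # both health-impact studies and measurements needed

candidates = [
    Candidate("substance A", True,  True,  2.3),
    Candidate("substance B", True,  True,  0.4),
    Candidate("substance C", False, True,  0.0),
    Candidate("substance D", False, False, 0.0),
]
for c in candidates:
    print(c.name, "-> category", categorize(c))

priority = sorted((c for c in candidates if categorize(c) == "1"),
                  key=lambda c: c.trv_exceedance, reverse=True)
print("Category 1, ranked:", [c.name for c in priority])
```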

And the winner is…

The gas 1,3-butadiene ranked number one among the 13 new air pollutants to monitor, according to ANSES. It is followed by ultrafine particles and carbon soot, for which increased monitoring is recommended.

1,3-Butadiene is a toxic gas that originates from several combustion sources, including exhaust-pipe emissions from motor vehicles, heating, and industrial activities (plastics and rubber). Several temporary measurement campaigns in France revealed that the pollutant frequently exceeded its toxicological reference value (TRV), a value that establishes a relationship between a dose and its effect.

Its top spot on the podium comes as no surprise: it had already won a trophy in the United Kingdom and Hungary, two countries that have reference values for its concentration in the air. In addition, the International Agency for Research on Cancer (IARC) classified 1,3-butadiene as a known carcinogen for humans as early as 2012.

As for the ten other pollutants on the ANSES list, increased monitoring is recommended. These ten pollutants, for which TRV exceedances have been observed in specific (especially industrial) contexts, are, in decreasing order of risk: manganese, hydrogen sulfide, acrylonitrile, 1,1,2-trichloroethane, copper, trichloroethylene, vanadium, cobalt, antimony and naphthalene.

This selection is a first step towards 1,3-butadiene being added to a list of substances that are currently regulated in France. If the French government forwards this proposal to the European Commission, by the end of 2019 it could be included in the ongoing revision of the 2008 directive on monitoring air quality.

Since this classification method is adaptive, there is a good chance that new competitions will be organized in the coming years to identify other candidates.

Laurent Alleman, Associate Professor, IMT Lille Douai – Institut Mines-Télécom

The original version of this article was published on The Conversation.

 


20 terms for understanding mobility

Increasingly linked to digital technology, mobility is becoming more complex and diversified. Autonomous driving, multimodality, rebound effect and agile method are all terms used in the study of new forms of mobility. The Fondation Mines-Télécom has put together a glossary of 20 terms to help readers understand the issues involved in this transformation and clarify the ideas and concepts explained in its 10th booklet.

 

Active mobility A form of transport that uses only the physical activity of the human being for locomotion. Cycling, walking, roller skating, skateboarding etc.

Agile development Use of agile methods to create projects based on iterative, incremental and adaptive development cycles.

API  Application programming interface. A set of methods and tools through which a software program provides services to other software programs.

Ergonomics Scientific study of the relationship between human beings and their work tools, methods and environment. Its aim is to develop systems that provide maximum comfort, safety and efficiency.

Explanation interviews Interviews aimed at establishing as detailed a description as possible about a past activity.

Flooding Availability of more resources than necessary.

Free floating Fleet of self-service vehicles available for use without a home station.

Gentrification Urban phenomenon whereby wealthier individuals appropriate a space initially occupied by less privileged residents.

Intermodality Use of several modes of transport during a single journey. Not to be confused with multimodality.

IoT Internet of Things

L4, L5 High levels of autonomous driving. L4: the driver provides a destination or instructions and may not be in the vehicle. L5: Fully autonomous driving in all circumstances, without help from the driver.

MaaS Mobility as a service. This concept was formalized by professionals at the 2014 ITS European Congress in Helsinki and through the launch of the MaaS Alliance.

Modality Used to describe a specific mode of transport characterized by the vehicles and infrastructures used.

Multimodality Presence of several modes of transport between different locations. Not to be confused with intermodality.

PDIE Inter-company travel plan (French acronym for Plan de Déplacement Inter-Entreprise). Helps make individual company travel plans (PDE) more effective by grouping together several companies and pooling their needs.

Rebound effect Economic term describing the rise in consumption that occurs when the constraints on using a technology are reduced.

Soft mobility Sometimes used as an exact synonym for non-motorized (and thereby active) forms of mobility. The term “soft” refers to environmental sustainability relating to eco-mobility: reducing noise, limiting pollution etc. It sometimes also includes motorized or assisted forms of transport based on technologies that do not rely on oil.

Solo car use Term used to describe a driver alone in his/her car.

Transfer An often expensive step during which merchandise or passengers are transferred from one vehicle to another. The shortest possible transfer is desirable.

TCSP French acronym (transport collectif en site propre) for exclusive right-of-way transit.

 


Silicon and laser: a brilliant pairing!

Researchers at Télécom ParisTech, in partnership with the University of California, Santa Barbara (UCSB), have developed new optical sources. These thermally stable, energy-saving lasers offer promising potential for silicon photonics. These new developments offer numerous opportunities for improving very high-speed transmission systems, such as data communications and supercomputers. Their results were published last summer in Applied Physics Letters, a journal published by AIP Publishing.

 

Silicon photonics is a field that aims to revolutionize the microelectronics industry and communication technologies. It combines two of the most significant inventions: the silicon integrated circuit and the semiconductor laser. Integrating laser functions into silicon circuits opens up a great many perspectives, allowing data to be transferred quickly over long distances compared to conventional electronic solutions, while benefiting from silicon’s large-scale production efficiency. But there’s a problem: silicon is not good at emitting light. Laser emission is therefore achieved using compounds that combine elements from column III of the famous periodic table, such as boron or gallium, with elements from column V, such as arsenic or antimony.

Researchers from Télécom ParisTech and the University of California, Santa Barbara (UCSB) have recently presented a new technology for preparing these III-V components by growing them directly on silicon. This technological breakthrough enables researchers to obtain components with remarkable properties in terms of power output, power supply and thermal robustness. The results show that these sources have more stability in the presence of interfering reflections—a critical aspect in producing low-cost communication systems without an optical isolator. Industrial giants such as Nokia Bell Labs, Cisco, Apple and major digital stakeholders like Google and Facebook have high hopes for this technology. It would allow them to develop the next generation of extremely high-speed optical systems.

The approach currently used in the industry is based on thermally adhering a semiconductor laser (developed with a III-V material) to a structured silicon substrate to direct the light. Thermal adhesion does not optimize costs and is not easy to replace, since silicon and III-V elements are not naturally compatible. However, this new technology will pave the way for developing laser sources directly on silicon, a feat that is much more difficult to achieve than for other components (modulators, waveguides, etc.). Silicon has been an essential component of microelectronics for years now, and these new optical sources on silicon will help the industry meet the current challenges without overhauling its manufacturing processes: producing higher-speed systems that are cost-effective, compact and energy-efficient.

This technological breakthrough is the result of collaboration between Frédéric Grillot, a researcher at Télécom ParisTech, and John Bowers, a researcher at the UCSB. The work of Professor Bowers’ team contributed to developing technology that produced the first “hybrid III-V on silicon” laser with Intel in 2006. In 2007, it won the “ACE Award” (Annual Creativity in Electronics) for the most promising technology. The collaboration between John Bowers and Frédéric Grillot and his team is one of the few that exist outside the United States.

 

 


Computerizing scripts: losing character?

With the use of the Internet and new technologies, we rely primarily on writing to communicate. This change has led to a renewed interest in graphemics, the linguistic study of writing. Yannis Haralambous, a researcher in automatic language processing and text mining at IMT Atlantique, helped organize the /gʁafematik/ conference held in June 2018. In this article, we offer a look at the research currently underway in the field of graphemics and address questions about the computerization of scripts in Unicode, the encoding that will decide the future of languages and their characters…

 

Graphemics. Never heard this word before? That’s not surprising. The word refers to a linguistic study similar to phonology, except that it focuses on a language’s writing system, an area that was long neglected as linguistic study tended to focus more on the spoken than on the written word. Yet the arrival of new technologies, based strongly on written text, has changed all that and explains the renewed interest in the study of writing. “Graphemics is of interest to linguists just as much as it is to computer engineers, psychologists, researchers in communication sciences and typographers,” explains Yannis Haralambous, a researcher in computerized language processing and text mining at IMT Atlantique. “I wanted to bring all these different fields together by organizing the /gʁafematik/ conference on June 14 and 15, 2018, on graphemics in the 21st century.”

Among those in attendance were typographers who came to talk about the readability of typographic characters on a screen. Based on psycholinguistic field studies, they work to design fonts with specific characters for children or people with reading difficulties. At the same time, several researchers presented work on the use of emojis. This research reveals how these little pictographs no larger than letters can enrich the text with their emotional weight or replace words and even entire sentences, changing the way we write on social networks. In addition, the way we type with keyboards and the speed of our typing could be used to create signatures for biometric identification, with possible applications in cyber-security or in the diagnosis of disorders like dyslexia.

Yet among all the research presented at this conference, one leading actor in written language stole the show: Unicode. Anyone who types on a computer, whether they know it or not, is constantly dealing with this generalized and universal character encoding. But its goal of integrating all the world’s languages is not without consequences on the development of writing systems and particularly for “sparse” modern languages and dead languages.

Questioning the encoding system’s dominance  

Press a key on your keyboard, for example the letter “e”. A message is sent to your machine in the form of a number, a numeric code, received by the keyboard driver and immediately converted into digital information called a Unicode character. This is the information that is then stored or transmitted. At the same time, the screen driver interprets this Unicode character and displays an image of the letter “e” in your text processing area. “If you change the virtual keyboard in your machine’s system preferences to Russian or Chinese, the same key on the same physical keyboard will produce different characters from other writing systems proposed by Unicode,” explains Yannis Haralambous.
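This keyboard-to-code-point pipeline can be observed directly from a programming language. The short Python sketch below, using the standard unicodedata module, prints the code point behind a few characters and shows that the same displayed glyph can correspond to two different Unicode sequences; the example characters are chosen purely for illustration.

```python
import unicodedata

for ch in ["e", "\u00e9", "\u0435", "\u6f22"]:  # e, é, Cyrillic е, 漢
    print(f"{ch}  U+{ord(ch):04X}  {unicodedata.name(ch, '(no name)')}")

# The same displayed glyph can correspond to different Unicode sequences:
precomposed = "\u00e9"      # é as a single code point
combining   = "e\u0301"     # e followed by the combining acute accent
print(precomposed == combining)                                # False
print(unicodedata.normalize("NFC", combining) == precomposed)  # True
```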

Unicode is the result of work by a consortium of actors in the IT industry like Microsoft, Apple and IBM. Today it is the only encoding system that has been adopted worldwide. “Because it is the only system of its kind, it bears all the responsibility for how scripts develop,” the researcher explains. By accepting to integrate certain characters into its directory while rejecting others, Unicode controls the computerization of scripts. While Unicode’s dominance does not pose any real threat to “closed” writing systems like the French language, in which the number of characters does not vary, “open” writing systems are likely to be affected. “Every year, new characters are invented in the Chinese language, for example, and the Unicode consortium must follow these changes,” the researcher explains. But the most fragile scripts are those of dead and ancient languages. Once the encoding proposals made by reports from enthusiasts or specialists have been accepted by the Unicode consortium, they become final. The recorded characters can no longer be modified or deleted. For Yannis Haralambous, this is incomprehensible. “Errors are building up and will forever remain unchanged!”

Also, while the physical keyboard does not naturally lend itself to the computerized transcription of certain languages, like Japanese or Chinese, with thousands of different characters, the methods used for phonetic input and the system that suggests the associated characters help bypass this initial difficulty. Yet certain Indian scripts and Arabic script are clearly affected by the very principle of cutting the script up into characters: “The same glyphs can be combined in different ways by forming ligatures that change the meaning of the text,” Yannis Haralambous explains. But Unicode does not share this opinion: because they are incompatible with its encoding system, these ligatures are seen as purely aesthetic and therefore non-essential… This results in a loss of meaning for some scripts. But Unicode also plays an important role in a completely different area.

A move towards the standardization of scripts?

While we learn spoken language spontaneously, written language is taught. “In France, the national education system has a certain degree of power and contributes to institutionalizing the written language, as do dictionaries or the Académie Française,” says Yannis Haralambous. In addition to language policies, Unicode strongly affects how the written language develops. “Unicode does not seek to modify or to regulate, but rather to implement. However, due to its constraints and technical choices, the system has the same degree of power as the institutions.” The researcher lists the trio that, in his opinion, offers a complete view of writing systems and their development: the linguistic study of the script, its institutionalization, and Unicode.

The example of Malayalam writing, in the Indian state of Kerala, clearly shows the important role character encoding and software play in how a population adopts a writing system. In the 1980s, the government implemented a reform to simplify the Malayalam ligature script to make it easier to print and computerize. Yet the population kept its handwritten script with complex ligatures. When Unicode emerged in 1990, Mac and Windows operating systems only integrated the simplified institutional version. What changed things was the simultaneous development of open source software, which made the script with traditional ligatures available on computers. “Today, after a period of uncertainty at the beginning of the 2000s, the traditional writing has made a strong comeback,” the researcher observes, emphasizing the unique nature of this case: “The writing reforms imposed by a government are usually definitive, unless a drastic political change occurs.”


Traditional Malayalam script. Credit: Navaneeth Krishnan S

 

While open source software and traditional writing won in the case of Malayalam script, are we not still headed towards the standardization of writing systems due to the computerization of languages? “Some experts claim that in a century or two, we will speak a mixture of different languages with a high proportion of English, but also Chinese and Arabic. And the same for writing systems, with Latin script finally taking over… But in reality, we just don’t know,” Yannis Haralambous explains. The researcher also points out that the transition to Latin script has been proposed in Japan and China for centuries without success.  “The mental patterns of Chinese and Japanese speakers are built on the characters: In losing the structure of characters, they would lose the construction of meaning…” While language does contribute to how we perceive the world, the role of the writing system in this process has yet to be determined. This represents a complex research problem to solve. As Yannis Haralambous explains, “it is difficult to separate spoken language from written language to understand which influences our way of thinking the most…”


Four flagship measures of the GDPR for the economy

Patrick Waelbroeck, Professor in Economic Sciences, Télécom ParisTech, Institut Mines-Télécom (IMT)

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]N[/dropcap]umerous scandals concerning data theft or misuse that have shaken the media in recent years have highlighted the importance of data protection. The implementation of the General Data Protection Regulation (GDPR) in Europe is designed to do just that through four flagship measures: seals of trust, accountability, data portability and pseudonymization. We explain how.

Data: potential negative externalities

The misuse of data can lead to identity theft, cyberbullying, disclosure of private information, fraudulent use of bank details or, less serious but nevertheless condemnable, price discrimination or the display of unwanted ads whether they are targeted or not.

All these practices can be regarded as “negative externalities”. In economics, a negative externality results from a market failure. It occurs when the actions of an economic actor have a negative effect on other actors with no offset linked to a market mechanism.

In the case of the misuse of data, negative externalities are caused for the consumer by companies that collect too much data in relation to the social optimum. In terms of economics, their existence justifies the protection of personal data.

The black box society

The user is not always aware of the consequences of data processing carried out by companies working in digital technology, and the collection of personal data can generate risks if an internet user leaves unintentional records in their browser history.

Indeed, digital technology completely changes the conditions of exchange because of the information asymmetry that it engenders, in what Frank Pasquale, a legal expert at the University of Maryland, calls the “black box society”: users of digital tools are unaware of both the use of their personal data and the volume of data exchanged by the companies collecting it. There are three reasons for this.

Firstly, it is difficult for a consumer to check how their data is used by the companies collecting and processing it, and it is even more difficult to know whether this use is compliant with legislation or not. This is even more true in the age of “Big Data”, where separate databases containing little personal information can be easily combined to identify a person. Secondly, an individual struggles to technically assess the level of computer security protecting their data during transmission and storage. Thirdly, information asymmetries undermine the equity and reciprocity in the transaction and create a feeling of powerlessness among internet users, who find themselves alone facing the digital giants (what sociologists call informational capitalism).

Seals and marks of trust

The economic stakes are high. In his work that earned him the Nobel Prize in Economic Sciences, George Akerlof showed that situations of information asymmetry could lead to the disappearance of markets. The digital economy is not immune to this theory, and the deterioration of trust creates risks that may lead some users to “disconnect”.

In this context, the GDPR strongly encourages (but does not require) seals and marks of trust that allow Internet users to better comprehend the dangers of the transaction. Highly common in the agri-food industry, these seals are marks such as logos, color codes and letters that prove compliance with regulations in force. Can we count on these seals to have a positive economic impact for the digital economy?

What impacts do seals have?

Generally speaking, seals have three positive economic effects: a signaling effect, a price effect and an effect on quantity.

In economics, seals and other marks of confidence are analyzed through signaling theory. This theory aims to solve the problems of information asymmetry by sending a costly signal that the consumer can interpret as proof of good practice. The costlier the signal, the greater its impact.

Economic studies on seals show that they generally have a positive impact on prices, which in turn boosts reputation and quantities sold. In the case of seals and marks of trust for personal data protection, we can expect a greater effect on volume than on prices, in particular for non-retail websites or those offering free services.

However, existing studies on seals also show that the economic impacts are all the more substantial when the perceived risks are greater. In the case of agri-food certifications, health risks can be a major concern; but risks related to data theft are still little known among the general public. From this point of view, seals may have a mixed effect.

Other questions also remain unanswered: should seal certification be carried out by public or private organizations? How should the process of certification be financed? How will the different seals coexist? Is there not a risk of confusion for internet users? If the seal is too expensive, is there not a danger that small businesses won’t be able to become certified?

In reality, fewer than 15% of the 50 most-visited websites currently display a data protection seal. Will the GDPR change this situation?

Accountability and fines

The revelations of the Snowden affair on large-scale surveillance by States have also created a sense of distrust towards operators of the digital economy. So much so that, in 2015, 21% of internet users were prepared not to share any information at all, compared with only 5% in 2009. Is there a societal cost to the “everything for free” era on the Internet?

Instances of customer data theft among companies such as Yahoo, Equifax, eBay, Sony, LinkedIn or, more recently, Cambridge Analytica and Exactis, number in the millions. These incidents are too rarely followed up by sanctions. The GDPR establishes a principle of accountability which forces companies to be able to demonstrate that their data processing is compliant with the requirements laid down by the GDPR. The regulation also introduces a large fine that can be as much as 4% of worldwide consolidated turnover in the event of failure to comply.

These measures are a step in the right direction because people trust an economic system if they know that unacceptable behavior will be punished.

Data portability

The GDPR establishes another principle with an economic aim: data portability. Like the mobile telephony sector where number portability has helped boost competition, the regulation aims to generate similar beneficial effects for internet users. However, there is a major difference between the mobile telephony industry and the internet economy.

Economies of scale in data storage and use and the existence of network externalities on multi-sided online platforms (platforms that serve as an intermediary between several groups of economic players) have created internet monopolies. In 2017, for example, Google processed more than 88% of online searches in the world.

In addition, the online storage of information benefits users of digital services by allowing them to connect automatically and save their preferences and browser history. This creates a “data lock-in” situation resulting from the captivity of loyal users of the service and characterized by high switching costs. Where can you go if you leave Facebook? This situation allows monopolizing companies to impose conditions of use of their services, facilitating large-scale exploitation of customer data, sometimes at the users’ expense. Consequently, the relationship between data portability and competition is like a “Catch-22”: portability is supposed to create competition, but portability is not possible without competition.

Pseudonymization and explicit consent

The question of trust in data use also concerns the economic value of anonymity. In a simplified theory, we can assume there is a trade-off between economic value on the one hand and the protection of privacy and anonymity on the other. This theory has two extreme situations: one in which a person is fully identified and likely to receive targeted offers, and another in which the person is anonymous. In the first case, the economic value of the data is maximal; in the second, the data has no value.

If we move along this scale towards the targeting end, economic value increases at the expense of privacy. Conversely, if we protect privacy, the economic value of the data falls. But this theory does not account for the challenges raised by trust in data use. To build a long-term customer relationship, it is important to think in terms of risks and externalities for the customer. There is therefore an economic value to protecting privacy, built around long-term relationships, trust, the guarantee of freedom of choice, independence and the absence of discrimination.

The principles of pseudonymization and explicit consent in the GDPR follow this approach. Nevertheless, the main actors have to play along and comply, which still seems far from being the case: barely a month after the regulation took effect, the Consumer Council of Norway (Forbrukerrådet), an independent organization funded by the Norwegian government, accused Facebook and Google of manipulating internet users into sharing their personal information.
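
By way of illustration, one common way to pseudonymize data is to replace direct identifiers with a keyed hash: records can still be linked for analysis, but cannot be re-identified without a secret key kept separately from the dataset. The example below is a minimal sketch, not a compliance recipe:

```python
# Minimal pseudonymization sketch: direct identifiers are replaced by a keyed
# hash (HMAC-SHA256). Records remain linkable for analysis, but cannot be
# re-identified without the secret key, which must be stored separately.
import hmac
import hashlib

SECRET_KEY = b"keep-this-key-outside-the-dataset"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "purchase": "book", "amount": 12.50}
record["email"] = pseudonymize(record["email"])  # the identifier is now a pseudonym
print(record)
```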

Patrick Waelbroeck, Professor in Economic Sciences, Télécom ParisTech, Institut Mines-Télécom (IMT)

The original version of this article (in French) was published on The Conversation.


EMPATHIC: AI to assist the elderly

Caring and expressive artificial intelligence? This concept, which seems to come straight out of a human-machine romance like the movie “Her”, is in fact at the heart of a Horizon 2020 project called EMPATHIC. The project aims to develop software for a virtual, customizable coach to assist the elderly. To learn more, we interviewed Dijana Petrovska-Delacretaz, the project’s Scientific Director for Télécom SudParis and an expert in voice recognition.

[divider style=”normal” top=”20″ bottom=”20″]

The original version of this article (in French) was published on the Télécom SudParis website.

[divider style=”normal” top=”20″ bottom=”20″]

What is the goal of the Empathic project?

Dijana Petrovska-Delacretaz: The project’s complete title is “Empathic Expressive Advanced Virtual Coach to Improve Independent Healthy Life-Years of the Elderly”. The goal is to develop an advanced, empathetic and expressive virtual coach to assist the elderly in daily life. This interface, equipped with artificial intelligence and featuring several options, would be adaptable, customizable and available on several types of devices: PC, smartphone or tablet. We want to use a wide range of audiovisual information and interpret it to take the user’s situation into account, and to respond and interact with them in the most suitable way possible to offer assistance. This requires us to combine several types of technology, such as signal processing and biometrics.

How will you design this AI?

DPD: We will combine signal processing with speech, face and shape recognition using deep artificial neural networks. In other words, we will reproduce the structure of our brain using computer and processor calculations. This paradigm has become possible thanks to the massive increase in storage and computing capacities, which allow us to do much more in much less time.

We also use 3D modeling technology to develop avatars with more expressive faces (human, cat, robot, hamster or even dragon) that can be adapted to the user’s preferences to facilitate the dialogue between the user and the virtual coach. The most interesting solution from a biometric and artificial intelligence standpoint is to include as many options as possible: from using the voice and facial expressions to recognizing emotions. All of these aspects will help the virtual coach recognize its user and hold an appropriate dialogue with him or her.
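
For a sense of what such a component can look like in code, here is a generic, purely illustrative sketch (not the EMPATHIC system, and with arbitrary layer sizes and class counts): a small feed-forward network in PyTorch that maps a vector of pre-extracted audio features to a handful of emotion classes.

```python
# Purely illustrative sketch of a "deep network" for emotion recognition:
# a small feed-forward classifier mapping pre-extracted audio features
# (e.g. MFCC-like vectors) to a few emotion classes. Not the project's system.
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    def __init__(self, n_features: int = 40, n_emotions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, n_emotions),  # raw scores, one per emotion class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = EmotionClassifier()
features = torch.randn(1, 40)      # stand-in for real audio features
scores = model(features)
predicted = scores.argmax(dim=1)   # index of the most likely emotion
print(predicted.item())
```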

Why not create a robot assistant?

The robot NAO is used at Télécom SudParis in work on recognition and gesture interpretation.

DPD: The software could be fully integrated into a robot like Nao, which is already used here on the HadapTIC and EVIDENT platforms. This is certainly a feasible alternative: rather than having a virtual coach on a computer or smartphone, the coach would be physically present. The advantage of a virtual coach is that it is much easier to bring along on my tablet when I travel.

In addition, from a security standpoint, a robot like Nao is not allowed access everywhere. We really want to develop a system that is very portable and not bulky. The challenge is to combine all of these types of technology and make them work together so we can interact with the artificial intelligence based on scenarios that are best suited to an elderly individual on different types of devices.

Who is participating in the Empathic project?

DPD: The Universidad del Pais Vasco in Bilbao is coordinating the project and is contributing to the dialogue aspect. We are complementing this by providing the technology for vocal biometrics, emotion and face recognition, as well as the avatar’s physical appearance. The industrial partners involved in the project are Osatek, Tunstall, Intelligent Voice and Acapela. Osatek is the largest Spanish company specializing in connected objects for patient monitoring and home care. Tunstall also specializes in this type of equipment. Intelligent Voice and Acapela, for their part, will primarily provide assistance in voice recognition and speech synthesis. In addition, Osatek and Intelligent Voice will be responsible for securing the servers and storing the data. The idea is to create a prototype that they will then be able to use.

Finally, the e-Seniors association, the Seconda Università in Italy and Oslo University Hospital will provide the 300 user-testers who will test the EMPATHIC interface throughout the project. This gives us significant statistical validity.

What are the major steps in the project?

DPD: The project was launched in November 2017 and our research will last three years. We are currently preparing for an initial acquisition stage which will take place in the spring of 2018. For this first stage, we will make recordings with a “Wizard of Oz” system in the EVIDENT living lab at Télécom SudParis. With this system, a person located in another room simulates the virtual agent, which lets us collect data on the interaction between the user and the agent. It also allows us to create more complex scenarios that our artificial intelligence will then take into account. This first stage will provide us with additional audiovisual data and allow us to select the best personifications of the virtual coaches to offer to users. I believe this is important.

What are the potential opportunities and applications?

DPD: The research project is intended to produce a prototype. The idea behind having industrial partners is for them to quickly obtain and adapt this prototype to suit their needs. The goal of this project is not to produce a specific innovation, but rather to combine and adapt all these different types of technology so that they reach their potential through specific cases—primarily for assisting the elderly.

Another advantage of EMPATHIC is that it opens up many possible applications in other areas, including video games and social networks. A lot of people already interact with avatars in virtual worlds, like the video game Second Life, which is where one of our avatars, Bob the Hawaiian, comes from. EMPATHIC could therefore definitely be adapted outside the medical sector.

[divider style=”normal” top=”20″ bottom=”20″]

Do not confuse virtual and reality

The “Uncanny Valley” is a central concept in artificial intelligence and robotics. It holds that if a robot or AI takes a human form (whether physical or virtual), it should not resemble humans too closely, unless the resemblance is perfectly achieved from every angle. “As soon as there is a strong resemblance with humans, the slightest fault is strange and disturbing. In the end, it becomes a barrier in the interactions with the machine,” explains Patrick Horain, a research engineer at Télécom SudParis who specializes in digital imaging.

Dijana Petrovska-Delacretaz has taken this aspect into account and is developing the EMPATHIC coaches accordingly: “It is important to keep this in mind. In general, it is a scientific challenge to make a photo-realistic face that actually looks like a loved one, to the point where you cannot tell the real from the fake. But in the context of our project, it is a problem. It could be disturbing or confusing for an elderly person to interact with an avatar that looks like someone real or even a loved one. It is always preferable to propose virtual coaches that are clearly not human.”

[divider style=”normal” top=”20″ bottom=”20″]