
How will we interact with virtual reality?

The ways we interact with technology change over time and adapt to fit different contexts, bringing new constraints and possibilities. Jan Gugenheimer is a researcher at Télécom Paris and is particularly fascinated by interactions between humans and machines and the way they develop. In this interview, he introduces the questions surrounding our future interactions with virtual reality.

 

What is the difference between virtual, mixed and augmented reality?

Jan Gugenheimer: The most commonly cited definition presents a spectrum. Reality as we perceive it sits at the far left of the spectrum and virtual reality at the far right, with mixed reality in between. We can therefore create variations along this spectrum. When we retain more perceptions of actual reality, we move towards the left of the spectrum. When we replace real aspects with artificial information, we move towards virtual reality. Augmented reality is just one point on this spectrum. In general, I prefer to call this spatial computing, with the underlying paradigm that information is no longer confined to a flat, rectangular screen.

How do you study human-machine interactions?

JG: Our approach is to study what happens when a technology leaves the laboratory. Computers have already made this transition: they were once large, stationary machines, and a lot has changed. We now have smartphones in our pockets at all times. We can see what has changed: the input and the interface. I no longer use a physical keyboard to enter information into my phone (input); this had to change to fit a new context. User feedback has also changed: a vibrating computer wouldn’t make sense, but telephones need this feature. These changes happen because the context changes. Our work is to explore these interactions, their development, and the technology’s design.

Does this mean imagining new uses for virtual reality?

JG: Yes, and this is where our work becomes more difficult, because we have to make predictions. When we think back to the first smartphone, IBM Simon, nobody would have been able to predict what it would become, the future form of the object, or its use cases. The same is true for virtual reality. We look at the headset and think “this is our very first smartphone!” What will it become? How will we use it in our daily lives?

Do you have a specific example of these interactions?

JG: For example, when we use a virtual reality headset, we make movements in all directions. But imagine using this headset on a bus or on public transport: we can’t just hit the people around us. We therefore need to adapt the way information is input into the technology. We propose controlling the virtual movements with finger and eye movements instead. We must then study what works best: to what extent this is effective in controlling the movements and their scale, and whether the user can still perceive his or her own movements. This is a very practical aspect, but there are also psychological aspects, issues of immersion in virtual reality, and user fatigue.

Are there risks of misuse?

JG: In general, this pertains to design issues, since the design of a technology promotes a certain type of use. For current media, there is a body of research on what we call dark design: a designer creates an interface taking certain psychological aspects into account, to encourage the application to be used in a certain way of which you are probably not aware. If you use Twitter, for example, you can “scroll” through an infinite feed, and this compels you to keep consuming.

Some then see technology as negative, when in fact it is the way it is designed and implemented that makes it what it is. We could imagine a different Twitter, for example, that would display the warning “you have been connected for a long time, time to take a break”. We wonder what this will look like for spatial computing technology. What is the equivalent of infinite scrolling for virtual reality? We then look for ways to break this cycle and protect ourselves from these psychological effects. We believe that, because these new technologies are still developing, we have the opportunity to make them better.

Should standards be established to require ethically acceptable designs?

JG: That is a big question, and we do not yet have the answer. Should we create regulations? Establish guiding principles? Standards? Raise awareness? This is an open and very interesting topic. How can we create healthier designs? These dark designs can also be used positively, for example to limit toxic behavior online, which makes it difficult to define standards. I think transparency is crucial. Anyone who uses these techniques should disclose it publicly, for example by adding the notice “this application uses persuasive techniques to make you consume more.” This could be a solution, but there are still a lot of open questions surrounding this subject.

Can virtual reality affect our long-term behavior?

JG: Mel Slater, a researcher at the University of Barcelona, is a pioneer in this type of research. To strengthen a sense of empathy, for example, a man could use virtual reality to experience harassment from a woman’s point of view. This offers a new perspective, a different understanding. And we know that exposure to this type of experience can change our behavior outside the virtual reality situation. There is potential here for making us less sexist or less racist, but this also ushers in another set of questions. What if someone uses it for the exact opposite purpose? These are delicate matters involving psychology, design and questions about the role we, as scientists and designers, play in the development of these technologies. And I think we should think about the potential negative uses of the technology we are bringing into the world.

 

Tiphaine Claveau for I’MTech


COVID-19: contact tracing applications and new conversational perimeter

The original version of this article (in French) was published in the quarterly newsletter of the Values and Policies of Personal Information Chair (no. 18, September 2020).


On March 11, 2020, the World Health Organization officially declared that our planet was in the midst of a pandemic caused by the spread of Covid-19. First reported in China, then in Iran and Italy, the virus spread quickly and critically wherever it was given the opportunity. In two weeks, the number of cases outside China increased 13-fold and the number of affected countries tripled [1].

Every nation, every State, every administration, every institution, every scientist and every politician, every initiative and every willing public and private actor was called on to think and work together to fight this new scourge.

From the manufacture of masks and respirators to the pooling of resources and energy to find a vaccine, all segments of society joined together as our daily lives were transformed, now governed by a large-scale deadly virus. The very structure of the way we operate as a society was adapted in the context of an unprecedented lockdown period.

In this collective battle, digital tools were also mobilized.

As early as March 2020, South Korea, Singapore and China announced the creation of contact tracing mobile applications to support their health policies [2].

Also in March, in Europe, Switzerland reported that it was working on the creation of the “SwissCovid” application, in partnership with EPFL in Lausanne and ETH Zurich. This contact tracing application pilot project was eventually implemented on June 25. SwissCovid is designed to notify users who have been in extended contact with someone who tested positive for the virus, in order to control the spread of the virus. To quote the proponents of the application, it is “based on voluntary registration and subject to approval from Swiss Parliament.” Another noteworthy feature is that it is “based on a decentralized approach and relies on application programming interfaces (APIs) from Google and Apple.”

France, after initially dismissing this type of technological solution through its Minister of the Interior, who stated that it was “foreign to French culture,” eventually changed its position and created a working group to develop a similar app called “StopCovid”.

In total, no fewer than 33 contact tracing apps were introduced around the world [3], with questionable success.

However, many voices in France, Europe and around the world, have spoken out against the implementation of this type of system, which could seriously infringe on basic rights and freedoms, especially regarding individual privacy and freedom of movement. Others have voiced concern about the possible control of this personal data by the GAFAM or States that are not committed to democratic values.

The security systems for these applications have also been widely debated and disputed, especially the risks of facing a digital virus, in addition to a biological one, due to the rushed creation of these tools.

The President of the CNIL (National Commission for Information Technology and Civil Liberties), Marie-Laure Denis, echoed the key areas for vigilance aimed at limiting the potentially intrusive nature of these tools.

  • First, through an opinion issued on April 24, 2020 on the principle of implementing such an application, the CNIL stated that, given the exceptional circumstances involved in managing the health crisis, it considered the implementation of StopCovid feasible. However, the Commission expressed two reservations: the application should serve the strategy of the end-of-lockdown plan and be designed in a way that protects users’ privacy [4].
  • Then, in its opinion of May 25, 2020, urgently issued for a draft decree related to the StopCovid mobile app [5], the CNIL stated that the application “can be legally deployed as soon as it is found to be a tool that supports manual health investigations and enables faster alerts in the event of contact cases with those infected with the virus, including unknown contacts.” Nevertheless, it considered that “the real usefulness of the device will need to be more specifically studied after its launch. The duration of its implementation must be dependent on the results of this regular assessment.”

From another point of view, there were those who emphasized the importance of digital solutions in limiting the spread of the virus.

No application can cure or stop Covid. Only medicine and a possible vaccine can do this. However, digital technology can certainly contribute to health policy in many ways, and it seems perfectly reasonable that the implementation of contact tracing applications came to the forefront.

What we wish to highlight here is not so much the arguments for or against the design choices in the various applications (centralized or decentralized, sovereign or otherwise) or even against their very existence (with, in each case, questionable and justified points), but the conversational scope that has played a part in all the debates surrounding their implementation.

While our technological progress is impressive in terms of scientific and engineering accomplishments, our capacity to collectively understand interactions between digital progress and our world has always raised questions within the Values and Policies of Personal Information research Chair.

It is, in fact, the very purpose of its existence and the reason why we share these issues with you.

In the midst of urgent action taken on all levels to contain, manage and–we hope–reverse the course of the pandemic, the issue of contact tracing apps has caused us to realize that the debates surrounding digital technology have perhaps finally moved on to a tangible stage involving collective reflection that is more intelligent, democratic and respectful of others.

In Europe, and also in other countries around the world, certain issues have now become part of our shared basis for conversation. These include personal data protection, individual privacy, the technology that should be used, the type of data collected and its anonymization, application security, transparency, the availability of source codes, operating costs, whether or not to centralize data, the relationship with private or State monopolies, the need in duly justified cases for the digital tracking of populations, and independence from the GAFAM [6] and the United States [7] (or other third States).

In this respect, given how recent this situation is, and a relationship with technological progress that is no longer deified or vilified, nor even a fantasy from an imaginary world obscure to many, we have progressed. Digital technology truly belongs to us. We have collectively made it ours, moving beyond both blissful techno-solutionism and irrational technophobia.

If you are not yet familiar with this specific subject, please reread the Chair’s posts on Twitter dating back to the start of the pandemic, in which we took the time to identify all the elements of this conversational scope pertaining to contact tracing.

The goal is not to reflect on these elements as a whole, or the tone of some of the colorful and theatrical remarks, but rather something we see as new: the quality and wealth of these remarks and their integration in a truly collective, rational and constructive debate.

It was about time!

On August 26, 2020, French Prime Minister Jean Castex made the following statement: “StopCovid did not achieve the desired results, perhaps due to a lack of communication. At the same time, we knew in advance that conducting the first full-scale trial run of this type of tool in the context of this epidemic would be particularly difficult.” [8] Given the human and financial investment, it is clear that the cost-effectiveness ratio does not help the case for StopCovid (and similar applications in other countries) [9].

Further revelations followed when the CNIL released its quarterly opinion on September 14, 2020. While, for the most part, the measures implemented (SI-DEP and Contact Covid data files, the StopCovid application) protected personal data, the Commission identified certain poor practices. It contacted the relevant agencies to ensure they would become compliant in these areas as soon as possible.

In any case, the conclusive outcome that can be factually demonstrated is that remarkable progress has been made in our collective level of discussion, our scope for conversation in the area of digital technology. We are asking (ourselves) the right questions. Together, we are setting the terms for our objectives: what we can allow, and what we must certainly not allow.

This applies to ethical, legal and technical aspects.

It’s therefore political.

Claire Levallois-Barth and Ivan Meseguer
Co-founders of the Values and Policies of Personal Information research chair

 

 


5G: what is it? How does it work?

Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom


5G is the fifth generation of standards for mobile networks. Although this technology has fueled many societal debates on its environmental impact, possible health effects, and usefulness, here we will focus on the technological aspects.

How does 5G work? Is it a true technological disruption or simply an improvement on past generations?

Back to the Past

Before examining 5G in detail, let’s take a moment to consider the previous generations. The first (1G) was introduced in the 1980s and, unlike the following generations, was an analogue system. The primary application was car telephones.

2G was introduced in 1992, with the transition to a digital system, and telephones that could make calls and send short messages (SMS). This generation also enabled the first very low speed data transmission, at the speed of the first modems for internet access.

2000 to 2010 was the 3G era. The main improvement was faster data transmission, reaching a rate of a few megabits per second with 3G⁺, allowing for smoother internet browsing. This era also brought the arrival of touch screens, causing data traffic and use of the networks to skyrocket.

Then, from 2010 until today, we transitioned to 4G, with much faster speeds of 10 megabits per second, enabling access to streaming videos.

Faster and Faster

Now we have 5G. With the same primary goal of accelerating data transmission, we should be able to reach an average speed of 100 megabits per second, with peaks at a few gigabits per second under ideal circumstances (10 to 100 times faster than 4G).

This is not a major technological disruption, but an improvement on the former generation. The technology is based on the same principle as 4G: the same waveforms and the same principle of transmission will be used. This principle, called OFDM (orthogonal frequency-division multiplexing), enables parallel transmission: through mathematical processing, it can perform a large number of transmissions on neighboring frequencies. It is therefore possible to transmit more information at once. With 4G, we were limited to 1,200 parallel transmissions. With 5G, we will reach 3,300, with a greater speed for each transmission.
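The arithmetic behind this gain can be sketched in a few lines. The per-transmission rates below are illustrative assumptions, not figures from the 3GPP standards; only the counts of parallel transmissions (1,200 and 3,300) come from the text above.

```python
# Order-of-magnitude sketch: total OFDM throughput grows with the number of
# parallel transmissions and with the rate each one carries.

def aggregate_rate_mbps(parallel_transmissions: int, kbps_each: float) -> float:
    """Total throughput in megabits per second."""
    return parallel_transmissions * kbps_each / 1000

# 4G: limited to about 1,200 parallel transmissions
lte_like = aggregate_rate_mbps(1200, 84)   # assumed ~84 kbit/s per transmission
# 5G: about 3,300 parallel transmissions, each at an assumed higher rate
nr_like = aggregate_rate_mbps(3300, 270)

print(f"4G-like: {lte_like:.0f} Mbit/s, 5G-like: {nr_like:.0f} Mbit/s")
```

With these assumed numbers the aggregate rate grows by almost an order of magnitude, consistent with the speed gains described above.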

Initially, 5G will complement 4G: a smartphone will be connected to 4G and transmission with 5G will only occur if a high speed is necessary and, of course, if 5G coverage is available in the area.

A more flexible network

The future network will be configurable, and therefore more flexible. Until now, networks have been operated with dedicated hardware: the location databases needed to reach a mobile subscriber, for example, were built by telecom equipment manufacturers.

In the long term, the 5G network will make much greater use of computer virtualization technologies: the location database will be a little like an extremely secure web server that can run on one or several PCs. The same will be true for the various controllers which guarantee proper data routing when subscribers move to different areas in the network. The advantage is that the operator will be able to start additional virtual machines, for example to adapt to increased demand from users in certain areas or at certain times, and, conversely, reduce capacity when demand is lower.

It will therefore be possible to reconfigure a network when there is a light load (at night for example) by combining controllers and databases in a limited number of control units, thus saving energy.

New antennas

As we have seen, 5G technology is not very different from the previous technology. It would even have been possible to deploy it using the same frequencies as 3G networks.

The operators and the government agencies that allocate frequencies nevertheless chose to use new ones. This choice serves several purposes: it satisfies an ever-growing demand for speed and does not penalize users who would like to continue using older generations. Accommodating the increase in traffic requires the Hertzian spectrum (i.e., the frequencies) dedicated to mobile networks to be increased. This is only possible with higher frequency ranges: 3.3 GHz, coming very soon, and likely 26 GHz in the future.

Finally, bringing new technology into operation requires a test and fine-tuning phase before the commercial launch. Transitioning to 5G on a band currently used for other technology would significantly reduce the quality perceived by users (temporarily for owners of 5G telephones, definitively for others) and create many dissatisfied customers.

There is no need to increase the number of antenna sites in order to transmit on the new frequencies, but new antennas must be added to existing masts. These new antennas group together a large number of small antenna elements and, thanks to signal-processing algorithms, provide more directive coverage that can be closely controlled. The benefit is more efficient transmission in terms of speed and energy.

For a better understanding, we could use the analogy of flashlights and laser pointers. The flashlight, representing the old antennas, sends out light in all directions in a diffuse manner, consuming a large amount of electricity and lighting a relatively short distance. The laser, on the contrary, consumes less energy to focus the light farther away, but in a very narrow beam. Regardless of the antenna technology, the maximum power of the electromagnetic field produced in any direction will not be allowed to exceed maximum values for health reasons.

So, if these new antennas consume less energy, is 5G more energy efficient? We might think so, since each transmission of information will consume less energy. Unfortunately, with the increasing number of exchanges, the network will consume even more energy overall. Furthermore, the use of new frequencies will necessarily lead to an increase in operators’ electricity consumption.

New applications

When new technology is launched on the market, it is hard to predict all its applications. They often appear later and are driven by other stakeholders. That said, we can already imagine several possibilities.

5G will allow for a much lower latency between the sending and receiving of data. Take the example of a surgeon operating remotely with a mechanical arm. When the robot touches a part of the body, the operator will, almost instantly (a few milliseconds for a distance of a few kilometers), be able to “feel” the resistance of what he is touching and react accordingly, as if he were operating with his own hands. Low latency is also useful for autonomous cars and remote-controlled vehicles.
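The few-millisecond figure can be checked with a back-of-the-envelope calculation. The fibre speed factor and the fixed 1 ms network budget below are assumptions for illustration, not measured values:

```python
# Rough one-way latency for the remote-surgery example: fibre propagation
# delay plus an assumed fixed budget for switching and processing.

C_VACUUM_M_S = 299_792_458   # speed of light in vacuum, m/s
FIBRE_FACTOR = 2 / 3         # signals in optical fibre travel at roughly 2/3 c

def one_way_latency_ms(distance_km: float, network_budget_ms: float = 1.0) -> float:
    """Propagation delay over fibre plus an assumed fixed network budget, in ms."""
    propagation_s = (distance_km * 1000) / (C_VACUUM_M_S * FIBRE_FACTOR)
    return propagation_s * 1000 + network_budget_ms

# Over a few kilometres, propagation adds only ~0.025 ms, so the total stays
# in the low-millisecond range promised by 5G.
print(f"{one_way_latency_ms(5):.3f} ms")
```

In other words, at these distances the latency is dominated by the network itself, not by the physical distance, which is why reducing processing delays in the radio network matters so much.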

In industry, we could imagine connected and automated factories in which numerous machines communicate with one another and with a global network.

5G is also one of the technologies that will enable the development of the internet of things. A city equipped with sensors can better manage a variety of aspects, including public lighting, the flow of vehicles, and garbage collection. Electricity can also be better controlled, with the real-time adaptation of consumption to production through several small, interconnected units called a smart grid.

For the general public, the network’s increased speed will allow them to download any file faster and stream premium quality videos or watch them live.


Xavier Lagrange, Professor of network systems, IMT Atlantique – Institut Mines-Télécom

This article from The Conversation is republished here under a Creative Commons license. Read the original article (in French).

 


20 terms for understanding the environmental impact of digital technology

While digital technology plays an essential role in our daily lives, it is also a big consumer of resources. To explore the compatibility between the digital and environmental transitions, Institut Mines-Télécom and Fondation Mines-Télécom are publishing their 12th annual brochure entitled Numérique : Enjeux industriels et impératifs écologiques (Digital Technology: Industrial Challenges and Environmental Imperatives). This glossary of 20 terms taken from the brochure provides an overview of some important notions for understanding the environmental impact of digital technology.

 

  1. CSR: Corporate Social Responsibility — A voluntary process whereby companies take social and environmental concerns into account in their business activities and relationships with partners.
  2. Data centers — Infrastructure bringing together the equipment required to operate an information system, such as equipment for data storage and processing.
  3. Eco-design — A way to design products or services by limiting their environmental impact as much as possible, and using as few non-renewable resources as possible.
  4. Eco-modulation — Principle of a financial bonus/penalty applied to companies based on their compliance with good environmental practices. Primarily used in the waste collection and management sector to reward companies that are concerned about the recyclability of their products.
  5. Energy mix — All energy sources used in a geographic area, combining renewable and non-renewable sources.
  6. Environmental responsibility — Behavior of a person, group or company who seeks to act in accordance with sustainable development principles.
  7. Green IT — IT practices that help reduce the environmental footprint of an organization’s operations.
  8. LCA: Lifecycle Analysis — Tool used to assess the overall environmental impacts of a product or service throughout its phases of existence, by taking into account as many of the incoming and outgoing flows of resources and energy as possible over this period.
  9. Mine tailings — The part of the rock that is left over during mining operations since it does not have enough of the target material to be used by industry.
  10. Mining code — Legal code regulating the exploration and exploitation of mineral resources in France, dating from 2011 and based on the fundamental principles of the Napoleonic law of 1810.
  11. Paris Climate Agreement — International climate agreement established in 2015 following negotiations held during the Paris Climate Conference (COP21). Among other things, it sets the objective to limit global warming to 2 degrees by 2100, in comparison to preindustrial levels.
  12. PUE: Power Usage Effectiveness — Ratio of the total energy consumed by a data center to the energy consumed by its servers alone.
  13. Rare earths — Group of 17 metals, many of which have unique properties that make them widely used in the digital sector.
  14. Rebound effect — Increased use following improvements in environmental performance (reduced energy consumption or use of resources).
  15. Responsible innovation — Way of thinking about innovation with the purpose of addressing environmental or social challenges, while considering the way the innovation itself is sought or created.
  16. RFID: Radio-frequency identification — Very short distance communication method based on micro-antennas in the form of tags.
  17. Salt flat — High salt desert, sometimes submerged in a thin layer of water, containing lithium which is highly sought after to make batteries for electronic equipment.
  18. Virtualization — The act of creating a virtual version of an IT resource or service, usually through a service provider, in order to save on IT equipment costs.
  19. WEEE: Waste Electrical and Electronic Equipment — All waste from products operated using electrical current and therefore containing electric or electronic components.
  20. 5G Networks — 5th-generation mobile networks which, following 4G, will improve mobile data speeds and open up new possibilities for using mobile networks in new sectors.

Crisis management: better integration of citizens’ initiatives

Caroline Rizza, Télécom Paris – Institut Mines-Télécom


As part of my research into the benefits of digital technologies in crisis management, and in particular the digital skills of those involved in a crisis (whether institutions or grassroots citizens), I had the opportunity to shadow the Fire and Emergency Department of the Gard (SDIS) in Nîmes from 9 to 23 April 2020, during the COVID-19 health crisis.

This immersive investigation enabled me to fine-tune my research hypotheses on the key role of grassroots initiatives in crises, regardless of whether they emerge in the physical or virtual public space.

Social media to allow immediate action by the public

So-called “civil security” crises are often characterized by their rapidity (a sudden rise to a “peak”, followed by a return to “normality”), by uncertainty, tensions, victims, witnesses, etc.

The scientific literature in the field has demonstrated that grassroots initiatives appear at the same time as the crisis in order to respond to it: during an earthquake or flood, members of the public who are present on-site are often the first to help the victims, and after the crisis, local people are often the ones who organize the cleaning and rebuilding of the affected area. During the Nice terror attacks of July 2016, for example, taxi-drivers responded immediately by helping to evacuate the people present on the Promenade des Anglais. A few months earlier, during the Bataclan attacks in 2015, Parisians opened their doors to those who could not go home and used the hashtag #parisportesouvertes (parisopendoors). Genoa experienced two rapid and violent floods in 1976 and 2011; on both occasions, young people volunteered to clean up the streets and help shop owners and inhabitants in the days that followed the event.

These initiatives have increased since the arrival of social media in our daily lives, which has helped them emerge and get organized online as a complement to the actions that usually arise spontaneously in the field.

My research lies within the field of “crisis informatics”. I am interested in these grassroots initiatives which emerge and are organized through social media, as well as the issues surrounding their integration into crisis management. How can we describe these initiatives? What mechanisms are they driven by? How does their creation change crisis management? Why should we integrate them into crisis response?

Social media as an infrastructure for communication and organization

Since 2018, I have been coordinating the ANR MACIV project (Citizen and volunteer management: the role of social media in crisis scenarios). We have been looking at all the aspects of social media in crisis management: the technological aspect with the tools which can automatically supply the necessary information to institutional players; the institutional aspect of the status of the information coming from social media and its use in the field; the grassroots aspect, linked to the mechanisms involved in the creation and sharing of the information on social media and the integration of grassroots initiatives into the response to the crisis.

We usually think of social media as a means of communication used by institutions (ministries, prefectures, municipalities, fire and emergency services) to communicate with citizens top-down and improve the situational analysis of the event through the information conveyed bottom-up from citizens.

The academic literature in the field of “crisis informatics” has demonstrated the changes brought by social media, and how citizens have used them to communicate in the course of an event, provide information or organize to help.

On-line and off-line volunteers

We generally distinguish between “volunteers” in the field and online volunteers. As illustrated above, volunteers who are witnesses or victims of an event are often the first to intervene spontaneously, while online volunteers focus on organizing help through social media. This distinction can help us understand how social media have become a means of expressing and organizing solidarity.

It is interesting to note that certain groups of online volunteers are connected through agreements with public institutions, and their actions are coordinated during an event. In France, VISOV (international volunteers for virtual operation support) is the French version of the European VOST (Virtual Operations Support Team); we can also mention other groups such as the Waze community.

Inform and organize

There is therefore an informational dimension and an organizational dimension to the contribution of social media to crisis management.

Informational in that the content that is published constitutes a source of relevant information to assess what is happening on site: for example, fire officers can use online social media, photos and videos during a fire outbreak, to readjust the means they need to deploy.

And organizational in that the aim is to work together to respond to the crisis.

For example, creating a Wikipedia page about an ongoing event (and clearing up uncertainties), communicating pending an institutional response (Hurricane Irma, Cuba, in 2017), helping to evacuate a place (Gard, July 2019), taking in victims (Paris, 2015; Var, November 2019), or helping to rebuild or to clean a city (Genoa, November 2011).


Screenshot of the Facebook page of VISOV to inform citizens of available accommodation following the evacuation of certain areas in the Var in December 2019. VISOV Facebook page

An increased level of organization

During my immersion within the SDIS of the Gard as part of the management of the COVID-19 crisis, I had the chance to discuss and observe the way in which social media were used to communicate with the public (reminding them of preventative measures and giving them daily updates from regional health agencies), as well as to integrate some grassroots initiatives.

Although the crisis was a health crisis, it was also one of logistics. Many citizens (individuals, businesses, associations, etc.) organized to support the institutions: sewing masks or making them with 3D printers, turning soap production into hand sanitizer production, proposing to translate information on preventative measures into different languages and sharing it to reach as many citizens as possible; these were all initiatives which I came across and which helped institutions organize during the peak of the crisis.


Example of protective visors made using 3D printers for the SDIS 30.

 

The “tunnel effect”

However, the institutional actors I met and interviewed within the framework of the two studies mentioned above (SDIS, Prefecture, Defense and Security Zone, DGSCGC) all highlighted the difficulty of taking account of information shared on social media – and grassroots initiatives – during crises.

The large number of calls about the same event, the excess of information to be processed and the gravity of the situation mean that responders have to focus on the essentials. This is the “tunnel effect”, identified by these institutions as one of the main reasons why it is difficult to integrate these tools into their work and these actions into their emergency response.

The information and citizen initiatives which circulate on social media simultaneously to the event may therefore help the process of crisis management and response, but paradoxically, they can also make it more difficult.

Then there is also the spread of rumors and fake news on social media, especially when there is a gap in the information or contradictory ideas linked to an event (see, for example, the Wikipedia page on the COVID-19 crisis).

How and why should we encourage this integration?

Citizen initiatives have impacted institutions horizontally in their professional practices.

My observation of crisis management within the SDIS 30 enabled me to go one step further and put forward the hypothesis that another dimension is slowing down the integration of these initiatives, which emerge in the shared physical or virtual public space: it implies placing the public on the same level as the institution. In other words, these initiatives do not just have a horizontal “impact” on professional practices and their rules (doctrines); their integration requires citizens to be recognized as participants in the management of and response to the crisis.

There is still a prevailing idea that the public needs to be protected, but the current crisis shows that the public also want to play an active role in protecting themselves and others.

The main question that then arises is that of the necessary conditions for this recognition of citizens as participants in the management and response to the crisis.

Relying on proximity

It is interesting to note that at a very local level, the integration of the public has not raised problems; on the contrary, it is a good opportunity to diversify initiatives and recognize each participant within the region.

However, at a higher level in the operational chain of management, this poses more problems because of the commitment and responsibility of institutions in this recognition.

My second hypothesis is therefore as follows: close relations between stakeholders within the same territorial fabric allow better familiarity with grassroots players, thereby fostering mutual trust. This trust seems to me to be the key to success, and it explains the successful integration of grassroots initiatives in a crisis, as illustrated by VISOV or the VOSTs.


The original version of this article (in French) was published on The Conversation.
By Caroline Rizza, researcher in information sciences at Télécom Paris.


Datafarm: low-carbon energy for data centers

The start-up Datafarm proposes an energy solution for low-carbon digital technology. Within a circular economy system, it powers data centers with energy produced through methanization, by installing them directly on cattle farms.

 

When you hear about clean energy, cow dung probably isn’t the first thing that comes to mind. But think again! The start-up Datafarm, incubated at IMT Starter, has placed its bets on setting up facilities on farms to power its data centers through methanization. This process generates energy from the breakdown of animal or plant biomass by microorganisms under controlled conditions. Its main advantages are that it recovers waste and lowers greenhouse gas emissions by offering an alternative to fossil fuels. The result is green energy in the form of biogas.

Waste as a source of energy

Datafarm’s IT infrastructures are installed on cattle farms that carry out methanization. About a hundred cows can fuel a 500 kW biogas plant, which processes the equivalent of 30 tons of waste per day (cow dung, milk waste, plants, etc.). This technique generates a gas, methane, of which 40% is converted into electricity by turbines and 60% into heat. Going beyond the state of the art, Datafarm has developed a process to convert the energy produced through methanization…into cold! This helps solve the problem of cooling data centers. “Our system allows us to reduce the proportion of electricity needed to cool infrastructures to 8%, whereas 20 to 50% is usually required,” explains Stéphane Petibon, the founder of the start-up.
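The figures quoted here can be tied together in a rough back-of-the-envelope calculation (a sketch using only the article’s numbers; the actual conversion chain and chiller efficiencies are not public):

```python
# Back-of-the-envelope energy split for a 500 kW biogas plant,
# using only the figures quoted in the article.

PLANT_POWER_KW = 500            # mid-sized methanization unit (~100 cows)
ELECTRIC_SHARE = 0.40           # share of biogas energy converted to electricity
HEAT_SHARE = 0.60               # share recovered as heat

electric_kw = PLANT_POWER_KW * ELECTRIC_SHARE   # 200 kW of electricity
heat_kw = PLANT_POWER_KW * HEAT_SHARE           # 300 kW of heat

# Cooling overhead: share of electricity spent on cooling the IT load.
datafarm_cooling = 0.08           # Datafarm's claimed figure
typical_cooling = (0.20, 0.50)    # usual range cited in the article

# Electricity left for the IT load after cooling overhead.
it_kw_datafarm = electric_kw * (1 - datafarm_cooling)
it_kw_typical_low = electric_kw * (1 - typical_cooling[1])
it_kw_typical_high = electric_kw * (1 - typical_cooling[0])

print(f"IT power available: {it_kw_datafarm:.0f} kW vs "
      f"{it_kw_typical_low:.0f}-{it_kw_typical_high:.0f} kW typically")
```

On these assumptions, the lower cooling overhead frees roughly 24 to 84 extra kilowatts for computing compared with a conventionally cooled center of the same size.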

The heat output produced by the data centers is then recovered in an on-site heating system. This allows farmers to dry hay to feed their livestock or produce cheese. Lastly, farms no longer need fertilizer from outside sources since the residue from the methanization process can be used to fertilize the fields. Datafarm therefore operates within a circular economy and self-sufficient energy system for the farm and the data center.

A service to help companies reduce carbon emissions

A mid-sized biogas plant (500 kW) fueling the start-up’s data centers reduces CO2 emissions by 12,000 tons a year – the equivalent of the annual emissions of 1,000 French people. “Our goal is to offer a service for low-carbon, or even negative-carbon, data centers and thereby offset the greenhouse gas emissions of the customers who host their data with us,” says Stéphane Petibon.

Every four years, companies with over 500 employees (approximately 2,400 in France) are required to publish their carbon footprint, which is used to assess their CO2 emissions as part of the national environmental strategy to reduce the impact of companies. The question, therefore, is no longer whether they need to reduce their carbon footprint, but how to do so. As such, the start-up provides an ecological and environmental argument for companies that need to decarbonize their operations. “Our solution makes it possible to reduce carbon dioxide emissions by 20 to 30% through an IT service for which companies’ needs grow every year,” says Stéphane Petibon.

The services offered by Datafarm range from data storage to processing. To meet most customers’ demand for server colocation, the start-up has designed its infrastructures as ready-to-use modules inserted into containers hosted on farms. This agile approach allows it to build infrastructures according to customers’ needs, prior to installation. The data is backed up at another center powered by green energy near Amsterdam (Netherlands).

Innovations on the horizon

The two main selection criteria for farms are the power of their methanization plant and their proximity to a fiber network. “The French regions have already installed fiber networks in a significant portion of their territories, but these networks have been neglected and are inoperative. To activate them, we’re working with the telecom operators who cover France,” explains Stéphane Petibon. The first two infrastructures, in Arzal in Brittany and Saint-Omer in the Nord department, meet all the criteria and will be put into use in September and December 2020, respectively. The start-up plans to host up to 80 customers per infrastructure and to have installed seven infrastructures throughout France by the end of 2021.

To achieve this goal, the start-up is conducting research and development on network redundancy issues to ensure service continuity in the event of a failure. It is also working on an energy storage technique that is more environmentally friendly than the batteries used by data centers. The methanization reaction can also generate hydrogen, which the start-up plans to store as a backup power supply for its infrastructures. In addition to the small units, Datafarm is working with a cooperative of five farmers to design an infrastructure with a much larger hosting and surface capacity than its current products.

Anaïs Culot.

[box type=”info” align=”” class=”” width=””]This article was published as part of Fondation Mines-Télécom’s 2020 brochure series dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a brochure, conference-debates, and events to promote science in conjunction with IMT, this series explores the uncertainties and challenges of the digital and environmental transitions.[/box]


Data sharing, a common European challenge

Promoting data sharing between economic players is one of Europe’s major objectives via its digital governance strategy. To accomplish this, there are two specific challenges to be met. Firstly, a community must be created around data issues, bringing together various stakeholders from multiple sectors. Secondly, the technological choices implemented by these stakeholders must be harmonised.

 

‘If we want more efficient algorithms, with qualified uncertainty and reduced bias, we need not only more data, but more diverse data’, explains Sylvain Le Corff. This statistics researcher at Télécom SudParis thus raises the whole challenge around data sharing. This need applies not only to researchers. Industrial players must also strengthen their data with that from their ecosystem. For instance, an energy producer will benefit greatly from industrial data sharing with suppliers or consumer groups, and vice versa. A car manufacturer will become all the more efficient with more data sources from their sub-contractors.

The problem is that this sharing of data is far from being a trivial operation. The reason lies in the numerous technical solutions that exist to produce, store and use data. The long-standing and over-riding idea for economic players was to try to exploit their data themselves, and each organisation therefore made personal choices in terms of architecture, format or data-related protocols. An algorithm developed to exploit data sets in a specific format cannot use data packaged in another format. This then calls for a major harmonisation phase.

‘This technical aspect is often under-estimated in data sharing considerations’, Sylvain Le Corff comments. ‘Yet we are aware that there is a real difficulty with the pre-processing needed to harmonise data.’ The researcher quotes the example of automatic language analysis, a key issue for artificial intelligence, which relies on the automatic processing of texts from multiple sources: raw texts, texts generated from audio or video documents, texts derived from other texts, etc. This is the notion of multi-modality. ‘The plurality of sources is well-managed in the field, but the manner in which we oversee this multi-modality can vary within the same sector.’ Two laboratories or two companies will therefore not harmonise their data in the same way. In order to work together, they absolutely need to go through this tedious pre-processing, which can hamper collaboration.

A European data standard

Olivier Boissier, a researcher in artificial intelligence and inter-operability at Mines Saint-Étienne, adds another factor to this issue: ‘The people who help to produce or process data are not necessarily data or AI specialists. In general, they are people with high expertise in the field of application, but don’t always know how to open or pool data sets.’ Given such technical limitations, a promising approach consists in standardising practices. This task is being taken on by the International Data Spaces Association (IDSA), whose role is to promote data sharing on a global scale, and more particularly in Europe.

Contrary to what one might assume, the idea of a data standard does not mean imposing a single norm on data format, architecture or protocol. Each sector has already worked on ontologies to help facilitate dialogue between data sets. ‘Our intention is not to provide yet another ontology’, explains Antoine Garnier, project head at IDSA. ‘What we are offering is more of a meta-model which enables a description of data sets based on those sector ontologies, and with an agnostic approach in terms of the sectors it targets.’

This standard could be seen as a list of conditions on which to base data use. To summarise the conditions in IDSA’s architectural model, ‘the three cornerstones are the inter-operability, certification and governance of data’, says Antoine Garnier. Thanks to this approach, the resulting standard serves as a guarantee of quality between players. It enables users to determine rapidly whether an organisation fulfils these conditions and is thus trustworthy. This system also raises the question of security, which is one of the primary concerns of organisations who agree to open their data.

Europe, the great lake region of data?

While developing a standard is a step forward in technical terms, it remains to be put into actual use. For this, its design must incorporate the technical, legal, economic and political concerns of European data stakeholders – producers and users alike. Hence the importance of creating a community consisting of as many organisations as possible. In Europe, since 2020, this community has had a name: Gaia-X, an association of players, bringing together IMT and IDSA among others, whose aim is to structure efforts around the federation of data, software and infrastructure clouds. Via Gaia-X, public and private organisations aim to roll out standardisation actions, using the IDSA standard among others, but may also implement research, training or awareness activities.

‘This is such a vast issue that if we want to find a solution, we must approach it through a community of experts in security, inter-operability, governance and data analysis’ Olivier Boissier points out, emphasising the importance of dialogue between specialists around this topic. Alongside their involvement in Gaia-X, IMT and IDSA are organising a winter school from 2 to 4 December to raise awareness among young researchers of data-sharing issues (see insert below). With the support of the German-French Academy for the Industry of the Future, it will provide the keys to understanding technical and human issues, through concrete cases. ‘Within the research community, we are used to taking part in conferences to keep up to date on the state of play of our field, but it is difficult to have a deeper understanding of the problems faced by other fields’, Sylvain Le Corff admits. ‘This type of Franco-German event is essential to structuring the European community and forming a global understanding of an issue, by taking a step back from our own area of expertise.’ 

The European Commission has made no secret of its ambition to create a space for the free circulation of data within Europe. In other words, a common environment in which personal and confidential data would be secured, but also in which organisations would have easy access to a significant amount of industrial data. To achieve this idyllic scenario of cooperation between data players, the collective participation of organisations is an absolute prerequisite. For academics, the communitarian approach is a core practice and does not represent a major challenge. For businesses, however, a certain number of stakeholders remain to be won over. The majority of major industries have understood the benefits of data sharing, ‘but some companies still see data as a monetizable war chest that they must avoid sharing’, says Antoine Garnier. ‘We must take an informative approach and shatter preconceived ideas.’

Read on I’MTech: Data sharing: an important issue for the agricultural sector

What about non-European players? When we speak about data sharing, we systematically refer to the cloud, a market cornered by three American players, Amazon, Microsoft and Google, behind which we find other American stakeholders (IBM and Oracle) and a handful of Chinese interests such as Alibaba and Tencent. How do we convince these ‘hyper-scalers’ (the term refers to their ability to scale up to meet growing demand, regardless of the sector) to adopt a standard which is not their own, when they are the owners of the technology upon which the majority of data use is based? ‘Paradoxically, we are perhaps not such bad news for them’, Antoine Garnier assures us. ‘Along with this standard, we are also offering a form of certification. For players suffering from a negative image, this allows them to demonstrate compliance with the rules.’

This standardisation strategy also impacts European digital sovereignty and the transmission of its values. In the same way as Europe succeeded in imposing a personal data protection standard in the 2010s with the formalisation of the GDPR, it is currently working to define a standard around industrial data sharing. Its approach to this task is identical, i.e. to make standardisation a guarantee of security and responsible management. ‘A standard is often perceived as a constraint, but it is above all a form of freedom’ concludes Olivier Boissier. ‘By adopting a standard, we free ourselves of the technical and legal constraints specific to each given use.’

[box type=”info” align=”” class=”” width=””]From 2 to 4 December: a winter school on data sharing

Around the core theme of Data Analytics & AI, IMT and TU Dortmund are organising a winter school on data sharing for industrial systems, from 2 to 4 December 2020, in collaboration with IDSA, the German-French Academy for the Industry of the Future and with the support of the Franco-German University. Geared towards doctoral students and young researchers, its aim is to open perspectives and establish a state of play on the question of data exchange between European stakeholders. Through the participation of various European experts, this winter school will examine the technical, economic and ethical aspects of data sharing by bringing together the field expertise of researchers and industrial players.

Information and registration

[/box]


DAGOBAH: Tables, AI will understand

Human activities produce massive amounts of raw data presented in the form of tables. In order to understand these tables quickly, EURECOM and Orange are developing DAGOBAH, a semantic annotation platform. It aims to provide a generic solution that can optimize AI applications such as personal assistants, and facilitate the management of any company’s complex data sets.

 

On a day-to-day basis, online keyword searches often suffice to make up for our thousands of memory lapses, clear up any doubts we may have or satisfy our curiosity. The results even anticipate our needs by offering more information than we asked for: a singer’s biography, a few song titles, upcoming concert dates etc. But have you ever wondered how the search engine always provides an answer to your questions? In order to display the most relevant results, computer programs must understand the meaning and nuances of data (often in the form of tables) so that they can answer users’ queries. This is one of the key goals of the DAGOBAH platform, created through a partnership between EURECOM and Orange research teams in 2019.

DAGOBAH’s aim is to automatically understand the tabular data produced by humans. Since this type of data lacks the explicit context of a text, understanding it depends on the reader’s knowledge. “Humans know how to detect the orientation of a table, the presence of headers or merged rows, relationships between columns, etc. Our goal is to teach computers to make such natural interpretations,” says Raphaël Troncy, a data science researcher at EURECOM.

The art of leveraging encyclopedic knowledge

After identifying a table’s form, DAGOBAH tries to understand its content. Take two columns, for example. The first lists names of directors and the second, film titles. How does DAGOBAH go about interpreting this data set without knowing its nature or content? It performs a semantic annotation, which means that it effectively applies a label to each item in the table. To do so, it must determine the nature of a column’s content (directors’ names etc.) and the relationship between the two columns. In this case: director – directed – film. But an item may mean different things. For example, “Lincoln” refers to a last name, a British or American city, the title of a Steven Spielberg film etc. In short, the platform must resolve any ambiguity about the content of a cell based on the overall context.

To achieve its goal, DAGOBAH searches existing encyclopedic knowledge bases (Wikidata, DBpedia). In these bases, knowledge is often formalized and associated with attributes: “Wes Anderson” is associated with “director.” To process a new table, DAGOBAH compares each item to its database and proposes possible candidate attributes: “film title”, “city”, etc. At this stage, however, they remain mere candidates. Then, for each column, the candidates are grouped together and put to a majority vote. The nature of the column is thereby deduced with a varying degree of probability.

However, there are limitations to this method when it comes to complex tables. Beyond applications for the general public, industrial data may contain statistics related to business-specific knowledge or highly specialized scientific data that is difficult to identify.

Neural networks to the rescue  

To reduce the risk of ambiguity, DAGOBAH uses neural networks and a word-embedding technique. The principle: represent a cell’s content as a vector in a multidimensional space. Within this space, the vectors of two semantically close words are grouped together geometrically in the same place. Visually speaking, the directors are grouped together, as are the film titles. Applying this principle in DAGOBAH is based on the assumption that items in the same column must be similar enough to form a coherent whole. “To remove ambiguity between candidates, categories of candidates are grouped together in vector space. The problem is then to select the most relevant group in the context of the given table,” explains Thomas Labbé, a data scientist at Orange. This method becomes more effective than a simple search with a majority vote when little information is available about the context of a table.
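The “coherent column” assumption can be illustrated with a deliberately tiny example (the 2-D vectors below are invented for the illustration; real embeddings have hundreds of dimensions and are learned, not hand-picked):

```python
import math
from itertools import combinations

# Toy 2-D "embeddings": vectors are hypothetical, chosen so that
# director entities cluster together and the city does not.
EMB = {
    "director:Lincoln":  (0.90, 0.10),
    "city:Lincoln":      (0.10, 0.80),
    "director:Anderson": (0.85, 0.15),
    "director:Nolan":    (0.88, 0.12),
}

def cosine(u, v):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def most_coherent(groups):
    """Pick the candidate group whose embeddings are most similar to
    each other -- the 'coherent column' assumption in the article."""
    def coherence(names):
        pairs = list(combinations(names, 2))
        return sum(cosine(EMB[a], EMB[b]) for a, b in pairs) / len(pairs)
    return max(groups, key=lambda g: coherence(g[1]))

groups = [
    ("director", ["director:Lincoln", "director:Anderson", "director:Nolan"]),
    ("mixed",    ["city:Lincoln", "director:Anderson", "director:Nolan"]),
]
label, _ = most_coherent(groups)
print(label)  # director
```

Reading “Lincoln” as a director rather than a city wins because that interpretation keeps the whole column geometrically tight in the embedding space.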

However, one of the drawbacks of using deep learning is the lack of visibility into what happens inside the neural network. “We change the hyperparameters, turning them like oven dials to obtain better results. The process is highly empirical and takes a long time, since we repeat the experiment over and over again,” explains Raphaël Troncy. The approach is also demanding in terms of computing time. The teams are also working on scaling up the process; here, Orange’s dedicated big data infrastructures are a major asset. Ultimately, the researchers seek to implement an all-purpose approach, built end-to-end and generic enough to meet the needs of highly diverse applications.

Towards industrial applications

The semantic interpretation of tables is a goal but not an end. “Working with EURECOM allows us to have almost real-time knowledge about the latest academic advances as well as an informed opinion on the technical approaches we plan to use,” says Yoan Chabot, a researcher in artificial intelligence at Orange. DAGOBAH’s use of encyclopedic data makes it possible to optimize question/response engines in the kind of natural language used by voice assistants. But the holy grail will be to provide an automatic processing solution for business-specific knowledge in an industrial environment. “Our solution will be able to address the private sector market, not just the public sector, for internal use by companies who produce massive amounts of tabular data,” adds Yoan Chabot.

This will be a major challenge, since industry does not have knowledge graphs to which DAGOBAH may refer. The next step will therefore be to succeed in semantically annotating data sets using knowledge bases that are still in their embryonic stages. To achieve their goals, for the second year in a row, the academic and industrial partners have committed to taking part in an international semantic annotation challenge, a very popular topic in the scientific community. For four months, they will have the opportunity to test their approach in real-life conditions, before comparing their results with the rest of the international community in November.

To learn more: DAGOBAH: Make Tabular Data Speak Great Again

Anaïs Culot for I’MTech


Smarter models of the ocean

The ocean is a system that is difficult to observe, whose biodiversity and physical phenomena we still know very little about. Artificial intelligence could be an asset in understanding this environment better. Ronan Fablet, a researcher at IMT Atlantique, presents the projects of the new Océanix Research Chair. What is the objective? To use AI to optimize models for observing the ocean.

 

More than 70% of the surface area of our planet is covered by oceans and seas. They make up a colossal system that we know little about. The TARA expedition discovered hundreds of millions of previously unknown species of plankton, while our ability to explore the ocean floor remains limited. The same is true of observing physical phenomena such as the dynamics of ocean currents, at the surface or at depth.

And yet, understanding ocean dynamics is essential for a good understanding of ecological aspects, biodiversity and ecosystems. But unlike the atmosphere, which can be observed directly, the ocean is difficult to study. Space technologies offer some visibility of the ocean surface, including surface currents and winds, but can see nothing below. In addition, orbiting satellites capture images as they pass over certain areas but cannot provide instantaneous observation of the entire globe, and cloud cover can obscure the view of the oceans. As for beacons and buoys, some of them recover information from depths of up to 2,000 meters, but such measurements remain very sparse.

Using AI to see the unknown

“No observation system can provide a high-resolution image of the oceans all around the globe, everywhere and all the time,” says Ronan Fablet, signal and communications researcher at IMT Atlantique. “And even decades from now, I don’t think that will be possible if we use only physical observations.” The solution is artificial intelligence: AI could make it possible to optimize observation systems and reconstruct missing data based on the observed data. Ronan Fablet launched the Océanix chair at IMT Atlantique in order to investigate this further, in collaboration with numerous institutional partners (CNES, École Navale, ENSTA Bretagne, Ifremer, IRD, ESA) and industrial partners (Argans, CLS, e-odyn, ITE-FEM, MOi, Microsoft, NavalGroup, ODL, OceanNext, Scalian).

Machine learning is a way of estimating parameters to obtain the best prediction of an unknown, for example at a future point in time. This works like image recognition models: “We could feed the model a lot of pictures of dogs, for example, so that it learns to recognize them,” Ronan Fablet explains. “The difference here is that we’re working on systems with larger dimensions, and images of the ocean.”

Take the example of an oil spill. To find out how the oil will drift through the ocean after a spill, researchers use simulations based on physical models related to fluid dynamics. “These models are either difficult to solve or difficult to calibrate, and may require unknown parameters,” he says. Machine learning techniques should make it possible to develop digital models that are more compact, and therefore faster in simulation. This would make it easier to simulate the physical processes involved in the drift of an oil slick.

Read on I’MTech: Marine oil pollution detected from space

This also applies to obtaining better representations of climate variability, which involves very broad temporalities. “The objective is to use the data available today, and to couple it with machine learning techniques to find the missing information, to better understand the situation tomorrow”.

A better view of sea routes

Model optimization and data reconstruction are also of great interest in vessel traffic monitoring. Possible applications are the detection of abnormal behavior, such as a fishing vessel changing course or stopping; or the illegal behavior of a vessel entering a restricted area. “It is unimaginable to equip an entire maritime route as we would a motorway to monitor traffic. Observation is therefore based on other space technologies,” says the researcher.

In the field of maritime traffic, there are two main types of information: AIS (Automatic Identification System) signals and satellite imagery. Every shipping vessel is required to transmit an AIS signal so that it can be located, but vessels smuggling cargo usually turn this signal off. Satellite imagery makes it possible, among other things, to check whether the vessels that have navigated in an area were transmitting, by comparing the image with the AIS signals.

This type of study on abnormal behavior related to AIS signals was the subject of the ANR Astrid Sesame project. “We have applied specific neural networks to learning data, particularly in western Brittany, to learn what normal ship behavior is,” says Ronan Fablet. The aim is then to identify behaviors that deviate from the norm, even if they are infrequent or of very low probability. An abnormal event would then send an alert to a monitoring software to determine whether specific actions are required, such as sending a patrol.
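The idea of learning “normal behavior” and flagging deviations can be shown with a deliberately simple sketch (the project itself uses neural networks trained on AIS data; the z-score rule, threshold and speed values below are invented as an illustrative baseline):

```python
import statistics

# Simplified stand-in for a learned model of "normal" vessel behavior:
# flag a position report whose speed deviates strongly from the vessel's
# recent history. (The ANR Astrid Sesame project uses neural networks;
# this z-score rule is only an illustrative baseline.)

def is_abnormal(speed_history_knots, new_speed, threshold=3.0):
    """Return True if new_speed is more than `threshold` standard
    deviations away from the mean of the recent speed history."""
    mean = statistics.mean(speed_history_knots)
    std = statistics.pstdev(speed_history_knots) or 1e-6  # avoid div by zero
    z = abs(new_speed - mean) / std
    return z > threshold

# A fishing vessel cruising steadily, then suddenly stopping:
history = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9]
print(is_abnormal(history, 0.0))   # True  -> would raise an alert
print(is_abnormal(history, 10.0))  # False -> consistent with the norm
```

In a monitoring system, a `True` result would be forwarded to an operator who decides whether a specific action, such as sending a patrol, is required.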

Applications of artificial intelligence in oceanography are developing more significantly today as the ability to link neural networks and mathematical models used in oceanography becomes more explicit and easier to implement. The Oceanix research chair at IMT Atlantique brings together institutions specialized in aspects of artificial intelligence and others more focused on oceanography.

Some teams have been working together for several years, such as Ifremer with IMT Atlantique. These studies will make it possible to provide answers where analytical models cannot, and to speed up calculations considerably. Ronan Fablet adds that “the Holy Grail for our teams would be to identify new laws for physical, biogeochemical or ecological processes. To be able to identify new models directly from the data – representations corresponding to a general rule”.

 

Tiphaine Claveau

 

digital simulation

In the midst of a crisis, hospitals are using digital simulation to organize care

Thierry Garaix and Raksmey Phan are systems engineering researchers at Mines Saint-Étienne[1]. In response to the current health crisis, they are making digital simulation and digital twins available to health services to inform their decision-making. This assistance is crucial to handling the influx of patients in hospitals and managing the post-peak period of the epidemic.

 

The organization of the various departments within a hospital is a particular concern in the management of this crisis. Based on the number of incoming patients and how many of them require special care, certain departments must be turned into dedicated wards for Covid-19 patients. Hospitals must therefore decide which departments they can afford to close in order to allocate beds and resources for new patients. “We’re working on models to simulate hospitalizations and intensive care units,” says Thierry Garaix, a researcher in healthcare systems engineering at Mines Saint-Étienne.

“Cardiac surgery operating rooms are already equipped with certain resources needed for Covid wards, such as respirators,” explains the researcher. This makes them good candidates for receiving Covid patients in respiratory distress. These simulations give caregivers a clearer view in order to anticipate the need for hospital and intensive care beds. “At the peak of the epidemic, all possible resources are reassigned,” he explains. “Once the peak has passed, the number of cases admitted to the hospital begins to drop, and the hospital must determine how to reallocate resources to the usual activities.”

Visualizing the hospital

It is essential for hospitals to have a good understanding of how the epidemic is evolving in order to define their priorities and identify possibilities. Once the peak has passed, fewer new patients are admitted to the hospital every day but those that remain still require care. These simulations make it possible to anticipate how long these departments will remain occupied by Covid patients and estimate when they will be available again.

“The tool I’m developing makes it possible to visualize how the flow of Covid patients will progress over time to help the university hospital make decisions,” says Thierry Garaix. The researcher provides the model with data about the length of hospital stays, time spent in the hospital or intensive care unit, and the capacity of each hospital unit. The model can then digitally simulate patient pathways and visualize flows throughout the hospital. “It’s important to understand that the progression isn’t necessarily linear,” he adds, emphasizing that “if we see a drop in the number of cases, we have to consider the possibility that there could then be a rise in the epidemic.”
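The inputs described above, admissions, lengths of stay and unit capacities, are exactly what a simple occupancy simulation needs. The sketch below is a minimal illustration of the idea, not the researchers’ tool: it assumes a geometric length-of-stay model (each patient has a fixed daily probability of discharge) and a single unit with a hard bed capacity.

```python
import random

def simulate_occupancy(daily_admissions, mean_stay, capacity, seed=0):
    """Simulate end-of-day bed occupancy for one hospital unit.

    daily_admissions: new patients expected each day (one entry per day);
    mean_stay: average length of stay in days (geometric discharge model);
    capacity: number of beds in the unit. All parameters are illustrative.
    """
    rng = random.Random(seed)
    occupancy, history = 0, []
    p_discharge = 1.0 / mean_stay  # daily probability that a patient leaves
    for admitted in daily_admissions:
        # each current patient is discharged independently with p_discharge
        discharged = sum(1 for _ in range(occupancy)
                         if rng.random() < p_discharge)
        occupancy = min(capacity, occupancy - discharged + admitted)
        history.append(occupancy)
    return history
```

Running such a model over several admission scenarios, including a second rise in cases, is what lets planners see when a unit will saturate and when it can be expected to free up.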

But even if a hospital unit could be freed up and reallocated to its regular activities, it may be more cautious to keep it available to handle new cases. “At the beginning of the epidemic, health services had to rush to allocate resources and set up Covid units quickly,” says Thierry Garaix. “The benefit of these simulations is that they make it easier to anticipate the management of resources, so that resources can be allocated gradually depending on how the epidemic evolves.”

“Strictly speaking, it is not a digital twin since the model does not directly interact with reality,” says the researcher. “But if a digital twin of all of the hospital’s departments had been available, it would have been of great help in planning how resources should be allocated at the beginning of the epidemic.” 

Visualizing people

A digital twin could help assess a number of complex aspects, including the effects of isolation on the health of elderly people. “It’s a project we’ve been working on for a while, but it has taken on new importance in light of the lockdown measures,” says Raksmey Phan, who is also a healthcare systems researcher at Mines Saint-Étienne. The AGGIR scale is generally used to measure an individual’s loss of autonomy. It breaks health status into different categories (autonomous, at risk, fragile, dependent) in order to propose appropriate care. The digital twin would be used to anticipate changes in health status, identify at-risk individuals and prevent them from moving towards a situation of dependence.

“It’s important to point out that a fragile individual can, with appropriate physical activity, return to a category corresponding to a better health status. However, once an individual enters into a situation of dependence, there’s no going back,” explains Raksmey Phan. The aim of this new digital twin project is to predict this progression in order to propose appropriate activities before it is too late. At present, the lack of physical activity as a result of the lockdown raises the risk of adverse health outcomes since it implies a loss of mobility.
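The structure described here, reversible transitions between the better categories and a point of no return at dependence, is naturally modeled as a Markov chain with an absorbing state. The transition probabilities below are purely hypothetical, chosen only to illustrate the shape of such a model; the AGGIR category names come from the text.

```python
# Hypothetical one-period transition probabilities between health-status
# categories; "dependent" is absorbing, matching the observation that
# there is no going back once dependence is reached.
STATES = ["autonomous", "at_risk", "fragile", "dependent"]
TRANSITIONS = {
    "autonomous": {"autonomous": 0.90, "at_risk": 0.10},
    "at_risk":    {"autonomous": 0.15, "at_risk": 0.70, "fragile": 0.15},
    "fragile":    {"at_risk": 0.20, "fragile": 0.65, "dependent": 0.15},
    "dependent":  {"dependent": 1.00},  # absorbing: no recovery
}

def risk_of_dependence(state, periods):
    """Probability of being dependent after `periods` transitions."""
    dist = {s: 0.0 for s in STATES}
    dist[state] = 1.0
    for _ in range(periods):
        nxt = {s: 0.0 for s in STATES}
        for s, p in dist.items():
            for t, q in TRANSITIONS[s].items():
                nxt[t] += p * q
        dist = nxt
    return dist["dependent"]
```

In such a model, “appropriate physical activity” amounts to shifting the transition probabilities away from the absorbing state, which is precisely the intervention the project aims to trigger in time.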

In the context of lockdown, this digital twin therefore makes it possible to estimate the impact of a lack of physical activity on elderly people. Before the lockdown period, researchers installed sensors in the homes of volunteers, on front doors and on objects such as refrigerators, to evaluate their presence and level of activity at home. “With fairly simple sensors, we have a model that is well-aligned with reality and is effective for measuring changes in an individual’s health status,” he adds.

These sensors evaluate the time spent in bed or on the couch, or indicate if, on the contrary, individuals spend a lot of time standing up, moving around, or leaving the house. With this data, the digital twin can extrapolate new data about a future situation, and therefore predict how an individual’s health status will progress over time. “The goal is essentially to analyze troubling changes that may lead to a risk of fragility, and react in order to prevent this from occurring,” explains the researcher.
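One simple way to turn such raw sensor events into a “troubling change” signal is to aggregate them into a daily activity score and watch for a sustained drop. The sketch below is an illustrative assumption about how this could work; the event kinds, weights and drop threshold are invented for the example, not taken from the project.

```python
def daily_activity_scores(events):
    """Aggregate raw sensor events into a per-day activity score.

    events: list of (day, kind) tuples, e.g. ("fridge", "front_door",
    "movement"). The scoring weights are purely illustrative.
    """
    weights = {"movement": 1, "fridge": 1, "front_door": 2}
    scores = {}
    for day, kind in events:
        scores[day] = scores.get(day, 0) + weights.get(kind, 1)
    return scores

def declining_trend(scores, window=7, drop=0.3):
    """Flag a troubling drop: recent average falls `drop` below the earlier one."""
    days = sorted(scores)
    if len(days) < 2 * window:
        return False  # not enough history to compare two windows
    earlier = sum(scores[d] for d in days[-2 * window:-window]) / window
    recent = sum(scores[d] for d in days[-window:]) / window
    return recent < (1 - drop) * earlier
```

A flag raised by such a trend detector is what would prompt the intervention described below: proposing activities before the decline becomes irreversible.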

The researchers, who are working with the insurance company EOVI MCD, could then propose appropriate activities to maintain good health. Even in the midst of a pandemic, taking into account social distancing measures and the need to limit contact, it is possible to propose activities to be done at home, in front of the TV for example. “The insurance provider could propose activities and home services or potentially direct them to a retirement home,” says Thierry Garaix. “The key focus is providing an opportunity to act before it’s too late by estimating the future health status of the individuals concerned, and reacting by proposing appropriate structures or facilities,” say the two researchers.

[1] Thierry Garaix and Raksmey Phan are researchers at the Laboratory of Informatics, Modeling and Optimization of Systems (LIMOS), a joint research unit between Mines Saint-Étienne/CNRS/University of Clermont-Auvergne.

 

Tiphaine Claveau