freedoms, data

Health crisis, state of emergency: what safeguards are there for our fundamental freedoms?

This article originally appeared (in French) in newsletter no. 17 of the VP-IP Chair, Data, Identity, Trust in the Digital Age for April 2020.

[divider style=”dotted” top=”20″ bottom=”20″]

The current pandemic and unprecedented measures taken to slow its spread provide an opportunity to measure and assess the impact of digital technology on our societies, including in terms of its legal and ethical contradictions.

While the potential it provides, even amid a major crisis, is indisputable, the risks of infringements on our fundamental freedoms are even more evident.

Without giving in to a simplistic and unrealistic ad hoc techno-solutionism, it would seem appropriate to observe the current situation by looking back forty years into the past, to a time when there was no internet or digital technology at our sides to soften the shock and provide a rapid global response.

The key role of digital technology during the health crisis

Whether to continue economic activity with remote work and remote conferences or to stay in touch with family and friends, digital technology, in its diverse uses, has proved to be invaluable in these exceptional times.

Without such technology, the situation would clearly have been much worse, and the way of life imposed upon us by the lockdown even harder to bear. It also would have been much more difficult to ensure outpatient care and impossible to provide continued learning at home for primary, secondary and higher education students.

The networks and opportunities for remote communication it provides and the knowledge it makes available are crucial assets when it comes to structuring our new reality, in comparison to past decades and centuries.

This health crisis requires an urgent response and therefore serves as a brutal reminder of the importance of research in today’s societies, marked by the appearance of novel, unforeseeable events, in a time of unprecedented globalization and movement of people.

Calls for projects launched in France and Europe – whether for research on testing, immunity, treatments for the virus and vaccines that could be developed – also include computer science and humanities components. Examples include aspects related to mapping the progression of the epidemic, factors that can improve how health care is organized and ways to handle extreme situations.

Digital technology has enabled all of the researchers involved in these projects to continue working together, thinking collectively and at a fast pace. And through telemedicine (even in its early stages), it has provided a way to better manage, or at least significantly absorb, the current crisis and its associated developments.

In economic terms, although large swaths of the economy are in tatters and the country’s dependence on imports has made certain situations especially strained – as illustrated by the shortage of protective masks, a topic that has received wide media coverage – other sectors, far from facing a crisis, have seen a rise in demand for their products or services. This has been the case for companies in the telecommunications sector and related fields like e-commerce.

Risks related to personal data

While the crisis has led to a sharp rise in the use of digital technology, the present circumstances also present clear risks as far as personal data is concerned.

Never before has there been such a constant flow of data, since almost all the information involved in remote work is stored on company servers and passed through third party interfaces from employees’ homes. And never before has so much data about our social lives, family and friends been made available to internet and telecommunications companies since – with the exception of those with whom we are spending the lockdown period – more than ever, all of our communication depends on networks.

This underscores the potential offered by the dissemination and handling of the personal data that is now being generated and processed, and consequently, the potential danger, at both the individual and collective level, should it be used in a way that does not respect the basic principles governing data processing.

Yet, adequate safeguards are not yet in place, since the social contract relating to this area is still being developed and debated.

Companies that do not comply with the GDPR have not changed their practices [1], and the massive use of new forms of online connection continues to create risks, given the sensitive nature of the data that may be or is collected.

Examples include debates about the data protection policy for medical consultation interfaces and issuing prescriptions online [2]; emergency measures to enable distance learning via platforms whose data protection policies have been criticized or are downright questionable (Collaborate or Discord, which has been called “spyware” by some [3], to name just two of many examples); and the increased use of video conferencing, for which some platforms do not offer sufficient safeguards in terms of personal data protection, or which have received harsh criticism following an examination of their capacity for cybersecurity and for protecting the privacy of the information exchanged.

For Zoom, notably, which is very popular at the moment, it has reportedly been “revealed that the company shared information about some of its users with Facebook and could discreetly find users’ LinkedIn profiles without their knowledge” [4].

There are also broader risks, relating to how geolocation data could be used, for example. The CNIL (the French Data Protection Authority) [5], the European Data Protection Board [6] and the European Data Protection Supervisor [7] have issued precise opinions on this matter, and their advice is currently being sought.

In general, mobile tracking applications, such as the StopCovid application being studied by the government [8], and the issue of aggregation of personal data require special attention. This topic has been widely covered by the French [9], European [10] and international [11] media. The CNIL has called for vigilance in this area and has published recommendations [12].

The present circumstances are exceptional and call for exceptional measures – but these measures must only infringe on our freedom of movement, right to privacy and personal data protection rights with due respect for our fundamental principles: necessity of the measure, proportionality, transparency, and loyalty, to name just a few.

The least intrusive solutions possible must be used. In this respect, Ms Marie-Laure Denis, President of the CNIL, explains, “Proportionality may also be assessed with regard to the temporary nature, related solely to the management of the crisis, of any measure considered” [13].

The exceptional measures must not last beyond these exceptional circumstances. They must not be left in place for the long term and chip away at our fundamental rights and freedoms. We must be particularly vigilant in this area, as the precedent for measures adopted for a transitional period in order to respond to exceptional circumstances (the Patriot Act in the United States, the state of emergency in France) has unfortunately shown that these measures have been continued – with doubts as to their necessity – and some have been established in ordinary law provisions and have therefore become part of our daily lives [14].

[divider style=”dotted” top=”20″ bottom=”20″]

Claire Levallois-Barth, Lecturer in Law at Télécom Paris, Coordinator of the VP-IP Chair (Personal Information Values and Policies)

Maryline Laurent, Computer Science Professor at Télécom SudParis and Co-Founder of the VP-IP Chair

Ivan Meseguer, European Affairs, Institut Mines-Télécom, Co-Founder of the VP-IP Chair

Patrick Waelbroeck, Professor of Industrial Economics and Econometrics at Télécom Paris, Co-Founder of the VP-IP Chair

Valérie Charolles, Philosophy Researcher at Institut Mines-Télécom Business School, member of the VP-IP Chair, Associate Researcher at the Interdisciplinary Institute of Contemporary Anthropology (EHESS/CNRS)

 

phishing

Something phishy is going on!

Cyberattacks have been on the rise since the month of March 2020. Hervé Debar, an information systems security researcher at Télécom SudParis, talked to us about the relationship between cyberattacks – such as phishing and zoom-bombing  – and the Covid-19 health crisis.

 

For some, the crisis brought about by Covid-19 has created opportunities: welcome to the world of cyberattacks. The month of March saw a significant rise in online attacks, in part due to increased reliance on e-commerce and digital working practices, such as video conferences. “Such attacks include zoom-bombing,”  says Hervé Debar, an information systems security researcher at Télécom SudParis.

Zoom-Bombing

Zoom is a video conference platform that has become widely used for communication and to facilitate remote work. “It’s a system that works really well,” says the researcher, “although it’s controversial since it’s hosted in the United States and raises questions about compliance, in particular with the GDPR.”

Zoom-bombing is a way of attacking users of such platforms by hijacking meetings. “There are real problems with the use of collaborative software because you have to install a client, so you run the risk of leaving the door to your computer open,” explains Hervé Debar.

Zoom-bombers seek to make a disturbance, but may also potentially try to spy on users, even if “a lot of power is needed for a malicious third party to hijack a desired meeting.”  These virtual meetings are defined by IDs – sets of characters of varying lengths. In order to try to hijack a meeting, a hacker generates IDs at random in the hope of finding an active meeting.

“This means that there is little likelihood of finding a specific meeting in order to spy on users,” says Hervé Debar. “That being said, arriving uninvited in a random active meeting to make trouble is easier in our current circumstances, since there are a much greater number of meetings.” An algorithm can be used to generate these IDs, much like a robocaller: it dials numbers continuously and, if someone picks up on the other end, hands the call over to an operator.
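To get a sense of the numbers involved, here is a back-of-envelope sketch. The size of the ID space and the number of simultaneously active meetings are illustrative assumptions, not Zoom’s actual figures; the point is simply that random scanning stumbles on some meeting far more easily than on a specific one.

```python
# Back-of-envelope sketch: chance of hitting *any* active meeting vs. a
# *specific* one when generating meeting IDs at random. Both parameters
# below are illustrative assumptions, not Zoom's actual figures.

ID_SPACE = 10 ** 10          # assume roughly 10-digit numeric meeting IDs
ACTIVE_MEETINGS = 10 ** 6    # assumed number of meetings in progress worldwide

p_any = ACTIVE_MEETINGS / ID_SPACE   # probability one random guess hits any active meeting
p_one = 1 / ID_SPACE                 # probability it hits one specific target meeting

print(f"any active meeting : about 1 hit every {1 / p_any:,.0f} guesses")
print(f"a specific meeting : about 1 hit every {1 / p_one:,.0f} guesses")
```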

It is worth noting that Zoom has taken certain cybersecurity aspects into account for its services and is making efforts to provide appropriate solutions. To lower the risk, meetings can also be protected by using an access code. Besides zoom-bombing, more traditional attacks are well-suited to the current health crisis. One such attack is phishing.

For what purpose?

The goal of phishing is to find a convincing bait to get the recipient of an email to click on a link and want to take further action. “Phishing techniques have gone from selling Viagra a few years ago to selling masks or other medical products,” says Hervé Debar. “This reflects people’s needs. The more worried they are, the more vulnerable they are to this kind of attack.” The fear of getting sick, coupled with a shortage of available protective equipment, can therefore increase the risk of these types of practices.

You get an e-mail saying: “We have masks in stock! Buy X masks for X euros.” So you pay for your order but never receive it. “It’s fraud, plain and simple,” says Hervé Debar. But such practices may also take a more indirect form, by asking for your credit card number or other sensitive personal information. This information may be used directly or sold to a third party. Messages, links and videos can also contain links to download malware that is then installed on the user’s computer, or a malicious application for smartphones.

Recently, this type of email has started using a new approach. Hervé Debar says that “the message is worded as a response, as if the person receiving it had placed an order with their supplier.”  The goal is to build trust by making people think they know the sender, therefore making them more likely to fall for the scam.

From an economic viewpoint, unfortunately, such practices are profitable. “Even if very few people actually become victims, the operational cost is very low,” explains the researcher. “It works the same way as telephone scams and premium-rate numbers.”

Uncertainty about information sources tends to amplify this phenomenon. In these anxious times, this can be seen in the controversy over chloroquine. “The doubts surrounding this information make it conducive to phishing campaigns, especially since it has been difficult to get people to understand that it may be dangerous.”

How can we protect ourselves?

“Vigilance and common sense are needed to react appropriately and protect ourselves from phishing attacks,” says Hervé Debar, adding that “knowing suppliers well and having a good overview of inventory would be the best defense.”  For hospitals and healthcare facilities, the institutional system ensures that they are familiar with their suppliers and know where to place orders safely.

“We also have to keep in mind that these products must meet a certain quality level,” adds Hervé Debar. “It would be surprising if people could just magically produce them. We know that there is a major shortage of such supplies, and if governments are struggling to obtain them, it shouldn’t be any easier for individuals.”

Information technology assists with logistical aspects in healthcare institutions – for example inventory and transporting patients – and it’s important for these institutions to be able to maintain communication. They may therefore be targeted by attacks, in particular denial-of-service attacks that seek to saturate networks. “There have been denial-of-service attacks at Paris hospitals, which were effectively blocked. It seems like a good idea to limit exterior connections and bandwidth usage.”

Tiphaine Claveau for I’MTech

energy consumption

The worrying trajectory of energy consumption by digital technology

Fabrice Flipo, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]n November, the General Council for the Economy, Industry, Energy and Technology (CGEIET) published a report on the energy consumption of digital technology in France. The study draws up an inventory of equipment and lists consumption, making an estimate of the total volume.

The results are rather reassuring, at first glance. Compared to 2008, this new document notes that digital consumption seems to have stabilized in France, as have employment and added value in the sector.

The massive transformations currently in progress (growth in video uses, “digitization of the economy”, increased platform use, etc.) do not seem to be having an impact on the amount of energy expended. This observation can be explained by improvements in energy efficiency and the fact that the increase in consumption by smartphones and data centers has been offset by the decline in televisions and PCs. However, these optimistic conclusions warrant further consideration.

61 million smartphones in France

First, here are some figures from the report to help grasp the extent of the digital equipment in France. The country has 61 million smartphones in use, 64 million computers, 42 million television sets, 6 million tablets and 30 million routers. Although these numbers are high, the authors of the report believe they have greatly underestimated the volume of professional equipment.

The report predicts that in coming years, the number of smartphones (especially among the elderly) will grow, the number of PCs will decline, the number of tablets will stabilize and screen time will reach saturation (currently at 41 hours per week).

Nevertheless, the report suggests that we should remain attentive, particularly with regard to new uses: 4K then 8K for video, cloud games via 5G, connected or driverless cars, the growing numbers of data centers in France, data storage, and more. A 10% increase in 4K video in 2030 alone would produce a 10% increase in the overall volume of electricity used by digital technology.

We believe that these reassuring conclusions must be tempered, to say the least, for three main reasons.

Energy efficiency is not forever

The first is energy efficiency. In 2001, the famous energy specialist Jonathan Koomey established that computer processing power per joule doubles every 1.57 years.

But Koomey’s “law” is only the result of observations over a few decades: an eternity on the scale of marketing. The basic principle of digital technology, however, has remained the same since the invention of the transistor (1947): using the movement of electrons to mechanize information processing. The main driver of the reduction in consumption has been miniaturization.

Yet there is a minimum physical energy required to process, and in particular to erase, a bit of information, a threshold known as “Landauer’s principle”. In practice, technology can only approach this minimum. This means that energy efficiency gains will slow down and then stop: the closer technology comes to the limit, the harder further progress becomes. In some ways, this brings us back to the law of diminishing returns established by Ricardo two centuries ago for the productivity of land.
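To put a figure on this limit, Landauer’s bound is k_B·T·ln 2 per bit erased, which the short sketch below computes at room temperature. The “current technology” cost used for comparison is an order-of-magnitude assumption for illustration only, not a measured value.

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300                                  # room temperature, K

landauer = k_B * T * math.log(2)         # minimum energy to erase one bit, ~2.9e-21 J
assumed_today = 1e-15                    # assumed order of magnitude per bit operation today, J

print(f"Landauer limit at 300 K: {landauer:.2e} J per bit")
print(f"Assumed current cost   : {assumed_today:.0e} J per bit "
      f"(~{assumed_today / landauer:,.0f}x above the limit)")
```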

The only way to overcome the barrier would be to change the technological paradigm: to deploy the quantum computer on a large scale, as its computing power is independent of its energy consumption. But this would represent a massive leap and would take decades, if it were to happen at all.

Exponential data growth

The second reason why the report’s findings should be put into perspective is the growth in traffic and computing power required.

According to the American IT company Cisco, traffic is currently increasing tenfold every ten years. If this “law” holds, traffic will be multiplied by 1,000 within 30 years. Such data rates are impossible today: the current copper and 4G infrastructure cannot handle them. 5G and fiber optics would make such a development possible, hence the current debates.

Watching a video on a smartphone requires digital machines – phones and data centers – to execute instructions to activate the pixels on the screen, generating and changing the image. The uses of digital technology thus demand computing power, that is, a number of instructions executed by machines. The computing power required has no obvious connection with traffic: a simple SMS can just as easily light up a few pixels on an old Nokia as pass through a supercomputer, although the power consumption will of course not be the same.

In a document published a few years ago, the semiconductor industry formulated another “law”: the steady growth in the amount of computing power required on a global scale. The study showed that at this rate, by 2040 digital technology would require the total amount of energy produced worldwide in 2010.

This result applies to systems with the average performance profile of 2015, when the document was written. The study also considers the hypothesis of equipment worldwide with an energy efficiency 1,000 times higher: the deadline would only be pushed back by ten years, to 2050. And if all equipment reached the limit of “Landauer’s principle”, which is impossible, then by 2070 digital technology would consume all of the world’s energy (at its 2010 level).
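The arithmetic behind this ten-year shift can be made explicit: under exponential growth in the computing power required, a one-off efficiency gain only postpones the deadline by log(gain)/log(growth rate) years. The sketch below assumes that the required computing doubles every year, a rate chosen only so that the figures are consistent with the report’s 2040-versus-2050 horizon.

```python
import math

annual_growth = 2.0        # assumed yearly growth factor of required computing power
efficiency_gain = 1000     # hypothetical one-off improvement in hardware efficiency

years_gained = math.log(efficiency_gain) / math.log(annual_growth)
print(f"A {efficiency_gain}x efficiency gain postpones the energy wall "
      f"by about {years_gained:.0f} years")    # ~10 years
```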

Digitization without limits

The report does not say so, but energy-intensive uses are not limited to the practices of a few isolated consumers. They also involve colossal industrial investments, justified by the desire to exploit the incredible “immaterial” virtues of digital technology.

All sectors are passionate about AI. The future of cars seems destined to turn towards autonomous vehicles. Microsoft is predicting a market of 7 billion online players. E-sport is growing. Industry 4.0 and the Internet of Things (IoT) are presented as irreversible developments. Big data is the oil of tomorrow, etc.

Now, let us look at some figures. Strubell, Ganesh and McCallum have shown that training a common neural network used to process natural language emitted 350 tons of CO₂, equivalent to 300 round trips between New York and San Francisco. In 2016, Intel announced that the autonomous car would consume 4 petabytes per day. Given that in 2020 a person generates or transmits about 2 GB per day, this represents 2 million times more. The figure announced in 2020 is instead 1 to 2 TB per hour, which is 5,000 times more than individual traffic.

A surveillance camera records 8 to 15 frames per second. At 4 MB per image, this means 60 MB/s without compression, or roughly 200 GB per hour, which is not insignificant in the digital energy ecosystem. The IEA EDNA report highlights this risk. “Volumetric video”, based on 5K cameras, generates a flow of 1 TB every 10 seconds. Intel believes that this format is “the future of Hollywood”!
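For readers who wish to check these orders of magnitude, the same arithmetic is written out below, using the uncompressed, back-of-envelope figures quoted in the paragraphs above.

```python
GB, TB, PB = 1e9, 1e12, 1e15

person_per_day = 2 * GB                 # data generated or transmitted per person per day
car_per_day = 4 * PB                    # Intel's 2016 estimate for an autonomous car
print(f"car vs person       : x{car_per_day / person_per_day:,.0f}")      # ~2,000,000

camera_rate = 15 * 4e6                  # 15 frames/s at 4 MB each, in bytes per second
print(f"surveillance camera : {camera_rate / 1e6:.0f} MB/s, "
      f"{camera_rate * 3600 / GB:.0f} GB/hour")                           # 60 MB/s, ~216 GB/h

volumetric = TB / 10                    # "volumetric video": 1 TB every 10 seconds
print(f"volumetric video    : {volumetric / GB:.0f} GB/s")                # 100 GB/s
```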

In California, online gaming already consumes more electricity than electric water heaters, washing machines, dishwashers, clothes dryers and electric stoves.

Rising emissions in all sectors

All this for what, exactly? This brings us to the third point. How does digital technology contribute to sustainable development? To reducing greenhouse gas emissions? To saving soils, biodiversity, etc.?

In 2008, the Smart 2020 report promised a 20% reduction in greenhouse gas emissions by 2020 thanks to digital technology. In 2020, we can see that this has not happened. The ICT sector is responsible for 3% of global greenhouse gas emissions, which is more or less what the report predicted. But in the other sectors, nothing has happened: while digital technology has spread widely, emissions are still on the rise.

However, the techniques put forward have indeed spread: “intelligent” motors have become more common, the logistics sector relies heavily on digital technology and soon will on artificial intelligence, not to mention the widespread use of videoconferencing, e-commerce and navigation software in transport. Energy networks are controlled electronically. But the reductions have not happened. On the contrary, no “decoupling” of emissions from economic growth is in sight, neither for greenhouse gases nor for other parameters, such as the consumption of materials. The OECD predicts that material consumption will almost triple by 2060.

Rebound effect

The culprit, says the Smart 2020 report, is the “rebound effect”. This is based on the “Jevons paradox” (1865), which states that any progress in energy efficiency results in increased consumption.

A curious paradox. The different forms of “rebound effect” (systemic, etc.) are reminiscent of something we already know well: productivity gains, as described for example by Schumpeter or even Adam Smith (1776).

A little-known article also shows that in the context of neoclassical analysis, which assumes that agents seek to maximize their gains, the paradox becomes a rule, according to which any efficiency gain that is coupled with an economic gain always translates into growth in consumption. Yet, the efficiency gains mentioned so far (“Koomey’s law”, etc.) generally have this property.

A report from General Electric illustrates the difficulty very well. The company is pleased that the use of smart grids allows it to reduce CO₂ emissions and save money; the reduction in greenhouse gases is therefore profitable. But what will the company do with those savings? It makes no mention of this. Will it reinvest them in consuming more? Or will it focus on other priorities? There is no indication. The document shows that the company’s general priorities remain unchanged: it is still a question of “satisfying needs”, which are obviously going to increase.

Digital technology threatens the planet and its inhabitants

Deploying 5G without questioning and regulating its uses will therefore open the way to all these harmful applications. The digital economy may be catastrophic for climate and biodiversity, instead of saving them. Are we going to witness the largest and most closely-monitored collapse of all time? Elon Musk talks about taking refuge on Mars, and the richest people are buying well-defended properties in areas that will be the least affected by the global disaster. Because global warming threatens agriculture, the choice will have to be made between eating and surfing. Those who have captured value with digital networks are tempted to use them to escape their responsibilities.

What should be done about this? Without doubt, exactly the opposite of what the industry is planning: banning 8K or, failing that, discouraging its use; reserving AI for restricted uses with a strong social or environmental utility; drastically limiting the power required by e-sport; not deploying 5G on a large scale; ensuring a restricted and resilient digital infrastructure with universal access, which allows low-tech uses and low consumption of computing and bandwidth to be maintained; favoring mechanical systems or providing for disengageable digital technology, so that “backup techniques” are not rendered inoperable. Becoming aware of what is at stake. Waking up.

[divider style=”dotted” top=”20″ bottom=”20″]

This article benefited from discussions with Hugues Ferreboeuf, coordinator of the digital component of the Shift Project.

Fabrice Flipo, Professor of social and political philosophy, epistemology and history of science and technology at Institut Mines-Télécom Business School

This article was republished from The Conversation under the Creative Commons license. Read the original article here.

[box type=”info” align=”” class=”” width=””]

This article has been published in the framework of the 2020 watch cycle of the Fondation Mines-Télécom, dedicated to sustainable digital technology and the impact of digital technology on the environment. Through a monitoring notebook, conferences and debates, and science promotion in coordination with IMT, this cycle examines the uncertainties and issues that weigh on the digital and environmental transitions.

Find out more on the Fondation Mines-Télécom website

[/box]

5G-Victori

5G-Victori: large-scale tests for vertical industries

Twenty-five European partners have joined together in the three-year 5G-Victori project launched in June 2019. They are conducting large-scale trials for advanced use case validation in commercially relevant 5G environments. Navid Nikaein, a researcher at EURECOM, a key partner in the 5G-Victori project, details the challenges here.

 

What was the context for developing the European 5G-Victori project?

Navid Nikaein: 5G-Victori stands for VertIcal demos over Common large scale field Trials fOr Rail, energy and media Industries. This H2020 project is funded by the European Commission as part of the third phase of the 5G PPP (5G Infrastructure Public Private Partnership) projects. This phase aims to validate use cases for vertical industry applications in realistic and commercially relevant 5G test environments. 5G-Victori focuses on use cases in Transportation, Energy, Media and Factories of the Future.

What is the aim of this project?

NN: The aim is threefold. First, to integrate the different 5G operational environments required for the demonstration of the large variety of 5G-Victori vertical and cross-vertical use cases. Second, to test the four main use cases, namely Transportation, Energy, Media and Factories of the Future, on 5G platforms located in Sophia Antipolis (France), Athens (Greece), and Espoo and Oulu (Finland), allowing partners to validate their 5G use cases in view of a wider roll-out of services. Third, to transform the current closed, purpose-built and dedicated infrastructures into open environments where resources and functions are exposed to the telecom and vertical industries through common repositories.

What technological and scientific challenges do you face?

NN: A number of challenges have been identified for each use case; they will be tackled during the course of the project in their relevant 5G environment (see figure below).

In the Transportation use case, we validate the sustainability of critical services, such as collision avoidance, and of enhanced mobile broadband applications, such as 4K video streaming, under high-speed mobility in railway environments. The main challenges considered in 5G-Victori are (a) the interconnection of on-board devices with the trackside, and of the trackside with the edge and/or core network (see figure below), and (b) the guaranteed delivery of railway-related critical data and signalling services addressing on-board and trackside elements using a common software-based platform.

In the Energy use case, the main challenge is to facilitate smart energy metering, fault detection and preventive maintenance by taking advantage of low-latency signal exchange between substations and the control center over 5G networks. Both high-voltage and low-voltage energy operations are considered.

In the Media use case, the main challenge is to enable diverse content delivery networks capable of providing services in dense, static and mobile environments: in particular, 4K video streaming service continuity in mobile scenarios under 5G network coverage, and bulk transfer of large volumes of content for disconnected operation of personalized Video on Demand (VoD) services.

In the Factories of the Future use case, the main challenge is the design and development of a fully automated Digital Utility Management system over a 5G network, demonstrating advanced monitoring solutions. Such a solution must be able to (1) track all operations, (2) detect equipment faults and (3) support the decision-making process of first responders based on the collected data.

How are EURECOM researchers contributing to this project?

NN: EURECOM is one of the key partners in this project, as it provides its operational 5G testing facilities based on the OpenAirInterface (OAI) and Mosaic5G platforms. The facility provides Software-Defined Networking (SDN), Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC) solutions for 5G networks. In addition, EURECOM will design and develop a complete 5G network slicing solution that will be used to deploy a virtualized 5G network tailored to the above-mentioned use cases. Finally, EURECOM will pre-validate a subset of the scenarios considered in the project on its own testing facilities.

Also read on I’MTech SDN and virtualization: more intelligence in 5G networks

Who are your partners and what are your collaborations?

NN: The project brings together 25 European partners, shown in the figure below: SMEs, network operators, vendors, academia and more. EURECOM plays a key role in the project in that it provides (a) 5G technologies through the OpenAirInterface and Mosaic5G platforms to a subset of partners, and (b) 5G deployment and testing facilities located at its Sophia Antipolis campus.

What are the expected benefits of the project?

NN: In addition to the scientific benefits in terms of publications, the project will support the continuous development and maintenance of the OpenAirInterface and Mosaic5G software platforms. It will also allow us to validate whether the 5G network is able to deliver the considered use cases with the expected performance. We also plan to leverage our results by providing feedback, when possible, to standardization bodies such as 3GPP and O-RAN.

What are the next important steps for the project?

NN: In the first year, the project focused on refining the 5G architecture and software platforms to enable efficient execution of the considered use cases. In the second year, the project will focus on deploying the use cases on the target 5G testing facilities provided by 5G-EVE, 5G-VINNI, 5GENESIS and 5G-UK.

Learn more about the 5G-Victori project

Interview by Véronique Charlet for I’MTech

digital technologies

Digital technologies: three major categories of risks

Lamiae Benhayoun, Institut Mines-Télécom Business School and Imed Boughzala, Institut Mines-Télécom Business School

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]o respond to an environment of technological disruption, companies are increasingly taking steps toward digital transformation, or digitalization, with spending on such efforts expected to reach USD 2 trillion in 2022.

Digitalization reflects a profound, intentional restructuring of companies’ capabilities, resources and value-creating activities in order to benefit from the advantages of digital technology. This transformation, driven by the advent of SMAC technologies (social, mobile, analytics, cloud), has intensified with the development of DARQ technologies (distributed ledger, artificial intelligence, extended reality, quantum computing), which are pushing companies toward a post-digital era.

DARQ New Emerging Technologies (Accenture, February 2019).

 

There are clear benefits to the use of these technologies – they help companies improve the user experience, streamline business processes and even revolutionize their business models. Being a digital-first business has become a prerequisite for survival in an ever-changing marketplace.

But the adoption of these digital technologies gives rise to risks that must be managed to ensure a successful digital transformation. As such, the TIM department (technology, information and management) at Institut Mines-Télécom Business School is conducting a series of research studies on digital transformation, which has brought to light three categories of risks related to the use of digital technologies.

This characterization of risks draws on a critical review of research studies on this topic over the past decade and on insights from a number of digital transformation professionals in sectors with varying degrees of technological intensity.

Risks related to data governance

Mobile digital technologies and social media lead to the generation of data without individuals’ knowledge. Collecting, sharing and analyzing this data represents a risk for companies, especially when medical, financial or other sensitive data is involved. To cite one example, a company in the building industry uses drones to inspect the facades of buildings.

Drones for Construction (Bouygues Construction, February 2015).

 

It has noted that these connected objects can be intrusive for citizens and present risks of non-compliance for the company in terms of data protection. In addition, our exploration of the healthcare industry found that this confidentiality problem may even hinder collaboration between care providers and developers of specialized technologies.

Furthermore, many companies are confronted with an overwhelming amount of data, due to poor management of generation channels and dissemination flow. This is especially a problem in the banking and insurance industry. A multinational leader in this sector has pointed out that cloud technology can be useful for managing this data mining cycle, but at the same time, it poses challenges in terms of data sovereignty.

Risks related to relationships with third parties

Digital technologies open companies up to other stakeholders (customers, suppliers, partners etc.), and therefore to more risks in terms of managing these relationships. A maritime logistics company, for example, said that its use of blockchain technology to establish smart contracts has led to a high degree of formality and rigidity in its customer-supplier relationships.

In addition, social media technologies must be used with caution, because they can lead to problems of overexposure and lack of visibility. This was the case for a company in the agri-food industry, which found itself facing viral social media sharing of bad customer reviews. And a fashion industry professional emphasized that mobile technologies present risks when it comes to establishing an effective customer relationship, because it is difficult for the company to stand out among the multitude of mobile applications and e-commerce sites, which can even confuse customers and make them skeptical.

Furthermore, many companies in the telecommunications and banking-insurance sectors interviewed by the researchers are increasingly aware of the risks related to the advent of blockchain technology for their business models and their roles across the socio-economic landscape.

Risks related to managing digital technologies

The recent nature of digital technologies presents challenges for the majority of companies. Chief information officers (CIO) must master these technologies quickly to respond to fast-changing business needs, or they will find themselves facing shadow IT problems (systems implemented without approval from the CIO), as occurred at an academic institution we studied. In this case, several departments implemented readily available solutions to meet their transformation needs, which created a problem in terms of IT infrastructure governance.

It is not always easy to master digital technologies quickly, especially since the development of digital skills can be slowed down when there is a shortage of experts, as was the case for a company in the logistics sector. Developing these skills requires significant investments in terms of time, efforts and costs, which may prove to be useless in just a short time, due to the quick pace at which digital technologies evolve and become obsolete. This risk is especially present in the military sector, where digital technologies are tailor-made and must ensure a minimum longevity to amortize the development costs,  as well as in agriculture, given the great vulnerability of the connected objects used  in the sector.

Furthermore, some management problems are associated with digital technologies in particular. We have identified the recurrent risk of loss of assets in cases where companies use a cloud provider, and the risk of irresponsible innovation following a banking-insurance firm’s adoption of artificial intelligence technology. Lastly, a number of professionals underscored the potential risks of mimicry and oversizing that may emerge with the imminent arrival of quantum technologies.

These three categories of risks highlight the issues specific to individual digital technologies, as well as the challenges posed by the interconnected nature of these technologies and their simultaneous use. It is crucial that those who work with such technologies are made aware of these risks and anticipate them in order to benefit from their investments in digital transformation. For this transformation goes beyond operational considerations and presents opportunities, but also risks, associated with change.

[divider style=”dotted” top=”20″ bottom=”20″]

Lamiae Benhayoun, Assistant Professor at Institut Mines-Télécom Business School (IMT-BS), and Imed Boughzala, Dean of the Faculty and Professor, Institut Mines-Télécom Business School

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).

Subcultron

The artificial fish of the Venice lagoon

The European H2020 Subcultron project was completed in November 2019 and successfully deployed an autonomous fleet of underwater robots in the Venice lagoon. After four years of work, the research consortium — which includes IMT Atlantique — has demonstrated the feasibility of synchronizing a swarm of over one hundred autonomous units in a complex environment. An achievement made possible by the use of robots equipped with a bio-inspired sixth sense known as an “electric sense.”

 

Curious marine species inhabited the Venice lagoon from April 2016 to November 2019. Nautical tourists and divers were able to observe strange transparent mussels measuring some forty centimeters, along with remarkable black lily pads drifting on the water’s surface. But amateur biologists would have been disappointed had they made the trip to observe them, since these strange plants and animals were actually artificial.  They were robots submerged in the waters of Venice as part of the European H2020 Subcultron project. Drawing on electronics and biomimetics, the project’s aim was to deploy an underwater swarm of over 100 robots, which were able to coordinate autonomously with one another by adapting to the environment.

To achieve this objective, the scientists taking part in the project chose Venice as the site for carrying it out. “The Venice lagoon is a sensitive, complex environment,” says Frédéric Boyer, a robotics researcher at IMT Atlantique — a member of the Subcultron research consortium. “It has shallow, very irregular depths, interspersed with all sorts of obstacles. The water is naturally turbid. The physical quantities of the environment vary greatly: salinity, temperature etc.” In short, the perfect environment for putting the robots in a difficult position and testing their capacity for adaptation and coordination.

An ecosystem of marine robots

As a first step, the researchers deployed 130 artificial mussels in the lagoon.  The mussels were actually electronic units encapsulated in a watertight tube. They were able to collect physical data about the environment but did not have the ability to move, other than sinking and resurfacing. Their autonomy was ensured by an innovative charging system developed by one of  the project partners: the Free University of Brussels. On the surface, the floating “lily pads” powered by solar energy were actually data processing bases. There was just one problem: the artificial mussels and lily pads could not communicate with one another. That’s where the notion of coordination and a third kind of robots came into play.

In the turbid waters of the Venice lagoon, artificial fish were responsible for transmitting environmental data from the bottom of the lagoon to the surface.

 

To send information from the bottom of the lagoon to the surface, the researchers deployed some fifty robotic fish. “They’re the size of a big sea bream and are driven by small propellers, so unlike the other robots, they can move,” explains Frédéric Boyer. There is thus a single path for transmitting data between the bottom of the lagoon and the surface: the mussels transmit information to the fish, which swim towards the surface to deliver it to the lily pads, and then return to the mussels to start the process over again. And all of this takes place in a variable marine environment, where the lily pads drift and the fish have to adapt.

Fish with a sixth sense

Developing this autonomous robot ecosystem was particularly difficult. “Current robots are developed with a specific goal, and are rarely intended to coordinate with other robots with different roles,” explains Frédéric Boyer. Developing the artificial fish, which played a crucial role, was therefore the biggest challenge of the project. The IMT Atlantique team contributed to these efforts by providing expertise on a bio-inspired sense: electric sense.

“It’s a sense found in certain fish that live in the waters of tropical forests,” says the researcher. “They have electrosensitive skin, which allows them to measure the distortions of electric fields produced by themselves or others in their immediate environment: another fish passing nearby causes a variation that they can feel. This means that they can stalk their prey or detect predators in muddy water or at night.” The artificial fish of the turbid Venice lagoon were equipped with this electric sense.

This capacity made it possible for the fish to engage in organizational, cooperative behaviors. Rather than each fish looking for the mussels and the lily pads on their own, they grouped together and travelled in schools. They were therefore better able to detect variations in the electric field, whether under the water or on the surface, and align themselves in the right direction. “It’s a bit like a compass that aligns itself with the Earth’s electromagnetic field,” says Frédéric Boyer.

The Subcultron project therefore marked two important advances in the field of robotics: coordinating a fleet of autonomous agents, and equipping underwater robots with a bio-inspired sense. These advances are of particular interest for monitoring ecosystems and the marine environment. One of the secondary aims of the project, for example, was tracking the phenomenon of oxygen depletion in the water of the Venice lagoon, an event that occurs at irregular intervals, in an unpredictable manner, and leads to local mortality of aquatic species. Using the data they measured, the swarm of underwater robots successfully demonstrated that it is possible to forecast this phenomenon more effectively. In other words, an artificial ecosystem for the benefit of the natural ecosystem.

Learn more about Subcultron

[box type=”info” align=”” class=”” width=””]

The Subcultron project was officially launched in April 2015 as part of the Horizon 2020 research program. It was coordinated by the University of Graz, in Austria. It brought together IMT Atlantique in France, along with partners in Italy (the Pisa School of Advanced Studies and the Venice Lagoon Research Consortium), Belgium (the Free University of Brussels), Croatia (the University of Zagreb) and Germany (Cybertronica).

[/box]

big data

Big data and personal information: a scientific revolution?

This article was originally published (in French) on the website for IMT’s Values and Policies of Personal Information Chair.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]O[/dropcap]n 15 November 2019, Valérie Charolles was the keynote speaker at a symposium organized by the University of Insurance on the theme “Data: a (r)evolution for insurance?” For this sector, where big data has changed some horizons, but which has been working with data for a long time, this first keynote sought to provide a philosophical perspective on current developments in information processing.

Does the advent of big data mark a break with the previous ways of handling information, in particular personal information?  Does it represent a true scientific revolution?  This question has been discussed in scientific, philosophical and intellectual debates ever since Chris Anderson’s thought-provoking article for the American publication Wired in 2008. In the article, he proclaimed “the end of theory,” which has been made “obsolete” by the “data deluge” and concluded with this intentionally provocative statement: “It’s time to ask: What can science learn from Google?”

Just over a decade after its publication, at a time when what we now know as “big data,” combined with “deep learning,” is used on a massive scale, the journal Le Débat chose to explore the topic by devoting a special report in its November-December 2019 issue to “the consequences of big data for science.” It called on philosophers from a variety of backgrounds (Daniel Andler, Emeritus Professor in the philosophy of science at Université Paris-Sorbonne; Valérie Charolles, philosophy researcher at Institut Mines-Télécom Business School; Jean-Gabriel Ganascia, professor at Université Paris-Sorbonne and Chairman of the CNRS Ethics Committee) as well as a physicist (Marc Mézard, who is also the director of ENS), asking them to assess Chris Anderson’s thesis. Engineer and philosopher Jean-Pierre Dupuy had shared his thoughts on the subject in May, in the journal Esprit.

Big data and scientific models

The authors of these articles acknowledge the contributions of big data processing on a scientific level (although Jean-Pierre Dupuy and Jean-Gabriel Ganascia express a certain skepticism in this regard). This sort of processing makes it possible to develop scientific models that are more open and which, through successive aggregations of layers of correlated information, may give rise to forms of connections, links. Although this machine learning by what are referred to as deep networks has existed for over 70 years, its implementation is still relatively recent. It has been made possible by the large amount of information now collected and the computing power of today’s  computers. This represents a paradigm shift in computer science. Deep learning clearly provides scientists with a powerful tool, but, unlike Chris Anderson, none of the above authors see it as a way to replace scientific models developed from theories and hypotheses.

There are many reasons for this. Since they predict the future based on the past, machine learning models are not made for extreme situations and can make mistakes or produce false correlations. In 2009, the journal Nature featured an article on Google Flu Trends, which, by combining search engine query data, was able to predict the peak of the flu epidemic two weeks before the national public health agency. But in 2011, Google’s algorithm performed less well than the agency’s model that relied on human expertise and collected data. The relationships revealed by the algorithms represented correlations rather than causalities, and the phenomena revealed must still be explained using a scientific approach.  Furthermore, the algorithms themselves work with the hypotheses (part of their building blocks) they are given by those who develop them, and other algorithms, if applied to the same data set, would produce different results.
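To make the point about false correlations concrete, the small sketch below generates purely random series and shows that, among enough candidates, one will always correlate noticeably with any target. The figures are arbitrary and purely illustrative.

```python
import random

random.seed(1)
n_points, n_candidates = 50, 10_000

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# A random "target" series, and thousands of equally random candidate series:
# the best-correlated candidate looks meaningful but has no causal link at all.
target = [random.gauss(0, 1) for _ in range(n_points)]
best = max(abs(corr(target, [random.gauss(0, 1) for _ in range(n_points)]))
           for _ in range(n_candidates))
print(f"strongest |correlation| among {n_candidates} random series: {best:.2f}")
```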

Algorithmic processing of personal data

In any case, even if it does not represent a paradigm shift, the use of big data attests to a new, more inductive scientific style, where data plays an increasingly important role (we often hear the term “data-driven” science). Yet ready-to-be-analyzed raw data does not exist. Daniel Andler elaborates extensively on this point, which is also evoked by the other authors. The information with which computers are provided must be verified and annotated in order to become data that can be used by algorithms in a meaningful way. And these algorithms do not work by themselves, without any human intervention.

When personal data is involved, this point is especially important, as underscored by Valérie Charolles. To begin with, the limitations cited above in terms of the results provided by the algorithms also clearly apply to personal data processing. Furthermore, individuals cannot be reduced to the information they can provide about themselves using digital tools, even if a considerable amount of information is provided. What’s more, the quantity of information does not presuppose its quality or relevance, as evidenced by Amazon’s hiring algorithm that systematically discriminated against women simply because they were underrepresented in the database. As Marc Mézard concludes, “we must therefore be vigilant and act now to impose a regulatory framework and essential ethical considerations.”

[divider style=”dotted” top=”20″ bottom=”20″]

Valérie Charolles, philosophy researcher at Institut Mines-Télécom Business School, member of IMT’s Values and Policies of Personal Information Chair, associate researcher at the Interdisciplinary Institute of Contemporary Anthropology (EHESS/CNRS)

 

portability

Data portability: Europe supports research players in this field

The right to data portability, introduced by the GDPR, allows individuals to obtain and reuse their personal data across different services. Launched in November 2019 for a period of three years, the European DAPSI project promotes advanced research on data portability by supporting researchers and tech SMEs and start-ups working in this field. The IMT Starter incubator is one of the project partners. IMT Starter business manager Augustin Radu explains the aim of DAPSI below.

 

What was the context for developing the DAPSI project?

Augustin Radu: Since the entry into force of the GDPR (General Data Protection Regulation) in 2018, all citizens have had the right to obtain, store and reuse personal data for their own purposes. The right to portability gives people more control over their personal data. It also creates new development and innovation opportunities by facilitating personal data sharing in a secure manner, under the control of the person involved.

What is the overall goal of the project?

AR: The Data Portability and Services Incubator (DAPSI) will empower internet innovators to develop human-centric technology solutions, meaning web technologies that can boost citizens’ control over data (privacy by design), trust in the internet and web decentralization, etc.

The goal is to develop new solutions in the field of data portability. The DAPSI project aims to allow citizens to transmit all the data stored by a service provider directly to another service provider, responding to the challenge of personal data portability on the internet, as provided for by the GDPR.

How will you achieve this goal?

AR: DAPSI will support up to 50 teams as part of a ten-month incubation program during which experts from various fields will provide an effective work methodology, access to cutting-edge infrastructure, training in business and data sectors, coaching, mentoring, visibility, as well as investment and a strong community. In addition, each DAPSI team will receive up to €150K in equity-free funding, which represents a total of €5.6 M through the three open calls.

How is IMT Starter contributing to the project?

AR: IMT Starter, in partnership with Cap Digital, will be in charge of this ten-month incubation program. In concrete terms, the selected projects will have access to online training sessions and one-to-one coaching sessions.

Who are your partners in this project?

AR: IMT Starter is taking part in a project led by Zabala (Spain), along with four other European partners: F6S (United Kingdom), Engineering (Italy), Fraunhofer (Germany) and Cap Digital (France).

What are the expected benefits of DAPSI?

AR: This initiative aims to develop a more human-centric internet based on the values of openness, cross-border cooperation, decentralization and privacy protection. The primary objective is to allow users to regain control in order to increase trust in the internet. This should lead to more transparent services with more intelligence, greater engagement and increased user participation, therefore fostering social innovation.

What are some important steps for the project?

AR: The first call was launched at the end of February. Anyone with an innovative project in the field of data portability may submit an application.

Learn more about DAPSI

Interview by Véronique Charlet for I’MTech

cryptography

Taking on quantum computers

What if quantum computers, with their high computing power, were already available: what would happen? How would quantum computing transform communications and the way they are encrypted? Romain Alléaume, a researcher at Télécom Paris, talks to us about his research for the future of cryptography.

 

A hypothetical quantum computer with its high computing power would be like a sword of Damocles to current cryptography. It would be strong enough to be able to decrypt a great number of our  secure communications, in particular as they are implemented on the internet. “It poses a threat in terms of protecting secrets,” says Romain Alléaume, a researcher at Télécom Paris in quantum information and cryptography, who quickly adds that “such a computer does not yet exist.”

Read more on I’MTech: What is a quantum computer?

But a hypothetical threat to the foundations of digital security must not be taken lightly. It would seem wise to start thinking about cryptography techniques to respond to this threat as of today. The time required to develop, test and verify new algorithms must be taken into consideration when it comes to updating these techniques. “Furthermore, some secrets, in the diplomatic world, for example, need to be protected for long periods of time,” explains the researcher. We must plan to act now in order to be able to counter the threats that could materialize ten or twenty years from now.

The American National Institute of Standards and Technology (NIST) launched a competition as part of an international call published in 2017. Its aim was to identify new cryptographic algorithms, called post-quantum algorithms, to replace those known to be vulnerable to a quantum computer, such as the RSA algorithm, which is based on the difficulty of factoring large numbers.
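As a toy illustration of this dependence on factoring, here is a minimal RSA round trip with deliberately tiny primes; whoever factors the public modulus, which is exactly what Shor’s algorithm running on a large quantum computer would do efficiently, can recompute the private key.

```python
# Deliberately tiny textbook example; real RSA uses primes hundreds of digits long.
p, q = 61, 53                        # secret primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (requires Python 3.8+)

message = 42
cipher = pow(message, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == message  # decrypt with the private key

# An attacker who factors n recovers the private key. Trial division suffices
# here; for a 2048-bit modulus it is hopeless classically, but Shor's algorithm
# on a large quantum computer would do it efficiently.
factor = next(k for k in range(2, n) if n % k == 0)
d_recovered = pow(e, -1, (factor - 1) * (n // factor - 1))
print("key recovered by factoring:", d_recovered == d)
```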

Between quantum and post-quantum

There are two quite different ways to consider implementing cryptography that remains safe even in the event of an attack by a quantum computer: post-quantum and quantum. The first relies on mathematical algorithms and computational hypotheses. It follows the same principle as the traditional cryptography implemented today, but uses mathematical problems that researchers have good reason to believe are difficult even for a quantum computer.

Quantum cryptography security, on the other hand, does not depend on the computing power of the attacker: it relies on physical principles. Quantum key distribution (QKD) makes it possible to exchange secrets by encoding information in the properties of light, such as the polarization or phase of single photons.
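As an illustration, the best-known QKD protocol, BB84 (an example chosen here; the article does not name a specific protocol), has the sender encode random bits in one of two polarization bases while the receiver measures in bases chosen at random. The sketch below is a purely classical simulation of that sifting logic, ignoring noise and eavesdropping, just to show how a shared key emerges.

```python
# Classical simulation of BB84-style key sifting (illustrative only:
# no real photons, no noise, no eavesdropper).
import secrets

N = 32
alice_bits  = [secrets.randbelow(2) for _ in range(N)]   # key bits to encode
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]   # receiver picks bases at random

bob_results = []
for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
    if a_basis == b_basis:
        bob_results.append(bit)                  # same basis: outcome matches the bit
    else:
        bob_results.append(secrets.randbelow(2)) # wrong basis: outcome is random

# The bases (not the bits) are compared publicly; only matching positions are kept.
sifted_key = [b for b, x, y in zip(bob_results, alice_bases, bob_bases) if x == y]
print(f"{len(sifted_key)} sifted key bits out of {N} photons sent")
```

On average half the positions are kept; in a real system, a sample of the sifted key is then compared publicly to detect any eavesdropping before the rest is used as a secret key.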

“QKD won’t replace traditional cryptography,” explains Romain Alléaume, “their use cases, as well as the limitations of their use, are very different in nature. Let’s imagine that the attack is a car accident and cryptography is our safety system. We can think of traditional cryptography as the seatbelt, and quantum cryptography as the airbag. The latter is an additional safety feature for the critical functions that are not ensured by traditional cryptography.”

“The quality of the distributed secret with QKD provides a very high level of security which is not necessary for all communications,” adds the researcher, “but which can be crucial for increasing the security of critical functions.”

QKD also requires an optical communication infrastructure, typically optical fiber, and for now physical constraints limit its deployment. Attenuation and noise on the optical link significantly restrict the portion of optical networks where the technology can be deployed. At present, quantum communications are limited to ranges of 100 to 200 km on special fibers.
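A rough back-of-the-envelope estimate shows why distance is such a hard limit (the 0.2 dB/km figure below is a typical attenuation for telecom fiber at 1550 nm, assumed here rather than taken from the article). Single photons cannot be amplified without destroying the quantum information they carry, so the fraction that arrives falls exponentially with distance.

```python
# Order-of-magnitude estimate of photon loss in optical fiber.
# 0.2 dB/km is a typical attenuation for standard telecom fiber at 1550 nm (assumed value).
attenuation_db_per_km = 0.2
for distance_km in (100, 200, 300):
    loss_db = attenuation_db_per_km * distance_km
    transmission = 10 ** (-loss_db / 10)       # fraction of photons that arrive
    print(f"{distance_km} km: {loss_db:.0f} dB loss, "
          f"about 1 photon in {1 / transmission:,.0f} arrives")
```

At 200 km, roughly one photon in ten thousand reaches the receiver, which is consistent with the ranges mentioned above.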

One of the challenges is to enable the deployment of QKD on shared infrastructures, and co-integrate it with telecom equipment as much as possible. This is the topic of the CiViQ project, one of the projects currently being carried out at Télécom Paris. “The ultimate goal,” says the researcher, “would be to share the network so that it can cover both traditional and quantum cryptography.”

Towards hybrid cryptography

The preferred approach is therefore a well thought-out combination of computational cryptography, which will become post-quantum in the near future, and quantum cryptography. By redefining the border between the two, this approach will make it possible to deploy quantum cryptography in a wider range of cases.

Romain Alléaume and his team are working on the Quantum Computational Timelock (QCT), which relies on traditional cryptography assumptions and quantum cryptography technologies. It is both computational, to distribute an ephemeral secret, and quantum, to encode information in a large quantum state, meaning one with a great number of modes. “We’ve shown that with this hybrid hypothesis, we can increase performance significantly, in terms of throughput and distance.”

The information exchanged is therefore locked for a short period of time, say one day. An important point is that this technique, if not broken on the first day, will subsequently ensure long-term security. “The attacker won’t be able to learn anything about the information distributed,” says Romain Alléaume, “regardless of his level of intelligence or computing power. As long as the model is verified and the protocols are built properly, we’ll have a perfect guarantee in the future.”

He reminds us that at present, “the challenge is to develop less expensive, safer techniques and to develop a real industrial cryptography system for quantum computing.” As part of the Quantum Communication Infrastructure (QCI) initiative led by the European Commission, the research team is studying ways to deploy quantum communication infrastructures at the industrial level. The groundbreaking OPENQKD project, in which Romain Alléaume and his team are taking part, will contribute to this European initiative by developing industry standards for public encryption keys.

[box type=”info” align=”” class=”” width=””]

The OPENQKD project

The OPENQKD project brings together multidisciplinary teams of scientists and professionals from 13 European countries to strengthen Europe’s position in quantum communications. On the French side, project partners include Orange, Thales Alenia Space, Thales Six GTS, Nokia Bell Labs, Institut Mines-Télécom, CNRS and iXblue.

[/box]

networks, internet

How to prevent internet congestion during the lockdown

Hervé Debar, Télécom SudParis – Institut Mines-Télécom; Gaël Thomas, Télécom SudParis – Institut Mines-Télécom; Gregory Blanc, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Télécom SudParis – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he current health crisis has led to a rise in the use of digital services. Telework, along with school closures and the implementation of distance learning solutions (CNED, MOOCs, online learning platforms such as Moodle), puts additional strain on network infrastructures, since all of these activities are carried out over the network. This raises concerns about overloads during the lockdown period. Across the internet, however, DNS server loads have not shown a massive increase in traffic, which suggests that internet use remains under control.

The internet is a network that is designed to handle the load. However, telework and distance learning will create an unprecedented load. Simple measures must therefore be taken to limit network load and make better use of the internet. Of course, these rules can be adapted depending on the tools you have at your disposal.

How do telecommunications networks work?

The internet functions by sending packets between the machines connected to it. An often-used analogy is that of the information highway. In this analogy, the information exchanged between machines of all kinds (computers, telephones and personal assistants, to name just a few) is divided into packets (small and large vehicles). Each packet travels through the network between a source and a destination. All current networks operate according to this principle: the internet, wireless (Wi-Fi) and mobile (3G, 4G) networks, and so on.

The network must provide two important properties: reliability and communication speed.

Reliability ensures accurate communication between the source and the destination, meaning that information from the source is transmitted accurately to the destination. Should there be transmission errors, they are detected and the data is retransmitted. If there are too many errors, communication is interrupted. An example of this type of communication is email. The recipient must receive exactly what the sender has sent. Long packets are preferred for this type of communication in order to minimize communication errors and maximize the quantity of data transmitted.

Communication speed makes real-time communication possible. As such, the packets must all travel across the network as quickly as possible, and their crossing time must be roughly constant. This is true for voice networks (3G, 4G) and television. Should a packet be lost, its absence may be imperceptible. This applies to video or sound, for example, since our brain compensates for the loss. In this case, it is better to lose a packet from time to time: this leads to communication of lower quality, but it remains usable in most cases.
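To make these two families of communication more concrete, here is a minimal Python sketch (the hostname, address and port are arbitrary examples): TCP provides a reliable, ordered byte stream, with lost packets retransmitted by the operating system, while UDP sends independent datagrams with no delivery guarantee.

```python
# Reliable stream (TCP) versus best-effort datagrams (UDP).
import socket

# TCP: used for email, web... data arrives complete and in order,
# retransmissions are handled transparently by the operating system.
with socket.create_connection(("example.com", 80), timeout=5) as tcp_sock:
    tcp_sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = tcp_sock.recv(4096)
    print(reply.splitlines()[0])

# UDP: used for voice, video... each datagram is sent once and may be
# silently lost; no session, no retransmission.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"one datagram", ("192.0.2.1", 9999))   # 192.0.2.1 is a documentation address
udp_sock.close()
```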

Congestion problems

The network has a large overall capacity but it is limited for each of its components. When there is very high demand, certain components can become congested (routers, links – primarily fiber today – and servers). In such cases, the two properties (reliability and speed) can break down.

For communications that require reliability (web, email), the network uses the TCP protocol (TCP from the expression “TCP/IP”). This protocol introduces a session mechanism, which is implemented to ensure reliability. When a packet is detected as lost by its source, it is retransmitted until the destination indicates that it has arrived. This retransmission of packets exacerbates network congestion, and what was a temporary slowdown turns into a bottleneck. To put it simply, the more congested the network, the more the sessions resend packets. Such congestion is a well-known phenomenon during the ‘internet rush hour’ after work.
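A back-of-the-envelope calculation (an illustration, not a figure from the article) shows how quickly retransmissions amplify the load: if each packet is lost independently with probability p and resent until it arrives, it is transmitted on average 1/(1 − p) times.

```python
# Expected number of transmissions per delivered packet when each attempt
# is lost independently with probability p (geometric distribution).
for loss_rate in (0.01, 0.10, 0.30, 0.50):
    avg_transmissions = 1 / (1 - loss_rate)
    print(f"loss rate {loss_rate:.0%}: each packet sent "
          f"{avg_transmissions:.2f} times on average "
          f"({avg_transmissions - 1:+.0%} extra traffic)")
```

In practice, TCP also slows down its sending rate when it detects losses (congestion control), which is what normally keeps this feedback loop from running away.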

If the source considers that a communication has been subject to too many errors, it will close the “session.” When this occurs, a great quantity of data may be lost, since the source and the destination no longer know much about each other’s current state. The congestion therefore causes a waste of capacity, even once it is over.

For communications that require speed (video, voice), the network instead uses the UDP protocol. Unfortunately, routers are often configured to reject this kind of traffic in the event of a temporary overload. This makes it possible to prioritize traffic using sessions (TCP: email, web). Losing a few packets in a video or voice communication is not a problem, but losing a significant amount can greatly affect the quality of the communication. Since the source and destination exchange only limited information about problems encountered, they may have the impression that they are communicating when this is not actually the case.

The following proposals aim to limit network load and congestion, in order to avoid a situation in which packets start to get lost. It should be noted that the user is not always explicitly informed of this packet loss; it may instead be noticed through delays or a deterioration in communication quality.

What sort of communications should be prioritized in the professional sphere?

Professional use should prioritize connection time for exchanging emails or synchronizing files. The majority of work should be carried out without being connected to the network, since a great number of activities do not require a connection.

The most crucial and probably most frequently-used tool is email. The main consequence of the network load may be the time it takes to send and transmit messages. The following best practices will allow you to send shorter, less bulky messages, and therefore make email use more fluid:

– Choose thick clients (Outlook, Thunderbird for example) rather than web-based clients (Outlook Web Access, Zimbra, Gmail for example) since using email in a browser increases data exchange. Moreover, using a thick client means that you do not always have to be connected to the network to send and receive emails.

– When responding to email, delete non-essential content, including attachments and signatures.

– Delete or simplify signatures, especially those that include icons and social media images.

– Send shorter messages than usual, giving preference to plain text.

– Do not add attachments or images that are not essential, and opt for exchanging attachments via shared disks or other services.

When it comes to file sharing, VPNs (Virtual Private Networks) and cloud computing are the two main solutions. Corporate VPNs will likely be the preferred way to connect to company systems. As noted above, they should only be activated when needed, or potentially on a regular basis, but long sessions should be avoided as they may lead to network congestion.

Most shared disks can also be synchronized locally in order to work remotely. Synchronization is periodic and makes it possible to work offline, for example on office documents.

Keeping in touch with friends and family without overloading the network

Social media will undoubtedly be under great strain. Guidelines similar to those for email should be followed and photos, videos, animated GIFs and other fun but bulky content should only be sent on a limited basis.

Certain messages may be rejected by the network. Except in exceptional circumstances, you should wait for the load to ease before trying again.

Advertising represents a significant portion of web content and congests the network without benefiting the user. Most browsers can be fitted with extensions (such as Privacy Badger) that remove such content automatically. Some browsers, such as Brave, offer this feature natively. In general, the use of these tools does not have an impact on important websites such as government websites.

Television and on-demand video services also place great strain on the network. When it comes to television, it is preferable to use digital terrestrial television (TNT) rather than set-top boxes, which use the internet. The use of VoD services should be limited, especially during the day, so as to give priority to educational and work applications. A number of video services have already limited their broadcast quality, which significantly reduces bandwidth consumption.

Cybercrime and security

The current crisis will unfortunately be exploited for attacks. Messages about the coronavirus must be handled with caution: read them carefully and be wary of any links they contain that do not lead to government websites. Attachments should not be opened. The Hoaxbuster website and the Décodeurs tool from the newspaper Le Monde can be used to verify whether information is reliable.

At this time in which online meeting systems are extensively used, attention must be given to personal data protection.

ARCEP (the French telecoms regulator) provides guidelines for making the best use of the network. To protect yourself from attacks, the IT security rules issued by ANSSI (the French cybersecurity agency) are more important than ever at a time when cybercriminality may flourish.

[divider style=”dotted” top=”20″ bottom=”20″]

Hervé Debar, Head of the Telecommunications, Networks and Services Department at Télécom SudParis, Télécom SudParis – Institut Mines-Télécom; Gaël Thomas, Professor, Télécom SudParis – Institut Mines-Télécom; Gregory Blanc, Associate Research Professor in cybersecurity, Télécom SudParis – Institut Mines-Télécom and Olivier Levillain, Associate Research Professor, Télécom SudParis – Institut Mines-Télécom

This article has been republished from The Conversation under a Creative Commons license. Read the original article (in French).