IRON-MEN: augmented reality for operators in the Industry of the Future

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.

[divider style="normal" top="20" bottom="20"]

The Industry of the Future cannot happen without humans at the heart of production systems. To help operators adapt to the fast development of industrial processes and client demands, elm.leblanc, IMT and Adecam Industries have joined forces in the framework of the IRON-MEN project. The aim is to develop an augmented reality solution for human operators in industry.

 

Many production sites still rely on manual processes. Humans are capable of a level of intelligence and flexibility that industrial robots cannot yet match, an ability that remains essential if the French industrial fabric is to satisfy increasingly specific, demanding and unpredictable customer and user requirements.

Despite alarmist warnings about replacement by technology, humans must remain central to industrial processes for the time being. To enhance the ability of human operators, IMT, elm.leblanc and Adecam Industries have joined forces in the framework of the IRON-MEN project. The consortium will develop an augmented reality solution for production operators over a period of 3 years.

The augmented reality technology will be designed to help companies develop flexibility, efficiency and quality in production, as well as strengthen communication among teams and collaborative work. The solution developed by the IRON-MEN project will support users by guiding and assisting them in their daily tasks to allow them to increase their versatility and ability to adapt.

The success of such an intrusive piece of technology as an augmented reality headset depends on the user’s physical and psychological ability to accept it. This is a challenge that lies at the very heart of the IRON-MEN project, and will guide the development of the technology.

The aim of the solution is to provide an industrial, job-specific response that meets concrete needs and efficiently assists users as they carry out manual tasks. It is based on an original approach that combines digital transformation tools with respect for the individual in production plants, and it must be quickly adaptable to problems in other sectors with similar requirements.

IMT will contribute its research capacity to support elm.leblanc in introducing this augmented reality technology within its industrial organization. Immersion, which specializes in augmented reality experiences, will develop the interactive software interface to be used by the operators. The solution’s level of adaptability in an industrial environment will be tested at the elm.leblanc production sites at Drancy and Saint-Thégonnec as well as through the partnership with Adecam Industries. IRON-MEN is supported by the French General Directorate for Enterprises in the framework of the “Grands défis du numérique” projects.

When organizations respond to cyberattacks

Cyberattacks are a growing reality that organizations have to face up to. In the framework of the German-French Academy for the Industry of the Future, researchers at IMT and Technische Universität München (TUM) show that there are solutions to this virtual threat. In particular, the ASSET project is studying responses to attacks that target communication between smart objects and affect the integrity of computer systems. Frédéric Cuppens, a researcher in cybersecurity on this project at IMT Atlantique and coordinator of the Cybersecurity of critical infrastructures chair, explains the state-of-the-art defenses to respond to these attacks.

 

Cybersecurity is an increasingly pressing subject for a number of organizations. Are all organizations concerned?

Frédéric Cuppens: The number of smart objects is growing exponentially, including within organizations. Hospitals, industrial systems, services and transport networks are examples of places where the Internet of Things plays a major role and which are becoming increasingly vulnerable in terms of cybersecurity. We have already seen attacks on smart cars, pacemakers, smart meters, etc. All organizations are concerned. To take the case of industry alone, since it is one of our fields of interest at IMT Atlantique, these new vulnerabilities affect production chains and water treatment just as much as agricultural processes and power generation.

What attacks are most often carried out against this type of target?

FC: We have classified the attacks carried out against organizations in order to study the threats. There are many attacks on the integrity of computer systems, affecting their ability to function correctly. This is what happens when, for example, an attacker takes control of a temperature sensor to make it show an incorrect value, leading to an emergency shutdown. Then there are also many attacks against the availability of systems, which consist of preventing access to services or data exchange. This is the case when an attacker interferes with communication between smart objects.

Are there responses to these two types of attack?

FC: Yes, we are working on measures to put in place against these types of attack. Before going into detail, we need to understand that cybersecurity has three aspects: protection, which consists, for example, of filtering communication or controlling access to prevent attacks; defense, which detects when an attack is under way and provides a response to stop it; and lastly resilience, which allows systems to continue operating even during an attack. The research we are carrying out against attacks targeting availability or integrity includes all three components, with a special focus on resilience.

Confronted with attacks against the availability of systems, how do you guarantee this resilience?

FC: To interfere with communication, all you need is a jamming device. Jammers are prohibited in France, but it is not hard to get hold of one on the internet. A jammer interferes with communication on certain frequencies only, depending on the type of device used. Some are associated with Bluetooth frequencies, others with Wi-Fi networks or GPS frequencies. Our approach to fighting jammers is based on direct-sequence spread spectrum. The signal is “buried in noise” and is therefore difficult to detect with a spectrum analyzer.
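The idea of spreading a signal with a pseudo-random code can be illustrated with a toy sketch. This is not the project's actual implementation: the 31-chip code and the constant-offset jammer model are illustrative assumptions.

```python
import random

# Toy direct-sequence spread spectrum (DSSS) sketch. Each data bit
# (+1 or -1) is multiplied by a pseudo-random "chip" code; the receiver,
# which knows the code, correlates with it to recover the bit. Interference
# that does not match the code largely cancels out in the correlation.

def make_code(length, seed):
    rng = random.Random(seed)
    return [rng.choice([-1, 1]) for _ in range(length)]

def spread(bits, code):
    # Each bit becomes len(code) chips: bit * code
    return [b * c for b in bits for c in code]

def despread(chips, code):
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        # Correlate one chip block with the known code
        corr = sum(chips[i + j] * code[j] for j in range(n))
        bits.append(1 if corr >= 0 else -1)
    return bits

code = make_code(31, seed=42)
data = [1, -1, -1, 1]

# The jammer adds a constant offset to every chip without knowing the code
received = [chip + 0.8 for chip in spread(data, code)]
assert despread(received, code) == data
```

Because the code alternates between +1 and -1, the jammer's constant contribution mostly averages out during correlation, while each legitimate bit is reinforced across all 31 chips.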

Does that allow you to effectively block an attack by interference?

FC: This is a real process of resilience. We assume that, to interfere with the signal, the attacker has to find the frequency the two objects are communicating on, and we want to ensure this does not jeopardize communication. By the time the attacker has found the frequency and launched the attack, the spread code has already been updated. This approach is what we call “moving target defense”: the target of the attack, the spreading sequence, is regularly updated. It is very difficult for an attacker to complete their attack before the target is updated.
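The race between code updates and the attacker's search can be sketched with purely illustrative timings; the 5-step rotation and 8-step search below are hypothetical parameters, not values from the project.

```python
# Toy "moving target defense" timing model: the spreading code is rotated
# every ROTATION_PERIOD steps, while the attacker needs SEARCH_TIME steps
# to identify the current code. If the code moves faster than the attacker
# searches, the attack never completes.

ROTATION_PERIOD = 5   # defender updates the code every 5 steps (assumed)
SEARCH_TIME = 8       # attacker needs 8 steps to find a code (assumed)

def jammed_steps(total_steps):
    jammed = 0
    progress = 0  # attacker's search progress on the current code
    for step in range(total_steps):
        if step % ROTATION_PERIOD == 0:
            progress = 0  # code changed: attacker must start over
        progress += 1
        if progress > SEARCH_TIME:
            jammed += 1  # attacker found the code before it moved
    return jammed

# An 8-step search never catches up with a 5-step rotation.
assert jammed_steps(1000) == 0
```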

Do you use the same approach to fight against attacks on integrity?

FC: Sort of, but the problem is not the same. In this case, we have an attacker who is able to inject data in a smart way so that the intrusion is not detected. Take, for example, a tank being filled. The attacker corrupts the sensor so that it tells the system that the tank is already full, and can thus stop the pumps in a treatment station or distillery. We assume that the attacker knows the system very well, which is entirely possible. The attacks on Iranian centrifuges for uranium enrichment showed that an attacker can collect highly sensitive data on the functioning of an infrastructure.

How do you fight against an attacker who is able to go completely unnoticed?

FC: State-of-the-art security systems propose introducing physical redundancy. Instead of having one sensor for temperature or water level, we have several sensors of different types. This means the attacker has to attack several targets at once. Our research proposes to go even further by introducing virtual redundancy. An auxiliary system simulates the expected functioning of the machines or structures. If the data sent by the physical sensors differs from the data from the virtual model, then we know something abnormal is happening. This is the principle of a digital twin that provides a reference value in real time. It is similar to the idea of moving target defense, but with an independent virtual target whose behavior varies dynamically.
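The tank example can be sketched as a minimal digital-twin check. The flow rate and tolerance below are invented values for illustration, not those of a real installation.

```python
# Minimal digital-twin sketch: a virtual model of a tank being filled runs
# alongside the physical sensor. If the sensor diverges from the model by
# more than a tolerance, something abnormal is flagged.

FLOW_RATE = 2.0    # litres per time step, known pump characteristic (assumed)
TOLERANCE = 1.0    # acceptable model/sensor divergence in litres (assumed)

def twin_level(step):
    # Expected tank level according to the virtual model
    return FLOW_RATE * step

def sensor_consistent(step, sensor_level):
    return abs(sensor_level - twin_level(step)) <= TOLERANCE

# Normal operation: the sensor tracks the model.
assert sensor_consistent(10, 20.3)

# Attack: the corrupted sensor claims the tank is already full,
# while the model says it should only hold 20 litres at this point.
assert not sensor_consistent(10, 100.0)
```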

This work is being carried out in partnership with Technische Universität München (TUM) in the framework of the ASSET project by the German-French Academy for the Industry of the Future. What does this partnership contribute from a scientific point of view?

FC: IMT Atlantique and TUM each bring complementary skills. TUM is more focused on the physical layers and IMT Atlantique focuses more on the communication and service layers. Mines Saint-Étienne is also contributing and collaborating with TUM on attacks on physical components. They are working together on laser attacks on the integrity of components. Each party offers skills that the other does not necessarily have. This complementarity allows solutions to be designed to fight against cyberattacks at different levels and from different points of view. It is crucial in a context where computer systems are becoming more complicated: countermeasures must follow this level of complexity. Dialogue between researchers with different skills stimulates the quality of the protection we are developing.

 

[divider style="normal" top="20" bottom="20"]

Renewal of the Cybersecurity and critical infrastructures chair (Cyber CNI)

Launched in January 2016 and after 3 years of operation, the Chair for the cybersecurity of critical infrastructures (Cyber CNI) is being renewed for another 3 years thanks to the commitment of its academic and industrial partners. The IMT chair led by IMT Atlantique benefits from partnerships with Télécom ParisTech and Télécom SudParis and support from the Brittany region – a region at the forefront of cutting-edge cybersecurity technology – in the framework of the Cyber Center of Excellence. In the context of a sponsor partnership led by Fondation Mines-Télécom, five industrial partners have committed to this new period: AIRBUS, AMOSSYS, BNP Paribas, EDF and Nokia Bell Labs. The official signing to renew the Chair took place at FIC (International Cybersecurity Forum) in Lille on 22 January 2019.

Read the news on I’MTech: Cyber CNI chair renewed for 3 years

[divider style="normal" top="20" bottom="20"]

The installation of a data center in the heart of a city, like this one belonging to Interxion in La Plaine Commune in Île-de-France, rarely goes unnoticed.

Data centers: when digital technology transforms a city

As the tangible part of the digital world, data centers are flourishing on the outskirts of cities. They are promoted by elected representatives, sometimes contested by locals, and are not yet well-regulated, raising new social, legal and technical issues. Here is an overview of the challenges this infrastructure poses for cities, with Clément Marquet, doctoral student in sociology at Télécom ParisTech, and Jean-Marc Menaud, researcher specialized in Green IT at IMT Atlantique.

 

On a global scale, information technology accounts for almost 2% of greenhouse gas emissions, as much as civil aviation. In addition, “the digital industry consumes 10% of the world’s energy production,” explains Clément Marquet, sociology researcher at Télécom ParisTech, who is studying this hidden side of the digital world. Of particular concern is the energy consumed to keep infrastructure running smoothly, in the name of reliability and of maintaining a certain level of service quality.

With the demand for real-time data, the storage and processing of these data must be carried out where they are produced. This explains why data centers have been popping up throughout the country over the past few years. But not just anywhere. There are close to 150 throughout France. “Over a third of this infrastructure is concentrated in Île-de-France, in Plaine Commune – this is a record in Europe. It ends up transforming urban areas, and not without sparking reactions from locals,” the researcher says.

Plaine Commune, a European Data Valley

In his work, Clément Marquet questions how these data centers are integrated into urban areas. He highlights their “furtive” architecture, as they are usually built “in new or refitted warehouses, without any clues or signage about the activity inside”. He also points to the lack of interest from politicians and local representatives, partly due to their limited knowledge of the subject. He takes Rue Rateau in La Courneuve as an example. On one side of this residential street, just a few meters from the first houses, a brand-new data center was inaugurated at the end of 2012 by Interxion. The opening of this installation did not run smoothly, as the sociologist explains:

“These 1,500 to 10,000 m² spaces have many consequences for the surrounding urban area. They are a burden on energy distribution networks, but that is not all. The air conditioning units required to keep them cool create noise pollution. Locals also highlight the risk of explosion due to the 568,000 liters of fuel stored on the roof to power the backup generator, and the fact that energy is not recycled in the city heating network. Across the Plaine Commune agglomeration, there are also concerns regarding the low number of jobs created locally compared with the space these centers occupy. It is no longer just virtual.”

Because these data centers have such high energy needs, the debate in Plaine Commune has centered on the risk of saturating the electricity supply. Data centers reserve more power than they actually consume, in order to cope with any shortages, and this reserved electricity cannot be put to other uses. And so, while La Courneuve is home to almost 27,000 inhabitants, the data center requires the equivalent of a city of 50,000 people. The sociologist points out that the inhabitants were not consulted when the building was installed; they ended up taking legal action against it. “This raises the question of the viability of these infrastructures in the urban environment. They are invisible and yet invasive”.

Environmentally friendly integration possible

One of the avenues being explored to make these data centers more virtuous and more acceptable is to give them environmentally friendly features, such as hooking them up to city heating networks. Data centers could become energy producers, rather than just consumers. In theory, this would make it possible to heat pools or houses. However, it is not an easy operation. In 2015 in La Courneuve, Interxion announced that it would connect a forthcoming 20,000 m² center, but did not follow through on this promise. Connecting to the city heating network requires major, complicated coordination between all parties. The project ran into the hosts’ reluctance to communicate on their consumption, and hosts do not always have the tools required to recycle heat.

Another possibility is to optimize the energy performance of data centers. Many Green IT researchers are working on environmentally responsible digital technology. Jean-Marc Menaud, coordinator of the collaborative project EPOC (Energy Proportional and Opportunistic Computing systems) and director of the CPER SeDuCe project (Sustainable Data Center), is one of these researchers. He is working on anticipating consumption, or predicting the energy needs of an application, combined with anticipating electricity production. “Energy consumption by digital technologies rests on three pillars: one third is due to the non-stop operation of data centers, one third to the Internet network itself, and the last third to user terminals and smart objects,” he explains.

Read on I’MTech: Data centers, taking up the energy challenge

Since summer 2018, the IMT Atlantique campus has hosted a new type of data center, one devoted to research and available for use by the scientific community. “The main objective of SeDuCe is to reduce the energy consumed by the air conditioning system. Because in energy, nothing goes to waste, everything can be transformed. If we want the servers to run well, we have to evacuate the heat, which is at around 30-35°C. Air conditioning is therefore vital,” he continues. “And in the majority of cases, air conditioning is colossal: for 100 watts required to run a server, another 100 are used to cool it down.”

How does SeDuCe work? The data center, with a 1,000-core or 50-server capacity, is full of sensors and probes closely monitoring temperatures. The servers are isolated from the room in airtight racks. This airtight confinement cuts cooling costs tenfold: for 100 watts used by the servers, only 10 watts are required to cool them. “Soon, SeDuCe will be powered during the daytime by solar panels. Another of our goals is to get users to adapt the way they work according to the amount of energy available. A solution that can absolutely be applied to even the most impressive data centers.” Proof that the energy transition is possible via clouds too.
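The two cooling ratios quoted map directly onto the industry's Power Usage Effectiveness (PUE) metric, simplified here to IT power plus cooling power only (real PUE also counts lighting, power distribution losses and other overheads):

```python
# Power Usage Effectiveness: total facility power / IT equipment power.
# Simplified here to servers + cooling.

def pue(it_watts, cooling_watts):
    return (it_watts + cooling_watts) / it_watts

# Conventional room: 100 W of servers need another 100 W of cooling.
assert pue(100, 100) == 2.0

# Airtight confinement: the same 100 W of servers need only 10 W of cooling.
assert abs(pue(100, 10) - 1.1) < 1e-9
```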

 

Article written by Anne-Sophie Boutaud, for I’MTech.


Algorithmic bias, discrimination and fairness

David Bounie, Professor of Economics, Head of Economics and Social Sciences at Télécom ParisTech

Patrick Waelbroeck, Professor of Industrial Economy and Econometrics at Télécom ParisTech and co-founder of the Chair Values and Policies of Personal Information

[divider style="dotted" top="20" bottom="20"]

The original version of this article was published on the website of the Chair Values and Policies of Personal Information. This Chair brings together researchers from Télécom ParisTech, Télécom SudParis and Institut Mines-Télécom Business School, and is supported by the Fondation Mines-Télécom.

[divider style="dotted" top="20" bottom="20"]

 

[dropcap]A[/dropcap]lgorithms rule our lives. They increasingly intervene in our daily activities – career paths, adverts, recommendations, scoring, online searches, flight prices – as improvements are made in data science and statistical learning.

Despite initially being considered neutral, they are now blamed for biasing results and discriminating against people, voluntarily or not, according to their gender, ethnicity or sexual orientation. In the United States, studies have shown that African American people are penalised more heavily in court decisions (Angwin et al., 2016). They are also discriminated against more often on online flat rental platforms (Edelman, Luca and Svirsky, 2017). Finally, online targeted and automated ads promoting job opportunities in the Science, Technology, Engineering and Mathematics (STEM) fields seem to be shown more frequently to men than to women (Lambrecht and Tucker, 2017).

Algorithmic bias raises significant issues in terms of ethics and fairness. Why are algorithms biased? Is bias unpreventable? If so, how can it be limited?

Three sources of bias have been identified, in relation to cognitive, statistical and economic aspects. First, algorithm results vary according to the way programmers, i.e. humans, coded them, and studies in behavioural economics have shown there are cognitive biases in decision-making.

  • For instance, a bandwagon bias may lead a programmer to follow popular models without checking whether these are accurate.
  • Anticipation and confirmation biases may lead a programmer to favour their own beliefs, even though available data challenges such beliefs.
  • Illusory correlation may lead someone to perceive a relationship between two independent variables.
  • A framing bias occurs when a person draws different conclusions from the same dataset based on the way the information is presented.

Second, bias can be statistical. The phrase ‘Garbage in, garbage out’ refers to the fact that even the most sophisticated machine will produce incorrect and potentially biased results if the input data provided is inaccurate. After all, it is pretty easy to believe in a score produced by a complex proprietary algorithm that seemingly draws on multiple sources. Yet if the dataset on which the algorithm is trained to categorise or predict is partial or inaccurate, as is often the case with fake news, trolls or fake identities, results are likely to be biased. What happens if the data is incorrect? Or if the algorithm is trained using data from US citizens, who may behave quite differently from European citizens? Or if certain essential variables are omitted? For instance, how might machines encode relational skills and emotional intelligence (hard for machines to grasp, as they do not feel emotions), leadership skills or teamwork in an algorithm? Omitted variables may lead an algorithm to produce a biased result for the simple reason that they may be correlated with the variables used in the model. Finally, what happens when the training data comes from truncated samples or is not representative of the population for which you wish to make predictions (sample-selection bias)? In his Nobel Memorial Prize-winning research, James Heckman showed that selection bias is related to omitted-variable bias. Credit scoring is a striking example: in order to determine which risk category a borrower belongs to, algorithms rely on data about people who were eligible for a loan at a particular institution – they ignore the files of people who were denied credit, did not need a loan or obtained one elsewhere.
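The credit-scoring example can be made concrete with a small simulation on synthetic data. The scores, acceptance threshold and default model below are invented for illustration only:

```python
import random

# Sample-selection bias sketch: a model trained only on accepted borrowers
# underestimates the default rate of the full population, because the
# riskiest applicants were filtered out before the data was collected.

rng = random.Random(0)

population = []
for _ in range(10000):
    score = rng.uniform(0, 1)          # creditworthiness, higher = safer
    default = rng.random() > score     # riskier applicants default more often
    accepted = score > 0.5             # the bank only lends above 0.5
    population.append((accepted, default))

def default_rate(people):
    return sum(d for _, d in people) / len(people)

everyone = default_rate(population)
accepted_only = default_rate([p for p in population if p[0]])

# The truncated sample looks much safer than the real population.
assert accepted_only < everyone
```

A scoring algorithm fitted on the accepted-only sample would systematically misjudge the risk of applicants who resemble those the bank previously turned away.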

Third, algorithms may bias results for economic reasons. Think of online automated advisors specialised in selling financial services. They can favour the products of the company giving the advice, at the expense of the consumer if these financial products are more expensive than the market average. Such a situation is called price discrimination. Besides, in the context of multi-sided platforms, algorithms may favour third parties who have signed agreements with the platform. In the context of e-commerce, the European Commission recently fined Google 2.4 billion euros for promoting its own products at the top of search results on Google Shopping, to the detriment of competitors. Other disputes have arisen over apps simply being delisted from search results on the Apple Store, or downgraded in marketplaces’ search results.

Algorithms thus come with bias, which seems unpreventable. The question now is: how can bias be identified and discrimination be limited? Algorithms and artificial intelligence will indeed only be socially accepted if all actors are capable of meeting the ethical challenges raised by the use of data and following best practice.

Researchers first need to design fairer algorithms. Yet what is fairness, and which fairness rules should be applied? There is no easy answer to these questions: debates have divided social scientists and philosophers for centuries. Fairness is a normative concept, and many of its definitions are incompatible. For instance, compare individual fairness and group fairness. One simple criterion of individual fairness is equal opportunity, the principle according to which individuals with identical capacities should be treated similarly. However, this criterion is incompatible with group fairness, according to which individuals of the same group, such as women, should be treated similarly. In other words, equal opportunity for all individuals cannot exist if a fairness criterion is applied on gender.
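The incompatibility can be shown on a toy example (the scores and threshold are invented): a single decision threshold treats identical individuals identically, yet produces unequal selection rates as soon as the two groups' score distributions differ.

```python
# Individual vs group fairness on synthetic scores.

group_a = [0.9, 0.8, 0.7, 0.6, 0.4]
group_b = [0.7, 0.5, 0.4, 0.3, 0.2]

def select(scores, threshold=0.55):
    # One rule for everyone: identical scores get identical decisions
    return [s >= threshold for s in scores]

rate_a = sum(select(group_a)) / len(group_a)
rate_b = sum(select(group_b)) / len(group_b)

# Individual fairness holds: the same score always gets the same decision...
assert select([0.7]) == select([0.7])
# ...yet group selection rates differ: 0.8 vs 0.2.
assert rate_a != rate_b
```

Equalising the two rates would require group-dependent thresholds, which in turn would treat two individuals with identical scores differently: exactly the tension described above.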

A second challenge faces companies, policy makers and regulators, whose duty it is to promote ethical practices – transparency and responsibility – through an efficient regulation of the collection and use of personal data. Many issues arise. Should algorithms be transparent and therefore audited? Who should be responsible for the harm caused by discrimination? Is the General Data Protection Regulation fit for algorithmic bias? How could ethical constraints be included? Admittedly they could increase costs for society at the microeconomic level, yet they could help lower the costs of unfairness and inequality stemming from an automated society that wouldn’t comply with the fundamental principles of unbiasedness and lack of systematic discrimination.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

Using personalised services without privacy loss: what solutions does technology have to offer?

Online services are becoming more and more personalised. This transformation, designed to satisfy the end-user, may be seen both as an economic opportunity and as a risk, since personalised services usually require personal data to be efficient. These two perceptions do not seem compatible. Maryline Laurent and Nesrine Kaâniche, researchers at Télécom SudParis and members of the Chair Values and Policies of Personal Information, tackle this tough issue in this article. They give an overview of how technology can solve the equation by allowing both personalisation and privacy.

[divider style="normal" top="20" bottom="20"]

This article was initially published on the Chair Values and Policies of Personal Information website.

[divider style="normal" top="20" bottom="20"]

 

[dropcap]P[/dropcap]ersonalised services have become a major stake in the IT sector as they require actors to improve both the quality of the collected data and their ability to use them. Many services are running the innovation race, namely those related to companies’ information systems, government systems, e-commerce, access to knowledge, health, energy management, leisure and entertainment. The point is to offer end-users the best possible quality of experience, which in practice implies qualifying the relevance of the provided information and continuously adapting services to consumers’ uses and preferences.

Personalised services offer many perks, among which targeted recommendations based on interests, events, news, special offers for local services or goods, movies, books, and so on. Search engines return results that are usually personalised based on a user’s profile and actually start personalising as soon as a keyword is entered, by identifying semantics. For instance, the noun ‘mouse’ may refer to a small rodent if you’re a vet, a stay mouse if you’re a sailor, or a device that helps move the cursor on a computer screen if you’re an Internet user. In particular, mobile phone applications use personalisation; health and wellness apps (e.g. the new FitBit and Vivosport trackers) can come in very handy as they offer tips to improve one’s lifestyle, help users receive medical care remotely, or warn them on any possible health issue they detect as being related to a known illness.

How is personalisation technologically carried out?

When surfing the Internet and using mobile phone services or apps, users are required to authenticate. Authentication makes it possible to connect their digital identity with the personal data that is saved and collected from exchanges. Some software packages also include trackers, such as cookies, which are exchanged between a browser and a service provider or even a third party, and which make it possible to track individuals. Once an activity is linked to a given individual, a provider can easily fill out their profile with personal data, e.g. preferences and interests, and run efficient algorithms, often based on artificial intelligence (AI), to provide them with a piece of information, a service or targeted content. More rarely, personalisation may rely solely on a situation experienced by a user – the simple fact that they are geolocated in a certain place can trigger an ad or targeted content to be sent to them.

What risks may arise from enhanced personalisation?

Enhanced personalisation creates risks, for users in particular. Based on geolocation data alone, a third party may determine that a user goes to a specialised medical centre to treat cancer, or that they often spend time at a legal advice centre, a place of worship or a political party’s local headquarters. If such personal data is sold on a marketplace[1] and thus made accessible to insurers, credit institutions, employers and lessors, its use may breach user privacy and freedom of movement. And this is just one kind of data. If it were to be cross-referenced with a user’s pictures, Internet clicks, credit card purchases and heart rate… what further behavioural conclusions could be drawn? How could those be used?

One example that comes to mind is price discrimination,[2] i.e. charging different prices for the same product or service to different customers according to their location or social group. Democracies can also suffer from personalisation, as the Cambridge Analytica scandal has shown. In April 2018, Facebook confessed that U.S. citizens’ votes had been influenced through targeted political messaging in the 2016 election.

Responsible vs. resigned consumers

As pointed out in a survey carried out by the Chair Values and Policies of Personal Information (CVPIP) with French audience measurement company Médiamétrie,[3] some users and consumers have adopted data protection strategies, in particular by using software that prevents tracking or enables anonymous online browsing… Yet this requires them to make certain efforts. According to their purpose, they either choose a personalised service or a generic one to gain a certain control over their informational profile.

What if technology could solve the complex equation opposing personalised services and privacy?

Based on this observation, the Chair’s research team carried out a scientific study on Privacy Enhancing Technologies (PETs). In this study, we list the technologies that are best able to meet needs in terms of personalised services, give technical details about them and analyse them comparatively. As a result, we suggest classifying these solutions into 8 families, which are themselves grouped into the following 3 categories:

  • User-oriented solutions. Users manage the protection of their identity by themselves by downloading software that allows them to control outgoing personal data. Protection solutions include attribute disclosure minimisation and noise addition, privacy-preserving certification,[4] and secure multiparty computations (i.e. distributed among several independent collaborators).
  • Server-oriented solutions. Any server we use is strongly involved in personal data processing by nature. Several protection approaches focus on servers, as these can anonymise databases in order to share or sell data, run heavy calculations on encrypted data upon customer request, implement solutions for automatic data self-destruction after a certain amount of time, or Private Information Retrieval solutions for non-intrusive content search tools that confidentially return relevant content to customers.
  • Channel-oriented solutions. What matters here is the quality of the communication channel that connects users with servers, be it intermediated and/or encrypted, and the quality of the exchanged data, which may be damaged on purpose. There are two approaches to such solutions: securing communications and using trusted third parties as intermediators in a communication.
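As a sketch of the noise-addition approach in the user-oriented family, an attribute can be perturbed with Laplace noise before it leaves the user's device, in the spirit of differential privacy. The epsilon and sensitivity values below are illustrative assumptions, not recommendations:

```python
import math
import random

# Noise-addition PET sketch: perturb an attribute with Laplace noise
# before disclosure. Individual values become unreliable, but aggregate
# statistics over many disclosures remain usable.

def laplace_sample(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) distribution
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def noisy_age(true_age, epsilon=0.5, sensitivity=1.0, rng=None):
    # epsilon and sensitivity are hypothetical parameters for illustration
    rng = rng or random.Random()
    return true_age + laplace_sample(sensitivity / epsilon, rng)

rng = random.Random(7)
samples = [noisy_age(42, rng=rng) for _ in range(5000)]
average = sum(samples) / len(samples)

# Each disclosure is noisy, but the noise is centred on zero.
assert abs(average - 42) < 1.0
```

Smaller epsilon means more noise and stronger protection of the individual value, at the cost of less useful disclosures: the trade-off between privacy and service quality discussed throughout this article.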

Some PETs are strictly in line with the ‘data protection by design’ concept as they implement data disclosure minimisation or data anonymisation, as required by Article 25-1 of the General Data Protection Regulation (GDPR).[5] Data and privacy protection methods should be implemented at the earliest possible stages of conceiving and developing IT solutions.

Our state of the art shows that using PETs raises many issues. Through a cross-cutting analysis linking CVPIP specialists’ different fields of expertise, we were able to identify several of these challenges:

  • Using AI to better include privacy in personalised services;
  • Improving the performance of existing solutions by adapting them to the limited computing resources available to personalised services on mobile phones;
  • Looking for the best economic trade-off between privacy, use of personal data and user experience;
  • Determining how much it would cost industrial players to include PETs in their solutions, in terms of development, business model and adjustments to their Privacy Impact Assessment (PIA);
  • PETs seen as a way of bypassing or enforcing legislation.

AI4EU: a project bringing together a European AI community

On January 10th, the AI4EU project (Artificial Intelligence for the European Union), an initiative of the European Commission, was launched in Barcelona. This 3-year project led by Thales, with a budget of €20 million, aims to bring Europe to the forefront of the world stage in the field of artificial intelligence. While the main goal of AI4EU is to gather and coordinate the European AI community as a single entity, the project also aims to promote EU values: ethics, transparency and algorithmic explainability. TeraLab, the AI platform at IMT, is an AI4EU partner. Interview with its director, Anne-Sophie Taillandier.

 

What is the main goal of the AI4EU H2020 project?

Anne-Sophie Taillandier: To create a platform bringing together the Artificial Intelligence (AI) community and embodying European values: sovereignty, trust, responsibility, transparency, explainability… AI4EU seeks to make AI resources, such as data repositories, algorithms and computing power, available for all users in every sector of society and the economy. This includes everyone from citizens interested in the subject, SMEs seeking to integrate AI components, start-ups, to large groups and researchers—all with the goal of boosting innovation, reinforcing European excellence and strengthening Europe’s leading position in the key areas of artificial intelligence research and applications.

What is the role of this platform?

AST: It primarily plays a federating role. AI4EU, with 79 members in 21 EU countries, will provide a unique entry point for connecting with existing initiatives and accessing various competences and expertise pooled together in a common base. It will also play a watchdog role and will provide the European Commission with the key elements it needs to orient its AI strategy.

TeraLab, the IMT Big Data platform, is also a partner. How will it contribute to this project?

AST: Along with Orange, TeraLab coordinates the “Platform Design & Implementation” work package. We provide users with experimentation and integration tools that are easy to use without prior theoretical knowledge, which accelerates the start-up phase for projects developed using the platform. For common questions that arise when launching a new project, such as the necessary computing power, data security, etc., TeraLab offers well-established infrastructure that can quickly provide solutions.

Which use cases will you work on?

AST: The pilot use cases focus on public services, the Internet of Things (IoT), cybersecurity, health, robotics, agriculture, the media and industry. These use cases will be supplemented by open calls launched over the course of the project. These open calls will target companies and businesses that want to integrate platform components into their activities. They could benefit from the sub-grants provided for in the AI4EU framework: the European Commission funds the overall project, which in turn funds companies proposing convincing projects, from a dedicated budget of €3 million.

Ethical concerns represent a significant component of European reflection on AI. How will they be addressed?

AST: They certainly represent a central issue. The project governance will rely on a scientific committee, an industrial committee as well as an ethics committee that will ensure transparency, reproducibility and explainability by means of tools including charters, indicators and labels. Far from representing an obstacle to business development, the emphasis on ethics creates added value and a distinguishing feature for this platform and community. The guarantee that the data will be protected and will be used in an unbiased manner represents a competitive advantage for the European vision. Beyond data protection, other ethical aspects such as gender parity in AI will also be taken into account.

What will the structure and coordination look like for this AI community initiated by AI4EU?

AST: The project members will meet at 14 events in 14 different countries to gather as many stakeholders as possible throughout Europe. Coordinating the community is an essential aspect of this project. Weekly meetings are also planned. Every Thursday morning, as part of a “world café”, participants will share information, feedback, and engage in discussions between suppliers and users. A digital collaborative platform will also be established to facilitate interactions between stakeholders. In other words, we are sure to keep in touch!

 

AI4EU consortium members

SPARTA is a European project bringing together leading researchers in cybersecurity to respond to new challenges facing our increasingly connected society.

SPARTA: defining cybersecurity in Europe

The EU H2020 program is continuing its efforts to establish scientific communities in Europe through the SPARTA project dedicated to cybersecurity. This 3-year project will bring together researchers to take up new cybersecurity challenges: providing defense against new attacks, offering protection in highly connected computing environments and securing artificial intelligence. Hervé Debar, a researcher in cybersecurity at Télécom SudParis participating in SPARTA, explains the content of this European initiative led by the CEA, with the participation of Télécom ParisTech, IMT Atlantique and Mines Saint-Étienne.

 

What is the goal of SPARTA?

Hervé Debar: The overall goal of SPARTA is to establish a European cybersecurity community. The future European regulation on cybersecurity proposes to found a European center for cybersecurity competencies in charge of coordinating a community of national centers. In the future, this European center will have several responsibilities, including leading the R&D program for the European Commission in the field of cybersecurity. This will involve defining program objectives, issuing calls for proposals, selecting projects and managing their completion.

What scientific challenges must the SPARTA project take up?

HD: The project encompasses four major research programs. The first, T-SHARK, addresses the issue of detecting and fighting against cyberattacks. The second, CAPE, is aimed at validating security and safety features for objects and services in dynamic environments. The third, HAII-T, offers security solutions for hardware environments. Finally, the fourth, SAFAIR, is aimed at ensuring secure and understandable artificial intelligence.

Four IMT schools are involved in SPARTA: Télécom SudParis, IMT Atlantique, Télécom ParisTech and Mines Saint-Étienne. What are their roles in this project?

HD: The schools will contribute to different aspects of this project. The research will be carried out within the CAPE and HAII-T programs to work on issues related to hardware certification and security, or the security of industrial systems. The schools will also help coordinate the network and develop training programs.

Where did the idea for this project originate?

HD: It all started with the call for proposals by the H2020 program for establishing and operating a pilot cybersecurity competencies network. As soon as the call was launched, the French scientific community came together to prepare and coordinate a response. The major constraints were related to the need to bring together at least 20 partners from at least 9 countries to work on 4 use cases. The project has been established with four national communities: France, Spain, Italy and Germany. It includes a total of 44 partners from 13 countries to work on 4 R&D programs.

Which use cases will you work on?

HD: The project defines several use cases; this was one of the eligibility requirements for the proposal. The first use case is connected vehicles: verifying their cybersecurity and operational safety features, which could be integrated into a test vehicle and assessed along the lines of Euro NCAP. The second use case will look at complex and dynamic software systems, to ensure user confidence in complex computer systems and to study the impact of rapid development cycles on security and reliability. The intended applications are in the areas of finance and e-government. Other use cases will be developed over the course of the project.

What will the structure and coordination look like for this SPARTA community?

HD: A network of organizations outside SPARTA partners will be required to coordinate the community. The organizations that have been contacted are interested in the operations and results of the SPARTA project for several reasons. Two types of organizations have been contacted: professional organizations and public institutions. In terms of institutions, French regions, including Ile-de-France and Brittany, are contributing to defining the strategy and co-funding the research. In terms of professional organizations, the ACN (Alliance pour la Confiance Numérique) and competitiveness clusters like Systematic help provide information on the needs of the industrial sector and enrich the project’s activities.

 

[divider style=”solid” top=”20″ bottom=”20″]

SPARTA: a diverse community with bold ambitions

The SPARTA consortium, led by the CEA, brings together a balanced group of 44 stakeholders from 14 Member States. In France, this includes ANSSI, IMT, INRIA, Thales and YesWeHack. The consortium is seeking to re-imagine the way cybersecurity research, innovation, and training are performed in the European Union through various fields of study and expertise and scientific foundations and applications in the academic and industrial sectors. By pooling and coordinating these experiences, competencies, capacities and challenges, SPARTA will contribute to ensuring the strategic autonomy of the EU.

[divider style=”solid” top=”20″ bottom=”20″]


From personal data to artificial intelligence: who benefits from our clicking?

Clicking, liking, sharing: all of our digital activities produce data. This information, which is collected and monetized by big digital information platforms, is on its way to becoming the virtual black gold of the 21st century. Have we all become digital workers? Digital labor specialist and Télécom ParisTech researcher Antonio Casilli has recently published a work entitled En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work). He sat down with us to shed some light on this exploitation 2.0.

 

Who we are, what we like, what we do, when and with whom: our virtual personal assistants and other digital contacts know everything about us. The digital space has become the new sphere of our private lives. This virtual social capital is the raw material for tech giants. The profitability of digital platforms like Facebook, Airbnb, Apple and Uber relies on the massive analysis of users’ data for advertising purposes. In his work entitled En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work), Antonio Casilli explores the emergence of surveillance capitalism, an opaque and invisible form of capitalism marking the advent of a new form of digital proletariat: digital labor – or working with our digits. From the click worker who performs microtasks, who is aware of and paid for his activity, to the user who produces data implicitly, the sociologist analyzes the hidden face of this work carried out outside the world of work, and the all too-tangible reality of this intangible economy.

Read on I’MTech: What is digital labor?

Antonio Casilli focuses particularly on net platforms’ ability to put their users to work, convinced that they are consumers more than producers. “Free access to certain digital services is merely an illusion. Each click fuels a vast advertising market and produces data which is mined to develop artificial intelligence. Every “like”, post, photo, comment and connection fulfils one condition: producing value. This digital labor is either very poorly paid or entirely unpaid, since no one receives compensation that measures up to the value produced. But it is work nevertheless: a source of value that is traced, measured, assessed and contractually-regulated by the platforms’ terms and conditions for use,” explains the sociologist.

The hidden, human face of machine learning

For Antonio Casilli, digital labor is a new form of work which remains invisible, but is produced from our digital traces. Far from marking the disappearance of human labor with robots replacing the work they once did, this click work challenges the boundaries between work that is produced implicitly and formally recognizable employment. And for good reason: microworkers paid by the task or user-producers like ourselves are indispensable to these platforms. This data serves as the basis for machine learning models: behind the automation of a given task, such as visual or text recognition, humans are actually fueling applications by indicating clouds on images of the sky, for example, or by typing out words.

“As conventional wisdom would have it, these machines learn by themselves. But to train their algorithms to calibrate, or to improve their services, platforms need a huge number of people to train and test them,” says Antonio Casilli. One of the best-known examples is Mechanical Turk, a service offered by the American giant Amazon. Ironically, its name is a reference to a hoax that dates back to the 18th century. An automaton chess player, called the “Mechanical Turk” was able to win games against human opponents. But the Turk was actually operated by a real human hiding inside.

Likewise, certain so-called “smart” services rely heavily on unskilled workers: a sort of “artificial” artificial intelligence. In this work designed to benefit machines, digital workers are poorly paid to carry out micro-tasks. “Digital labor marks the appearance of a new way of working which can be called “taskified”, since human activity is reduced to a simple click, and “datafied”, because it is a matter of producing data so that digital platforms can obtain value from it,” explains Antonio Casilli. And this is how data can do harm. Alienation and exploitation: beyond internet task workers in northern countries, it is more often their counterparts in India, the Philippines and other developing countries with low average incomes who are paid less than one cent per click.

Legally regulating digital labor?

For now, these new forms of work are exempt from salary standards. Nevertheless, in recent years there has been an increasing number of class action suits against tech platforms to claim certain rights. Following the example of Uber drivers and Deliveroo delivery people, individuals have taken legal action in an attempt to have their commercial contracts reclassified as employment contracts. Antonio Casilli sees three possible ways to help combat job insecurity for digital workers and bring about social, economic and political recognition of digital labor.

From Uber to platform moderators, traditional labor law (meaning reclassifying workers as salaried employees) could lead to the recognition of their status. “But dependent employment may not be a one-size-fits-all solution. There are also a growing number of cooperative platforms being developed, where the users become owners of the means of production and algorithms.” Still, for Antonio Casilli, there are limitations to these advances. He sees a third possible solution. “When it comes to our data, we are not small-scale owners or small-scale entrepreneurs. We are small-scale data workers. And this personal data, which is neither private nor public, belongs to everyone and no one. Our privacy must be a collective bargaining tool. Institutions must still be invented and developed to make it into a real common asset. The internet is a new battleground,” says the researcher.

Toward taxation of the digital economy

Would this make our personal data less personal? “We all produce data. But this data is, in effect, a collective resource, which is appropriated and privatized by platforms. Instead of paying individuals for their data on a piecemeal basis, these platforms should give back the value extracted from it to national or international authorities, through fair taxation,” explains Antonio Casilli. In May 2018, the General Data Protection Regulation (GDPR) came into effect in the European Union. Among other things, this text protects data as a personality attribute rather than as property. Therefore, in theory, everyone can now freely consent, at any moment, to the use of their personal data and withdraw this consent just as easily.

While in its current form, regulation involves a set of protective measures, setting up a tax system like the one put forward by Antonio Casilli would make it possible to establish an unconditional basic income. The very act of clicking or sharing information could give individuals a right to these royalties and allow each user to be paid for any content posted online. This income would not therefore be linked to the tasks carried out but would recognize the value created through these contributions. In 2020, over 20 billion devices will be connected to the Internet of Things. According to some estimates, the data market could reach nearly €430 billion per year by then, which is equivalent to a third of France’s GDP. Data is clearly a commodity unlike any other.

[divider style=”dotted” top=”20″ bottom=”20″]

En attendant les robots, enquête sur le travail du clic (Waiting for Robots, an Inquiry into Click Work)
Antonio A. Casilli
Éditions du Seuil, 2019
400 pages
24 € (paperback) – 16,99 € (e-book)

 

Original article in French written by Anne-Sophie Boutaud, for I’MTech.

 


Improving organization in hospitals through digital simulation

How can we improve emergency room wait times, the way scheduled hospitalizations are managed and cope with unexpected surges of patients? Vincent Augusto, a researcher in healthcare systems engineering at Mines Saint-Étienne is working to find solutions to these problems. He is developing programs based on digital simulation, aimed at optimizing influxes of patients and waiting times at the hospital, especially in emergency care facilities.

 

Chronic emergency department saturation and unacceptable wait times for receiving care are regularly listed among areas in need of improvement. Several of these areas have been studied: taking preventive action beforehand to reduce influxes of patients, organization within emergency departments, and managing hospitalizations in advance. Vincent Augusto and his team from the MedTechDesign living lab at the engineering and healthcare center at Mines Saint-Étienne have developed models that contribute to the last two areas by providing valuable information. “We worked on successive projects with hospitals to develop programs using digital simulation. The principle is that any system can potentially be monitored and reproduced based on the data it generates; being able to process this data in real time would help to optimize resources. Unfortunately, major inequalities exist in terms of computerization from one hospital to another.”

Vincent Augusto specializes in modeling, analyzing and managing inflows of patients in hospitals. “At the hospital in Firminy, we modeled unforeseen arrivals in the emergency department to get a better idea of the number of beds required and to improve planning for scheduled patients.” The departments schedule hospitalizations for patients needing diagnostic scans or treatment. However, since it is difficult to predict the number of available places in advance, scheduled hospitalizations must sometimes be canceled at the last minute, forcing patients to wait longer to receive care. On the other hand, the shortage of beds leads to overcrowded emergency services. Improving the management of the internal and external flow of patients in hospitals is therefore of utmost importance.

A modular digital twin

At the university hospital (CHU) in Saint-Étienne, the team developed a digital twin for the emergency department. This twin helped assess the different measures that could be implemented to improve emergency operations. Vincent Augusto explains how this was developed: “First, there is an on-site observation phase. We collect data using existing software. Next, there is a development phase in which we seek to understand and model the flow of patients in the department and create an initial model on paper that is confirmed by the department staff. We can then create a digital assessment model that reproduces the way the emergency department operates, which then undergoes a validation phase.”

The researchers use the department’s activities from the previous year to accomplish this. They enter the data into the system and check if the indicators predicted by the model match those recorded at the time. This approach involves three different components: the first analyzes the patient care circuit, the second analyzes human resources based on type of activity and the third focuses on the organization and interdependence of the resources. “Once this model has been validated, we can use the modular system to test different scenarios: we can alter the human resources, simulate the arrival of an inflow of patients, reduce the wait time for taking further tests—such as scans—or the time required to transfer a patient to a hospital ward,” the researcher explains.
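The kind of assessment model described here is typically a discrete-event simulation of patient flow. The toy model below, with made-up parameters and exponential arrival and treatment times (an assumption for illustration, not the team's actual model), shows how such a simulation lets one compare staffing scenarios before changing anything in the real department:

```python
import heapq
import random

def simulate_ed(n_patients: int, n_teams: int, mean_interarrival: float,
                mean_treatment: float, seed: int = 0) -> float:
    """Average wait (arrival to start of care) in a toy emergency department."""
    rng = random.Random(seed)
    free_at = [0.0] * n_teams          # time at which each care team is next free
    heapq.heapify(free_at)
    now, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        now += rng.expovariate(1.0 / mean_interarrival)   # next patient arrives
        soonest_free = heapq.heappop(free_at)             # least-loaded team
        start = max(now, soonest_free)                    # wait if all teams busy
        total_wait += start - now
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_treatment))
    return total_wait / n_patients

# Compare staffing scenarios under identical random draws (same seed).
wait_3_teams = simulate_ed(2000, 3, mean_interarrival=5.0, mean_treatment=12.0)
wait_4_teams = simulate_ed(2000, 4, mean_interarrival=5.0, mean_treatment=12.0)
```

Running the same arrival stream with three versus four care teams gives a first-order estimate of how much average waiting time an extra team would buy, which is exactly the kind of what-if question a validated model is used to answer.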

The first measure tested was to divide emergencies into three groups: serious emergencies (road accidents, respiratory problems, etc.), functional emergencies (sprains, wounds requiring stitches, etc.) and fast functional emergencies (requiring care that can be quickly provided). Upon entering, the patients are directed to one of these three groups led by different teams. According to Vincent Augusto and the system users, “this makes it possible to clearly assess the savings in terms of time and costs that are related to organizational changes or an increase in human resources, before any real changes are made. This is a big plus for the departments, since organizational changes can be very time-consuming and costly and sometimes have only a small impact.”

The real impact the organizational measures would have on emergency department operations was assessed and made it possible to continue work on another potential area for improvement: the creation of a psychiatric section within the emergency department, with beds reserved for these patients. To help draw up the plans for the future emergency services, the team from Mines Saint-Étienne is developing a virtual reality interface to directly and realistically view flows of patients more easily than the indicators and charts generated by the digital simulation system. The goal is to optimize the patient circuit within the department and the medical care they receive.

Improving hospitals’ resilience in unexpected events

This method also offers management support for crisis situations involving a massive influx of patients to the emergency department in the event of disasters, attacks or epidemics. “The system was developed to manage, in addition to the usual flow of patients, an exceptional yet predictable arrival of patients,” the researcher explains. It is therefore useful for hospital surge plans (“plans de tension” in French): exceptional situations that push the system beyond its capacity. In such cases, the department faces a critical emergency-care situation that can lead to a French emergency “white plan” being declared, in which non-priority activities are cancelled.

To accomplish this, the program is updated in real time via a direct connection to the hospital’s computer systems. It can therefore determine the exact state of the department at any time. By entering a varying number of patients with specific pathologies in a given situation (flu-related respiratory difficulties, gunshot wounds, etc.), the simulation can determine the most effective measures to take. This is what the engineers call an operational tool. “In the short and medium term, the departments now have a tool that can help them optimize their response to the problems they face and improve the care patients receive,” concludes Vincent Augusto.

Original article in French written by Sarah Balfagon, for I’MTech.


Wi6labs: customized sensor networks

Wi6labs, a start-up incubated at IMT Atlantique, installs connected sensor networks for municipalities and industries. What makes this startup so unique? It offers custom-developed private networks that are easy to install. When it comes to controlling energy networks, water supply and monitoring air quality, the solution proposed by Wi6labs is attractive due to its simplicity and the savings it offers. The startup is part of the IMT delegation to CES 2019 in Las Vegas.

 

It all started three years ago. In July 2016, the mayor of Saint-Sulpice-la-Forêt, a municipality located 10km northeast of Rennes, France, became aware of a leak in the city’s water system. For one year, the municipality’s water bill had been constantly increasing. All in all, the water leaked was equivalent to 26 Olympic-sized swimming pools. The fact that this leak was discovered came as a relief to the mayor. But how could he prevent undetected occurrences like this from happening again? To avoid wasting more water, Saint-Sulpice-la-Forêt contacted a local start-up: Wi6labs.

“We proposed installing sensors in the water system,” recalls the start-up’s founder, Ulrich Rousseau. “In just one night, these objects can detect and locate a leak.” Satisfied with the results, the mayor renewed the partnership to monitor the temperature and energy consumption in public buildings. The sensor network revealed, for example, that the town’s school was being heated at night and during school vacations. By adapting its practices based on data from the connected sensors, the municipality saved €7,400 of its annual energy expenditure of €50,000 over the next year. “The investment of €20,000 for installing our solution paid for itself in three years,” Ulrich Rousseau explains.
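The overnight leak-detection principle can be sketched in a few lines: in a healthy network, metered flow drops to near zero during low-use night hours, so a persistent nighttime baseline suggests a leak. The threshold, hours and data layout below are illustrative assumptions, not Wi6labs' actual algorithm:

```python
def detect_leak(hourly_flow: list[float], night_hours=(2, 3, 4),
                threshold: float = 0.05) -> bool:
    """Flag a probable leak: flow never falls below `threshold` m3/h at night.

    hourly_flow: 24 metered readings (m3/h), indexed by hour of day.
    """
    night_min = min(hourly_flow[h] for h in night_hours)
    return night_min > threshold

# A healthy profile: consumption peaks morning and evening, zero overnight.
healthy = [0.0] * 24
healthy[7], healthy[19] = 1.2, 0.9
# A leaky profile: a constant 0.3 m3/h offset that never disappears.
leaky = [v + 0.3 for v in healthy]
```

In practice the minimum nighttime flow also gives a rough estimate of the leak's size, which is what makes a single night of readings enough to raise an alarm.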

For Wi6labs, the Saint-Sulpice-la-Forêt experience was a pilot experiment used to test the start-up’s relevance. The operation’s success allowed them to propose this solution to other local municipalities and companies. Each time, there was a common theme: a water leak. “It’s our starting point with customers. They all deal with this problem and are convinced that our approach will help them manage it,” he explains. Once the system is installed for the water meter and the initial data is retrieved, the changes in practices aimed at reducing the water bill provide convincing proof for continuing with the operation.

The start-up can then offer its customers solutions for monitoring air quality and adjusting gas consumption. In its partnership with Keolis, a public transport operator, Wi6labs developed a sensor network to inform the company of the number of passengers using its buses in real time. “We study specific cases, for both municipalities and companies, and we respond with a customized solution that meets a wide range of needs,” Ulrich Rousseau explains.

Wi6labs conquers dead zones

All the start-up’s solutions are built on its product Wiotys, a platform used to control a LPWAN network. These low-power, long-range networks enable communication between connected objects. Wiotys makes it possible to install sensor networks that are independent and isolated. In other words, the sensors used by Saint-Sulpice-la-Forêt only communicate amongst themselves and are controlled locally. This approach is therefore different from those used by telecommunication operators like Orange and Bouygues, which deploy national networks connecting the sensors.

This difference has vast implications. First, there are the advantages. Wiotys networks are not limited by the dead zones in the major operators’ networks. Saint-Sulpice-la-Forêt, for example, does not benefit from any LPWAN networks from national operators. It is therefore impossible to connect their sensors to a national network. Secondly, this allows them to create custom solutions. For example, if a company wants to charge its customers based on data from a sensor, it must send information through the network’s downlink channel, in other words, in the opposite direction from the uplink channel, which sends information from the sensor to the platform. “Operators are not comfortable doing this because it is expensive to reserve part of the network for downlink data transmission to the sensor. For us, it is simply a question of taking this need into account when dimensioning the network,” Ulrich Rousseau explains.

However, they cannot offer some of the features operators can. This is the case with roaming: a sensor’s capacity to switch from one connection terminal to another as it moves. “For our customers, this is not generally a problem, since water meters and air sensors are stationary,” the founder of Wi6labs explains. The start-up has strategically chosen to eliminate certain complex features to make installation easier. “What we sell our customers is a quick solution that is easy to deploy. It’s a little like installing a router at home: you plug it in, and it works.”

Today, Ulrich Rousseau assures us that the start-up no longer faces any technological barriers. Its use cases have involved working 20 meters underground and responding to complex requests from customers. The true limit is that of social acceptability, especially for municipalities. “All of a sudden, we must explain to the civil servant who used to enter meter readings into an Excel spreadsheet that our sensors will be taking over this task,” Ulrich Rousseau explains. “We have to change his tasks and train him to learn how to control the sensors.”

These are no small changes for civil servants who for years have performed tasks unrelated to digital technology. For a municipality, this also requires adjustments to integrate training time and new tasks for civil servants. Social resistance can therefore be significant and the legitimacy of these reactions should not be minimized. According to Ulrich Rousseau, Wi6labs is also responsible for explaining the significant and valuable results of these changes. “We must be educators. For us, this involves showing local citizens and civil servants the savings in euros for the municipality in practical terms, rather than talking about kilowatt hours.” In essence: changing citizens’ perception of energy to increase their awareness of the energy and environmental transition.