AI4EU: a project bringing together a European AI community

On January 10th, the AI4EU project (Artificial Intelligence for the European Union), an initiative of the European Commission, was launched in Barcelona. This 3-year project led by Thales, with a budget of €20 million, aims to bring Europe to the forefront of the world stage in the field of artificial intelligence. While the main goal of AI4EU is to gather and coordinate the European AI community as a single entity, the project also aims to promote EU values: ethics, transparency and algorithmic explainability. TeraLab, the AI platform at IMT, is an AI4EU partner. Interview with its director, Anne-Sophie Taillandier.

 

What is the main goal of the AI4EU H2020 project?

Anne-Sophie Taillandier: To create a platform bringing together the Artificial Intelligence (AI) community and embodying European values: sovereignty, trust, responsibility, transparency, explainability… AI4EU seeks to make AI resources, such as data repositories, algorithms and computing power, available for all users in every sector of society and the economy. This includes everyone from citizens interested in the subject, SMEs seeking to integrate AI components, start-ups, to large groups and researchers—all with the goal of boosting innovation, reinforcing European excellence and strengthening Europe’s leading position in the key areas of artificial intelligence research and applications.

What is the role of this platform?

AST: It primarily plays a federating role. AI4EU, with 79 members in 21 EU countries, will provide a unique entry point for connecting with existing initiatives and accessing various competences and expertise pooled together in a common base. It will also play a watchdog role and will provide the European Commission with the key elements it needs to orient its AI strategy.

TeraLab, the IMT Big Data platform, is also a partner. How will it contribute to this project?

AST: Along with Orange, TeraLab coordinates the “Platform Design & Implementation” work package. We provide users with experimentation and integration tools that are easy to use without prior theoretical knowledge, which accelerates the start-up phase for projects developed using the platform. For common questions that arise when launching a new project, such as the necessary computing power, data security, etc., TeraLab offers well-established infrastructure that can quickly provide solutions.

Which use cases will you work on?

AST: The pilot use cases focus on public services, the Internet of Things (IoT), cybersecurity, health, robotics, agriculture, the media and industry. These use cases will be supplemented by open calls launched over the course of the project. These open calls will target companies and businesses that want to integrate platform components into their activities. They could benefit from the sub-grants provided for in the AI4EU framework: the European Commission funds the overall project, which in turn funds companies proposing convincing projects through a total dedicated budget of €3 million.

Ethical concerns represent a significant component of European reflection on AI. How will they be addressed?

AST: They certainly represent a central issue. The project governance will rely on a scientific committee, an industrial committee as well as an ethics committee that will ensure transparency, reproducibility and explainability by means of tools including charters, indicators and labels. Far from representing an obstacle to business development, the emphasis on ethics creates added value and a distinguishing feature for this platform and community. The guarantee that the data will be protected and used in an unbiased manner represents a competitive advantage for the European vision. Beyond data protection, other ethical aspects such as gender parity in AI will also be taken into account.

What will the structure and coordination look like for this AI community initiated by AI4EU?

AST: The project members will meet at 14 events in 14 different countries to gather as many stakeholders as possible throughout Europe. Coordinating the community is an essential aspect of this project. Weekly meetings are also planned. Every Thursday morning, as part of a “world café”, participants will share information, feedback, and engage in discussions between suppliers and users. A digital collaborative platform will also be established to facilitate interactions between stakeholders. In other words, we are sure to keep in touch!

 

AI4EU consortium members

SPARTA is a European project bringing together leading researchers in cybersecurity to respond to new challenges facing our increasingly connected society.

SPARTA: defining cybersecurity in Europe

The EU H2020 program is continuing its efforts to establish scientific communities in Europe through the SPARTA project, dedicated to cybersecurity. This 3-year project will bring together researchers to take up new cybersecurity challenges: defending against new attacks, protecting highly connected computing environments and securing artificial intelligence. Hervé Debar, a researcher in cybersecurity at Télécom SudParis participating in SPARTA, explains the content of this European initiative led by the CEA, with the participation of Télécom ParisTech, IMT Atlantique and Mines Saint-Étienne.

 

What is the goal of SPARTA?

Hervé Debar: The overall goal of SPARTA is to establish a European cybersecurity community. The future European regulation on cybersecurity proposes to found a European center for cybersecurity competencies in charge of coordinating a community of national centers. In the future, this European center will have several responsibilities, including leading the R&D program for the European Commission in the field of cybersecurity. This will involve defining program objectives, issuing calls for proposals, selecting projects and managing their completion.

What scientific challenges must the SPARTA project take up?

HD: The project encompasses four major research programs. The first, T-SHARK, addresses the issue of detecting and fighting against cyberattacks. The second, CAPE, is aimed at validating security and safety features for objects and services in dynamic environments. The third, HAII-T, offers security solutions for hardware environments. Finally, the fourth, SAFAIR, is aimed at ensuring secure and understandable artificial intelligence.

Four IMT schools are involved in SPARTA: Télécom SudParis, IMT Atlantique, Télécom ParisTech and Mines Saint-Étienne. What are their roles in this project?

HD: The schools will contribute to different aspects of this project. The research will be carried out within the CAPE and HAII-T programs to work on issues related to hardware certification and security, or the security of industrial systems. The schools will also help coordinate the network and develop training programs.

Where did the idea for this project originate?

HD: It all started with the call for proposals by the H2020 program for establishing and operating a pilot cybersecurity competencies network. As soon as the call was launched, the French scientific community came together to prepare and coordinate a response. The major constraints were related to the need to bring together at least 20 partners from at least 9 countries to work on 4 use cases. The project has been established with four national communities: France, Spain, Italy and Germany. It includes a total of 44 partners from 13 countries to work on 4 R&D programs.

Which use cases will you work on?

HD: The project defines several use cases; this was one of the eligibility requirements for the proposal. The first use case is that of connected vehicles, verifying their cybersecurity and operational safety features, which could be integrated into a test vehicle in the manner of Euro NCAP assessments. The second use case will look at complex and dynamic software systems to ensure user confidence in complex computer systems and study the impact of rapid development cycles on security and reliability. The intended applications are in the areas of finance and e-government. Other use cases will be developed over the course of the project.

What will the structure and coordination look like for this SPARTA community?

HD: A network of organizations outside SPARTA partners will be required to coordinate the community. The organizations that have been contacted are interested in the operations and results of the SPARTA project for several reasons. Two types of organizations have been contacted: professional organizations and public institutions. In terms of institutions, French regions, including Ile-de-France and Brittany, are contributing to defining the strategy and co-funding the research. In terms of professional organizations, the ACN (Alliance pour la Confiance Numérique) and competitiveness clusters like Systematic help provide information on the needs of the industrial sector and enrich the project’s activities.

 

[divider style=”solid” top=”20″ bottom=”20″]

SPARTA: a diverse community with bold ambitions

The SPARTA consortium, led by the CEA, brings together a balanced group of 44 stakeholders from 14 Member States. In France, this includes ANSSI, IMT, INRIA, Thales and YesWeHack. The consortium is seeking to re-imagine the way cybersecurity research, innovation, and training are performed in the European Union through various fields of study and expertise and scientific foundations and applications in the academic and industrial sectors. By pooling and coordinating these experiences, competencies, capacities and challenges, SPARTA will contribute to ensuring the strategic autonomy of the EU.

[divider style=”solid” top=”20″ bottom=”20″]

AFA: seven new projects, German-French Academy

Industry of the future: The German-French Academy launches seven new projects

Following a call for proposals launched by the German-French Academy for researchers at IMT and TUM (Technische Universität München), seven projects were selected in October 2018. The projects focus on key topics for the German-French Academy for the Industry of the Future. A French-German platform for AI will soon be launched.

 

The selected projects focus on six topics: AI for the industry of the future, advanced manufacturing, advanced materials, supply chain and logistics, energy efficiency, and industrial design and processes. For the initial seed stage, they will be funded by the German-French Academy for the Industry of the Future, founded by IMT and TUM.

For Christian Roux, Executive Vice President for Research and Innovation at IMT, “The German-French Academy is expanding its scope of exploration to provide solutions to strategic topics related to the industry of the future in order to support and accelerate the digital transformation of French and German industry.”

Alloy Design for Additive Manufacturing (ADAM)

This project focuses on additive manufacturing, in particular through laser beam melting (LBM). It aims to optimize the choice of the alloy composition used in this additive manufacturing process so as to limit defects and optimize mechanical properties in the final product. This optimization will be based on processing large amounts of data collected by the research team at Mines Saint-Étienne, and on experimentation resources equipped with very high-speed cameras at TUM.

Additive Manufacturing for the Industry of the Future

This project aims to analyze the impact of the introduction of additive manufacturing in industry, focusing on three main areas. The first area involves industrial organization (supply chain, use of open-source materials, integration in new innovation ecosystems), the second concerns companies (new duties, new skills, new business models) and the final area focuses on organizational changes in the design process (new possibilities for design, mass customization, user-centered design, etc.). The changes resulting directly from the introduction of additive manufacturing itself will also be studied.

Smart Artificial Intelligence based Modeling and Automation of Networked Systems (AI4Performance)

This project is intended to develop a smart approach for testing and evaluating networked systems while collecting data at the same time. The process will be based on using innovative learning methods (graph neural networks) on data provided by partners Cisco and Airbus. This will involve analyzing the impact of changes (increase in the number of users, integration of new sub-systems, virtual machines, etc.), detecting bottlenecks, analyzing root causes and detecting malfunctions.
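The graph-based learning approach mentioned above can be illustrated with a minimal sketch of message passing, the core operation of a graph neural network. The topology, feature values and averaging rule below are invented for illustration and are not taken from the AI4Performance project.

```python
# Minimal sketch of one message-passing step, the core operation of a
# graph neural network: each node updates its feature vector by averaging
# its neighbours' features, then blending the result with its own state.
# Real GNNs add learned weights and non-linearities on top of this.

def message_passing_step(adjacency, features):
    """One round of mean-aggregation over neighbours."""
    updated = {}
    for node, neighbours in adjacency.items():
        if neighbours:
            agg = [sum(features[n][i] for n in neighbours) / len(neighbours)
                   for i in range(len(features[node]))]
        else:
            agg = features[node]
        # Combine own state with the aggregated neighbourhood signal
        updated[node] = [(own + msg) / 2 for own, msg in zip(features[node], agg)]
    return updated

# Toy network: three hosts in a line, with "load" as a 1-dimensional feature
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
features = {"a": [1.0], "b": [0.0], "c": [1.0]}

print(message_passing_step(adjacency, features))
```

In a real system, this aggregation would be repeated over several rounds with learned weights, so that each node's representation comes to reflect the state of its wider network neighbourhood.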

Data-driven Collaboration in Industrial Supply Chains (DISC)

Against the backdrop of digital transformation of industry, this project focuses on supply chain optimization through collaboration, especially in terms of incentives for information sharing. The approach will rely on methods derived from game theory to improve the decision-making process, which is increasingly decentralized as a result of digital transformation.

Modeling and decision-making platform for Reconfigurable, Digitalized Servitized Production systems (RDS-Production)

This research project aims to develop methods for designing reconfigurable production systems, drawing on the modeling of interoperable components and software (using digital twins), AI techniques and operational research for decision-making support in reconfigurations, service life-cycle approaches for production system equipment, and multi-criteria decision-making methods.

Smart Sensor Technology with Decentralized Processing Architecture

This project seeks to develop a new approach for integrating sensors into systems such as automobiles or eHealth devices. Smart sensors will distribute and process data starting at the sensor level, across different layers. Through this multi-layer, adaptable system, storage and processing needs will be distributed in an optimized way to ensure security, reliability, robustness and scalability.

A French-German platform for AI

Joint Platform and Tools for Artificial Intelligence based on Data-Intensive Research and Data Governance.

TeraLab, IMT’s artificial intelligence platform, and TUM will work together to create a shared platform for AI. This will allow researchers at both institutes and their industrial partners to work in close collaboration through shared, secure access to data. The project also includes the possibility for researchers to showcase their research and results, with the development of tools that can test the algorithms and data sets used. This secure, neutral and trustworthy service will facilitate the reproducibility of results within a shared framework of good practices.

[divider style=”normal” top=”20″ bottom=”20″]

For five years IMT has been developing TeraLab, an AI platform aimed at accelerating research and innovation, giving researchers, innovative companies and industrial players the opportunity to work together in a secure, sovereign, neutral environment using real data.

A true success story of the Investissements d’avenir (Future Investments) program, TeraLab is now involved in 60 industrial projects and is a key player in three H2020 projects on industry (MIDIH, BOOST 4.0, AI4EU). It has been awarded the “Silver i-Space” label at the European level for three years.

[divider style=”normal” top=”20″ bottom=”20″]

Browse all articles on the German-French Academy for the Industry of the Future

 

 

ANIMATAS

Robots teaching assistants

The H2020 ANIMATAS project, launched in January 2018 for a four-year period, seeks to introduce robots with social skills in schools to assist teaching staff. Chloé Clavel, a researcher in affective computing at Télécom ParisTech, one of the project’s academic partners, answered our questions on her research in artificial intelligence for the ANIMATAS project.

 

What is the overall focus of the European project ANIMATAS?

The ANIMATAS project focuses on an exciting application of research in artificial intelligence related to human-agent interactions: education in schools. The project also contributes to other work and research being carried out to integrate new technologies into new educational methods for schools.

The project’s objectives are focused on research in affective computing and social robotics. More specifically, the project aims to develop computational models to provide the robots and virtual characters with social skills in the context of interactions with children and teachers in the school setting.

What are the main issues you are facing?

We are working on how we can incorporate robots to allow children to learn a variety of skills, such as computational thinking or social skills. The first issue concerns the robot or virtual character’s role in this learning context and in the pre-existing interactions between children and teachers (for example, counselors, colleagues or partners in the context of a game).

Another important issue relates to the capacity of the underlying computational models responsible for the robots’ behavior to adapt to a variety of situations and different children. The objective is for the robot to be attentive to its environment and remain active in its learning while interacting with children.

Finally, there are significant ethical issues involved in the experiments and the development of computational models in accordance with European recommendations. These issues are handled by the ethics committee.

Who else is involved with you in this project, and what are the important types of collaboration in your research?

We have partners from major European academic laboratories. The majority are researchers in the field of affective computing, but also include researchers in educational technologies from the École Polytechnique Fédérale de Lausanne (EPFL), with whom we are working on the previously mentioned issue of the robot’s role in the learning process.

Three researchers from Télécom ParisTech are involved in this project: Giovanna Varni and myself, from the Images, Data and Signal Department and Nicolas Rollet, from the Economic and Social Sciences Department.

ANIMATAS requires skills in computer science, linguistics, cognitive sciences and pedagogy… What areas are the researchers from Télécom ParisTech contributing to?

Télécom ParisTech is contributing skills in affective computing and computational linguistics. More specifically, my PhD student, Tanvi Dinkar, and I are working on the automatic analysis of disfluencies (for example, hesitations, unfinished words or sentences) as a sign of a child’s emotions or stress in the learning process, or their level of confidence in their skills (feeling of knowledge) in the context of their interactions with other children, the teacher or the robot.
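As a rough illustration of what counting disfluency markers can look like, here is a minimal rule-based sketch. The marker lists and regular expressions are invented for this example; the actual research relies on far richer speech and language models.

```python
import re

# Naive rule-based disfluency spotter: counts filled pauses ("uh", "um", ...)
# and immediate word repetitions. A crude proxy for the kind of markers
# studied at Télécom ParisTech; real systems model speech far more richly.

FILLED_PAUSE = re.compile(r"\b(uh+|um+|er+|hmm+)\b", re.IGNORECASE)
REPETITION = re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE)  # e.g. "the the"

def disfluency_counts(utterance):
    return {
        "filled_pauses": len(FILLED_PAUSE.findall(utterance)),
        "repetitions": len(REPETITION.findall(utterance)),
    }

print(disfluency_counts("um I I think the the answer is uh four"))
```

Counts like these could then serve as simple features hinting at hesitation or stress, which is the kind of signal the project seeks to interpret in children's interactions.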

How will the results of ANIMATAS be used?

The project is a research training network, and one of its main objectives is to train PhD students (15 for ANIMATAS) in research as well as uniting work around affective computing and social robotics for education.

The project also includes unfunded partners, such as companies in the field of robotics and major US laboratories such as the ICT (Institute for Creative Technologies) at USC (University of Southern California), who provide us with regular feedback on scientific advances made by ANIMATAS and their industrial potential.

A workshop aimed at promoting ANIMATAS research among industrialists will be organized in September 2020 at Télécom ParisTech.

What is the next step in this project?

The kick-off of the project took place in February 2018. We are currently working with schools and educational partners to define interaction scenarios and learning tasks to collect the data we will use to develop our social interaction models.


Empathic

AI to assist the elderly

Caring and expressive artificial intelligence? This concept, which seems to come straight from a man-machine romance like the movie “Her”, is in fact at the heart of a Horizon 2020 project called EMPATHIC. The project aims to develop software for a virtual, customizable coach for assisting the elderly. To learn more, we interviewed the project’s Scientific Director for Télécom SudParis and expert in voice recognition, Dijana Petrovska-Delacretaz.

[divider style=”normal” top=”20″ bottom=”20″]

The original version of this article (in French) was published on the Télécom SudParis website.

[divider style=”normal” top=”20″ bottom=”20″]

What is the goal of the Empathic project?

Dijana Petrovska-Delacretaz: The project’s complete title is “Empathic Expressive Advanced Virtual Coach to Improve Independent Healthy Life-Years of the Elderly”. The goal is to develop an advanced, empathetic and expressive virtual coach to assist the elderly in daily life. This interface equipped with artificial intelligence and featuring several options would be adaptable, customizable and available on several types of media: PC, smartphone or tablet. We want to use a wide range of audiovisual information and interpret it to take the users’ situation into account and try to respond and interact with them in the most suitable way possible to offer them assistance. This requires us to combine several types of technology, such as signal processing and biometrics.

How will you design this AI?

DPD: We will combine signal processing and speech, face and shape recognition using “deep networks” (deep artificial neural networks). In other words, we will reproduce our brain’s structure using computer and processor calculations. This new paradigm has become possible thanks to the massive increase in storage and computing capacities, which allow us to do much more in much less time.

We also use 3D modeling technology to develop avatars with more expressive faces (whether a human, cat, robot, hamster or even a dragon) that can be adapted to the user’s preferences to facilitate the dialogue between the user and the virtual coach. The most interesting solution from a biometric and artificial intelligence standpoint is to include as many options as possible: from using the voice and facial expressions to recognizing emotions. All of these aspects will help the virtual coach recognize its user and have an appropriate dialogue with him or her.

Why not create a robot assistant?

The robot NAO is used at Télécom SudParis in work on recognition and gesture interpretation.


DPD: The software can be fully integrated into a robot like Nao, which is already used here on the HadapTIC and EVIDENT platforms. This is certainly a feasible alternative: rather than have a virtual coach on a computer or smartphone, the coach can be there in person. The advantage with a virtual coach is that it is much easier to bring with me on my tablet when I travel.

In addition, from a security standpoint, a robot like Nao is not allowed access everywhere. We really want to develop a system that is very portable and not bulky. The challenge is to combine all of these types of technology and make them work together so we can interact with the artificial intelligence based on scenarios that are best suited to an elderly individual on different types of devices.

Who is participating in the Empathic project?

DPD: The Universidad del Pais Vasco in Bilbao is coordinating the project and is contributing to the dialogue aspect. We are complementing this aspect by providing the technology for vocal biometrics and emotional and face recognition, as well as the avatar’s physical appearance. The industrial partners involved in the project are Osatek, Tunstall, Intelligent Voice and Acapela. Osatek is the largest Spanish company specializing in connected objects for patient monitoring and home care. Tunstall also specializes in this type of equipment. Intelligent Voice and Acapela, on the other hand, will primarily provide assistance in voice recognition and speech synthesis. In addition, Osatek and Intelligent Voice will be responsible for ensuring the servers are secure and storing the data. The idea is to create a prototype that they will then be able to use.

Finally, the e-Seniors association, the Seconda Università in Italy and the Oslo University Hospital will provide the 300 user-testers who will test the EMPATHIC interface throughout the project. This provides us with significant statistical validity.

What are the major steps in the project?

DPD: The project was launched in November 2017 and our research will last three years. We are currently preparing for an initial acquisition stage, which will take place in the spring of 2018. For this first stage, we will make recordings with a “Wizard of Oz” system in the EVIDENT living lab at Télécom SudParis. With this system, we can simulate the virtual agent with a person who is located in another room in order to collect data on the interaction between the user and the agent. This system allows us to create more complex scenarios that our artificial intelligence will then take into account. This first stage will also provide us with additional audiovisual data and allow us to select the best personifications of virtual coaches to offer to users. I believe this is important.

What are the potential opportunities and applications?

DPD: The research project is intended to produce a prototype. The idea behind having industrial partners is for them to quickly obtain and adapt this prototype to suit their needs. The goal of this project is not to produce a specific innovation, but rather to combine and adapt all these different types of technology so that they reach their potential through specific cases—primarily for assisting the elderly.

Another advantage of EMPATHIC is that it reveals many possible applications in other areas, including video games and social networks. A lot of people interact with avatars in virtual worlds today, as in the video game Second Life, which is where one of our avatars, Bob the Hawaiian, comes from. EMPATHIC could therefore definitely be adapted outside the medical sector.

Do not confuse virtual and reality

The “Uncanny Valley” is a central concept in artificial intelligence and robotics. It theorizes that if a robot or AI possesses a human form (whether physical or virtual), it should not resemble humans too closely, unless this resemblance is perfectly achieved from every angle. “As soon as there is a strong resemblance with humans, the slightest fault is strange and disturbing. In the end, it becomes a barrier in the interactions with the machine,” explains Patrick Horain, a research engineer at Télécom SudParis specialized in digital imaging.

Dijana Petrovska-Delacretaz has integrated this aspect and is developing the EMPATHIC coaches accordingly: “It is important to keep this in mind. In general, it is a scientific challenge to make a photo-realistic face that actually looks like a loved one without being able to distinguish the real from the fake. But in the context of our project, it is a problem. It could be disturbing or confusing for an elderly person to interact with an avatar that looks like someone real or even a loved one. It is always preferable to propose virtual coaches that are clearly not human.”

[divider style=”normal” top=”20″ bottom=”20″]


 

Papaya

PAPAYA: a European project for a confidential data analysis platform

EURECOM is coordinating the three-year European project PAPAYA, launched on May 1st. Its mission: enable cloud services to process encrypted or anonymized data without having to access the unencrypted data. Melek Önen, a researcher specialized in applied cryptography, is leading this project. In this interview she provides more details on the objectives of this H2020 project.

 

What is the objective of the H2020 Papaya project?

Melek Önen: Small and medium-sized companies do not always have the means to internally process the large amounts of often personal or confidential data they hold. They therefore use cloud services to simplify the task, but in so doing they lose control over their data. Our mission with the PAPAYA project (which stands for PlAtform for PrivAcY-preserving data Analytics) is to succeed in using data processing and classification methods while keeping the data encrypted and/or anonymized. This would offer companies greater security and confidentiality when they use third-party cloud services, since these services could no longer access the unencrypted data. This has become a major issue since the European General Data Protection Regulation (GDPR) came into effect.

What is your main challenge in this project?

MÖ: Today, when we encrypt data the traditional way, it is protected in a randomized manner: the ciphertext reveals nothing about the original data, and it is impossible to carry out operations on data in this state. In 2009, cryptography researcher Craig Gentry proposed a novel method called fully homomorphic encryption. Using this method, several operations can be carried out directly on encrypted data. The problem is that processing data this way is not very efficient in terms of memory usage and the computations required. The majority of our work will involve designing variants of the data processing algorithms that will be compatible with data protected by homomorphic encryption.
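The homomorphic property itself can be illustrated with a deliberately simplified sketch: a one-time pad over integers, which happens to be additively homomorphic. This is not Gentry's scheme and offers no real-world security; it only shows how computing on ciphertexts can mirror computing on plaintexts.

```python
import random

# Toy illustration of the additively homomorphic property: a ciphertext is
# m + k mod N for a secret random pad k. Adding two ciphertexts yields an
# encryption of the sum under the combined pad. This is NOT fully
# homomorphic encryption and is not secure in practice; it only shows the
# idea that computation on ciphertexts can mirror computation on plaintexts.

N = 2**31 - 1  # public modulus

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

k1, k2 = random.randrange(N), random.randrange(N)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# The cloud adds the ciphertexts without ever seeing 20 or 22...
c_sum = (c1 + c2) % N

# ...and the data owner decrypts the result with the combined key.
print(decrypt(c_sum, (k1 + k2) % N))  # 42
```

Real schemes such as Paillier (additions) or Gentry-style fully homomorphic encryption (additions and multiplications) achieve this property with genuine security, at the cost of the heavy memory and computation overheads mentioned above.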

Can you explain how you design variants of data processing algorithms?

MÖ: For example, a neural network contains both linear operations, which are easily managed with appropriate encryption methods, and non-linear operations, which we do not know how to apply to encrypted data. Yet the network’s accuracy depends on these non-linear operations, so we cannot do without them. What we must do in this situation is approximate these operations, which are actually functions, using other, simpler functions with similar behavior. The more effective this approximation, the more accurate the neural network, and we can therefore process the encrypted data.
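As a rough illustration of this approximation idea, the sketch below replaces the sigmoid activation with its degree-3 Taylor polynomial, built only from additions and multiplications, the operations a homomorphic scheme can evaluate. This is a textbook approximation, not one of the specific variants designed in PAPAYA.

```python
import math

# Illustrative sketch: replace the non-linear sigmoid with a low-degree
# polynomial (its degree-3 Taylor expansion around 0), since additions and
# multiplications are what homomorphic encryption supports. The better the
# approximation, the less accuracy the encrypted network loses.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def sigmoid_poly(x):
    # Degree-3 Taylor series of the sigmoid around x = 0
    return 0.5 + x / 4 - x**3 / 48

# The approximation stays close on the range where activations typically live
max_err = max(abs(sigmoid(x / 10) - sigmoid_poly(x / 10)) for x in range(-10, 11))
print(f"max error on [-1, 1]: {max_err:.4f}")
```

In practice the polynomial would be evaluated on ciphertexts rather than on plain floats, and the degree of the polynomial becomes a trade-off between accuracy and the cost of homomorphic multiplications.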

What use cases do you plan to work on?

MÖ: We have two different use cases. The first is medical data encryption. This situation affects many hospitals that have patients’ data but are not large enough to have their own internal data processing services. They therefore use cloud services. The second case involves web analytics and could be useful for the tourism sector, which analyzes the way tourists move from one place of interest to another; data collected from smartphone users could be very valuable here. For both cases, we imagine several progressive scenarios: first, a single data owner who holds all the users’ unencrypted data, encrypts it with one key and transfers it to the cloud; next, several owners with several keys; finally, data that comes directly from the users.

Who else is working on this project with you?

MÖ: PAPAYA brings together six partners, with EURECOM coordinating the action. The companies assisting us with the use cases and the design of the new platform are Atos, IBM Haifa Research Lab, Orange Labs, and MediaClinics, an SME that makes sensors for monitoring patients in hospitals. On the academic side, we are working with Karlstad University in Sweden. We will work together for the entire three-year project.

MeMAD

Putting sound and images into words

Can videos be turned into text? MeMAD, an H2020 European project launched in January 2018 and set to last three years, aims to do precisely that. Far from being out of step with a world where video content plays an ever more important role, the initiative addresses a pressing issue of our time. MeMAD strives to develop technology that can fully describe every aspect of a video: how people move, the background music, the dialogue, how objects move in the background, and so on. The goal: create a multitude of metadata for a video file so that it is easier to find in databases. Benoît Huet, a researcher in artificial intelligence technologies at EURECOM, one of the project partners, talks to us in greater detail about MeMAD’s objectives and the scientific challenges facing the project.

 

Software that automatically describes or subtitles videos already exists. Why devote a Europe-wide project such as MeMAD to this topic?

Benoît Huet: It’s true that existing applications already address some aspects of what we are trying to do, but they are limited in terms of usefulness and effectiveness. When it comes to creating a written transcript of the dialogue in a video, for example, automatic software makes mistakes. If you want correct subtitles you have to rely on human labor, which comes at a high cost; many audiovisual documents aren’t subtitled because producing subtitles is too expensive. Our aim with MeMAD is, first, to go beyond the current state of the art in automatic dialogue transcription and, beyond that, to create comprehensive technology that can also automatically describe scenes, atmospheres and sounds, and identify actors, types of shots, and so on. Our goal is to describe all audiovisual content in a precise way.

And why is such a high degree of accuracy important?

BH: First of all, in its current form audiovisual content is difficult to access for certain communities, such as blind or visually impaired people and people who are deaf or hard of hearing. By providing a written description of a scene’s atmosphere and its different sounds, we could enhance the experience of a film or documentary for people with hearing problems; for the visually impaired, the written descriptions could be read aloud. There is also tremendous potential for creators of multimedia content and journalists, since fully describing videos and podcasts in writing makes them much easier to find in document archives. Descriptions may also interest anyone who wants to know a little about a film before watching it.

The National Audiovisual Institute (INA), one of the project’s partners, possesses extensive documentary and film archives. Can you explain exactly how you are working with this data?

BH: At EURECOM we have two teams working on these documents within the MeMAD project. The first focuses on information extraction. It uses technology based on deep neural networks to recognize emotions, analyze how objects and people move, study the soundtrack, and so on: in short, everything that creates the overall atmosphere. The scientific work focuses especially on developing deep neural network architectures that extract the relevant metadata from the information contained in the scene. The INA also provides us with concrete situations and the experience of its archivists, which help us understand which metadata is valuable for searching within the documents. The second team, meanwhile, focuses on knowledge engineering: they are working on well-structured descriptions, indexes and everything required to make it easier for the end user to retrieve the information.
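The output of the two teams’ work can be pictured as structured metadata records attached to each video segment. The record below is purely illustrative (the field names are hypothetical, not MeMAD’s actual schema), but it shows the kind of searchable description the project targets:

```python
import json

# Hypothetical metadata record for one video segment; the field names
# are illustrative, not MeMAD's actual schema.
segment = {
    "video_id": "archive-0001",
    "start": "00:12:04.000",
    "end": "00:12:19.500",
    "transcript": "spoken dialogue goes here",
    "shot_type": "close-up",
    "people_on_screen": ["interviewee"],
    "audio": {"music": "tense strings", "speech_present": True},
}

# Serialized this way, the record can be indexed and queried in an archive.
print(json.dumps(segment, indent=2))
```

Once every segment of an archive carries a record like this, a text query ("close-up", a person’s name, a type of music) can retrieve the exact moment in the video.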

What makes the project challenging from a scientific perspective?

BH: What’s difficult is proposing something that is both comprehensive and generic. Today our approaches perform well in terms of the quality and relevance of descriptions, but they always rely on a certain type of data. For example, we know how to train the technology to recognize all existing car models, regardless of the angle of the image, the lighting of the scene, and so on. But if a new car model comes out tomorrow, we won’t be able to recognize it, even if it is right in front of us. The same problem exists for political figures and celebrities. Our aim is to create technology that works not only on documentaries and films of the past, but that will also be able to understand and recognize prominent figures in the documentaries of the future. This ability to progressively increase knowledge represents a major challenge.

What research have you drawn on to help meet this scientific challenge?

BH: We can draw on over 20 years of research experience on audiovisual content, which is what justifies our participation in the MeMAD project. For example, we have already worked on creating automatic summaries of videos; I recently worked with IBM Watson to automatically create a trailer for a Hollywood film. I am also involved in the NexGenTV project along with Raphaël Troncy, another contributor to MeMAD, in which we demonstrated how to automatically recognize the individuals on screen at a given moment. All of this has provided us with potential answers and approaches for meeting MeMAD’s objectives.


HyBlockArch

HyBlockArch: hybridizing the blockchain for the industry of the future

Within the framework of the German-French Academy for the Industry of the Future, a partnership between IMT and Technische Universität München (TUM), the HyBlockArch project examines the future of the blockchain. It aims to adapt this technology to an industrial scale to create a powerful tool for companies. To accomplish this, the teams led by Gérard Memmi (Télécom ParisTech) and Georg Carle (TUM) are working on new blockchain architectures. Gérard Memmi shares his insight.

 

Why are you looking into new blockchain architectures?

Gérard Memmi: Current blockchain architectures are limited in terms of performance in the broadest sense: turnaround time, memory, energy… In many cases, this hinders the adoption of the blockchain in Industry 4.0. Companies would like faster validation times, or to be able to put even more information into a blockchain block. A bank that wants to track an account history over several decades will be concerned about the number of blocks in the chain and a possible increase in block latency. Yet today we cannot foresee how blockchain architectures will behave many years from now. There is also the energy issue: the need to reduce the consumption caused by the proof of work required to enter data into a blockchain, while still ensuring a comparable level of security. We must keep in mind that bitcoin’s proof of work consumes as much electrical energy as a country like Venezuela.
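The energy cost mentioned above comes from proof of work itself: finding a nonce whose hash meets a difficulty target requires brute-force search, and each extra hex digit of difficulty multiplies the expected number of hash evaluations by 16. A minimal sketch of the mechanism (toy code, not a production miner):

```python
import hashlib
import itertools
import time

def mine(payload: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

# Each extra leading zero multiplies the expected work (and energy) by 16.
for difficulty in range(1, 5):
    start = time.perf_counter()
    nonce = mine("block-payload", difficulty)
    print(f"difficulty {difficulty}: nonce {nonce} "
          f"found in {time.perf_counter() - start:.3f}s")
```

Real networks use far higher difficulties, which is why reducing or replacing proof of work without weakening security is a central performance question.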

What type of architecture are you trying to develop with the HyBlockArch project?

GM: We are working on hybrid architectures: multi-layer architectures that make it possible to reach an industrial scale. We start with a blockchain protocol in which each node of the ledger communicates with a small data storage network on a layer above. That upper layer is not necessarily a blockchain protocol; it can operate slightly differently while still maintaining similar properties. The structure is transparent for users, who do not notice a difference. The miners who perform the proof of work required to validate data only see the blockchain layer, which is an advantage for them: they can work faster without having to take the upper layer of the architecture into account.

What would the practical benefits be for a company?

GM: For a company, this would mean smart contracts could be created more quickly, and the computer operations relying on this architecture would have shorter latency times, resulting in a broader scope of application. The private blockchain is very useful in the field of logistics, for example: each time a product changes hands, say from the vendor to the carrier, the operation is recorded in the blockchain. A hybrid architecture records this information more quickly and at a lower cost for companies.

This project is being carried out in the framework of the German-French Academy for the Industry of the Future. What is the benefit of this partnership with Technische Universität München (TUM)?

GM: Our German colleagues are developing a platform that measures the performance of the different architectures. We can therefore determine the optimal architecture in terms of energy savings, fast turnaround and security for typical uses in the industry of the future. We contribute the more theoretical aspects: we analyze smart contracts to develop more advantageous protocols, and we work on the proof-of-work mechanisms for recording information in the blockchain.

What does this transnational organization represent in the academic field?

GM: It creates a European dynamic around this issue. In March we launched BART, a blockchain alliance between French institutes. By working with TUM on this topic, we are developing a Franco-German synergy in an area that only a few years ago appeared at research conferences as a minor issue, confined to a single session. The blockchain now has scientific events all to itself. This new discipline is booming, and through the HyBlockArch project we are participating in this growth at the European level.

 

C2Net

C2Net: supply chain logistics on cloud nine

A cloud solution to improve supply chain logistics? This is the principle behind the European C2Net project. Launched on January 1, 2015, and completed on December 31, 2017, the project successfully demonstrated how a cloud platform can enable the various players in a supply chain to better anticipate and manage future problems. To do so, C2Net drew on research on interoperability and on the automation of alerts using data taken directly from companies in the supply chain. Jacques Lamothe and Frédérick Benaben, researchers in industrial engineering specializing in logistics and information systems respectively, give us an overview of the work they carried out at IMT Mines Albi on the C2Net project.

 

What was the aim of the C2Net project?

Jacques Lamothe: The original idea was to provide SMEs with cloud tools for advanced supply chain planning. The goal was to identify, well in advance, the inventory management problems companies might face. To do this, we had to work on three parts: recovering data from SMEs, generating alerts for issues to be resolved, and monitoring planning activity to see whether everything went as intended. It wasn’t easy, because we had to solve interoperability issues, meaning data exchange between the different companies’ information systems, and we also had to understand the business rules of the supply chain players in order to determine which alerts are relevant.

Could you give us an example of the type of problem a company may face?

Frédérick Benaben: One thing that can happen is that a supplier is only able to manufacture 20,000 units of an item when the SME is expecting 25,000. This strains the supply chain, and solutions must be found, such as asking suppliers in other countries whether they can produce more to compensate. It’s a bit like an ecosystem: when there’s a problem in one part, all the players in the supply chain are affected.
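An alert of this kind boils down to comparing forecast demand with committed capacity for each item in the chain. A minimal sketch of the idea (illustrative code, not the C2Net platform itself):

```python
from dataclasses import dataclass

@dataclass
class SupplyPlan:
    item: str
    demand: int    # units expected downstream
    capacity: int  # units the supplier can actually produce

def alerts(plans):
    """Flag items where committed capacity falls short of forecast demand."""
    return [
        f"ALERT {p.item}: shortfall of {p.demand - p.capacity} units"
        for p in plans
        if p.capacity < p.demand
    ]

plans = [
    SupplyPlan("item-A", demand=25_000, capacity=20_000),  # the case above
    SupplyPlan("item-B", demand=10_000, capacity=12_000),  # no problem here
]
print(alerts(plans))  # → ['ALERT item-A: shortfall of 5000 units']
```

The hard part in practice is not this comparison but feeding it reliably with data from many heterogeneous company systems, which is where the interoperability work below comes in.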

Jacques Lamothe: What we actually realized is that, a lot of the time, certain companies have very effective tools to assess demand on one side, while other companies have very effective tools to measure production on the other, but it is difficult to establish a dialogue between these two sides. Within the chain, the manufacturer does not necessarily notice when demand drops, and vice versa. This is one of the things the C2Net demonstrator helped correct in the use case we developed with the companies.

And what were the companies’ expectations for this project?  

Jacques Lamothe: For the C2Net project, each academic partner brought an industrial partner it had already worked with, and each of these SMEs had a different set of problems. In France, our partner was Pierre Fabre, who were very interested in data collection and the creation of an alert system. On the Spanish side, this was less of a concern than optimizing planning. Every company has its own issues, and the use cases the industrial partners brought us meant we had to find solutions for everyone: from generating data on their supply chains to creating tools for managing alerts or planning.

To what extent has your research work had an impact on the companies’ structures and the way they are organized?

Frédérick Benaben: What was smart about the project is that we did not propose the C2Net demonstrator as a cloud platform that would replace companies’ existing systems. Everything we did sits a level above the organizations, so that they are not impacted, and integrates with the existing systems, especially the information systems already in place. The companies therefore did not have to change anything. This also explains why we had to work so hard on interoperability.

What did the work on interoperability involve?

Frédérick Benaben: There were two important interoperability issues. The first was being able to plug into existing systems in order to collect information and understand what was collected. A company may have different subcontractors, all of whom use different data formats. How can a company understand and use the data of subcontractor A, provided in one language, and that of subcontractor B, provided in another? We therefore had to propose data reconciliation plans.

The second issue involves interpretation. Once the data has been collected and everyone is speaking the same language, or at least can understand one another, how can common references be established? For example, having everyone express quantities of liquids in liters instead of vials or bottles. Or, when a subcontractor announces that an item may be out of stock, what does this really mean? How far in advance does the subcontractor notify its customers? Does everyone use the same definition? All these aspects had to be harmonized.
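The "common references" step amounts to maintaining conversion tables so that every partner’s quantities land in the same unit before comparison. A minimal sketch, with hypothetical conversion factors:

```python
# Hypothetical reconciliation table: each subcontractor reports quantities
# in its own unit; everything is normalized to liters before comparison.
TO_LITERS = {"liter": 1.0, "vial": 0.005, "bottle": 0.75}

def normalize(records):
    """Convert per-item lists of (quantity, unit) pairs into total liters."""
    return {
        item: sum(qty * TO_LITERS[unit] for qty, unit in entries)
        for item, entries in records.items()
    }

records = {
    # subcontractor A reports vials, subcontractor B reports bottles
    "serum": [(200, "vial"), (4, "bottle")],
}
print(normalize(records))  # → {'serum': 4.0}
```

The conversion factors here are invented for illustration; in practice they would come from agreed reference data shared by the supply chain partners.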

How will these results be used?

Jacques Lamothe: The demonstrator has been installed at the University of Valencia in Spain and should be reused for research projects. As for us, the results have opened up new research possibilities. We want to go beyond a tool that simply detects future problems or notifies companies. One of our ideas is to work on solutions that make it possible to take more or less automated decisions to adjust the supply chain.

Frédérick Benaben: A spin-off has also been created in Portugal, which uses a portion of the data integration mechanisms to offer services to SMEs. And we are still working with Pierre Fabre, whose feedback has been very positive: the demonstrator helped them see that it is possible to do more than they currently can. In fact, we have developed and submitted a partnership research project with them.


 

Atmospheric reactive trace gases: low concentrations, major consequences

Despite only being present in very small quantities, trace gases leave their mark on the composition of the atmosphere. Since they are reactive, they can lead to the formation of secondary compounds, such as ozone or aerosols, that have a significant impact on health and the climate. IMT Lille Douai is a partner in the ACTRIS H2020 project, which aims to carry out long-term observations of trace gases, aerosols and clouds to better understand how they interact with one another and how they impact the climate and air quality.

 

Take some nitrogen, add a dose of oxygen, sprinkle in some argon and a few other inert gases, add a touch of water vapor and a pinch of carbon dioxide and you have the Earth’s atmosphere, or almost! Along with this mix composed of approximately 78% nitrogen, an honorable 21% oxygen, less than 1% argon and 0.04% carbon dioxide, you will also find trace gases with varying degrees of reactivity.  Emitted by both anthropogenic and natural sources, these gases exist in concentrations in the nanogram range, meaning 0.000000001 gram per cubic meter of the atmosphere. Does this mean they are negligible? Not really! “Once emitted these gases are not inert, but reactive,” explains Stéphane Sauvage, a researcher in atmosphere sciences and environmental technology at IMT Lille Douai. “They will react with one another in the atmosphere and lead to the formation of secondary species, such as ozone or certain aerosols that have a major impact on health and the climate.” This is why it is important to be able to identify and measure the precise quantity of these gases in the atmosphere.

ACTRIS (Aerosols, Clouds and Trace Gases Research Infrastructure) is a large-scale H2020 project which brings together 24 countries and over 100 laboratories, including IMT Lille Douai, as part of the ESFRI (European Strategy Forum on Research Infrastructure). By combining ground-based and satellite measurements, the aim is to carry out long-term observations of the composition of the atmosphere to better understand the factors behind the contaminants and their impact on the climate and air quality. In terms of innovation, the project seeks to develop new techniques and methods of observation. “At IMT Lille Douai, we have been developing our skills in ground-based observation of trace gases for many years, which has led to our being identified as contributors with extensive expertise on the topic,” says Stéphane Sauvage.

 

Gases that leave a mark on the atmosphere

Trace gases, which come from automobile exhausts, household heating, agricultural activities and emissions from plants and volcanoes, are good “tracers,” meaning that when they are measured, it is possible to identify their original source. But out of the 200 to 300 different species of trace gases that have been identified, some are still little-known since they are difficult to measure. “There are some very reactive species that play a key role in the atmosphere, but with such short lifetimes or in such low concentrations that we are not able to detect them,” explains Stéphane Sauvage.

Sesquiterpenes, a family of trace gases, are highly reactive. Emitted from vegetation, they play an important role in the atmosphere but remain difficult to quantify with current methods. “These gases have a very short lifetime, low atmospheric concentrations and they degrade easily during sample collection or analysis,” says Stéphane Sauvage.

On the other hand, some species, such as ethane, are well-known and measurable. Ethane results from human activity and has a low level of reactivity, but this does not make it any less problematic. It is present at a non-negligible level on a global scale and has a real impact on the formation of ozone. “We recently published an article in the Nature Geoscience journal about the evolution of this species and we realized that its emissions have been underestimated,” notes Stéphane Sauvage.

 

Complex relationships between aerosols, clouds and trace gases

In addition, by reacting with other atmospheric compounds, trace gases can lead to the formation of aerosols, which are suspensions of fine particles. Due to their capacity to absorb light, these particles impact the climate but can also penetrate the respiratory system leading to serious health consequences. “Although natural and anthropogenic sources are partially responsible for these fine particles, they are also produced during reactions with reactive trace gases through complex processes which are not yet entirely understood,” explains Stéphane Sauvage. This illustrates the importance of the ACTRIS project, which will observe the interactions between trace gases and aerosols, as well as clouds, which are also affected by these compounds.

Read more on I’MTech: What are fine particles?

The measurements taken as part of the ACTRIS project will be passed on to a number of different players including weather and climate operational services, air quality monitoring agencies, the European Space Agency and policy-makers, and will also be used in agriculture, healthcare and biogeosciences. “The ACTRIS infrastructure is currently being built. We will enter the implementation phase in 2019, then the operational phase will begin in around 2025 and will last 25 years,” says Stéphane Sauvage. This is a very long-term project to organize research on a European scale, drawing on the complementary skills of over 100 research laboratories from 24 countries — to take atmospheric sciences to a stratospheric level!

 

[box type=”shadow” align=”” class=”” width=””]

A workshop on data from observations of reactive trace gases

Engineers and researchers from ten European countries met at IMT Lille Douai from 16 to 18 May for the annual ACTRIS project workshop on reactive trace gases. The objective was to review the data collected in Europe in 2017 and to discuss its validity along with the latest scientific and technical developments. All the players involved in making ground-based measurements of trace gases, aerosols and clouds will meet at IMT Lille Douai in October.

[/box]